id: 1,033,354
url: https://en.wikipedia.org/wiki/Guillermo%20Haro
Guillermo Haro Barraza (21 March 1913 – 26 April 1988) was a Mexican astronomer. Through his own astronomical research and the formation of new institutions, Haro was influential in the development of modern observational astronomy in Mexico. Internationally, he is best known for his contribution to the discovery of Herbig–Haro objects.

Early life
Haro was born in Mexico City on 21 March 1913 to Ignacio Haro and Leonor Barraza. He studied philosophy and law at the National Autonomous University of Mexico (UNAM). While working as a reporter for Excélsior, Haro became interested in astronomy after a 1937 interview with Luis Enrique Erro. As a result of his dedication and enthusiasm for astronomy, he was hired by Erro in 1943 as an assistant at the newly founded Observatorio Astrofísico de Tonantzintla. Erro arranged for Haro to further his astronomical training in the United States, at the Harvard College Observatory, Case Observatory (1944), and Yerkes Observatory and McDonald Observatory (1945 to 1947).

Career
Upon his return to Mexico, Haro continued working at the Observatorio Astrofísico de Tonantzintla, where he was responsible for the commissioning of the new 24–31-inch Schmidt camera and where he became involved in the study of extremely red and extremely blue stars. In 1947 he started working for the Observatorio de Tacubaya of the UNAM. Haro made many contributions to observational astronomy. Among them were the detection of a large number of planetary nebulae in the direction of the Galactic Center and the discovery (also made independently by George Herbig) of nonstellar condensations in high-density clouds near regions of recent star formation, now called Herbig–Haro objects. Haro and his co-workers discovered flare stars in the Orion Nebula region, and later in stellar aggregates of different ages. Other major research projects carried out by Haro included the list of 8,746 blue stars in the direction of the north galactic pole, published jointly with W. J. Luyten in 1961; this work was made with the 48-inch Palomar Schmidt using the three-color image technique developed at Tonantzintla. At least 50 of these objects turned out to be quasars (which had not yet been discovered in 1961). Haro's list of 44 blue galaxies, compiled in 1956, was a precursor to the work of Benjamin Markarian and others in searching for such galaxies. Haro also discovered a number of T Tauri stars, one supernova, more than 10 novae, and one comet.

Major accomplishments
Haro was very influential in the development of astronomy in Mexico, not only by virtue of his own astronomical research but also by promoting the development of new institutions. More importantly, he defined modern astrophysical research in Mexico, giving impetus to its initial lines of research and establishing its general scientific policies. With his American colleague George Herbig, he discovered the new class of nebulae that were named Herbig–Haro objects. Haro became a member of the Colegio Nacional at age 40, the youngest person to do so. In 1959, Haro became the first person from Mexico elected to the Royal Astronomical Society. His students included Silvia Torres-Peimbert and Manuel Peimbert. Haro founded the Mexican Academy of Sciences (serving as its first president in 1960) and the National Institute of Astrophysics, Optics and Electronics (an observatory named after him is in the state of Sonora).
Recognition
Haro 11 (H11) is a small galaxy situated in the southern constellation of Sculptor; it is named after Haro, who first included it in a study published in 1956. The Guillermo Haro International Program on Advanced Astrophysical Research at INAOE, created in August 1995, was also named after him. On 21 March 2018, 105 years after his birth, Google featured Haro in a Google Doodle.

Personal life
In 1968, Haro married the journalist and writer Elena Poniatowska, with whom he had two children: Felipe and Paula. He was previously divorced from his first wife, Gladys Learn Rojas. Haro died on 26 April 1988 in Mexico City, and is interred at the Rotonda de las Personas Ilustres of the Panteón Civil de Dolores.

See also
Guillermo Haro Observatory

External links
Guillermo Haro Observatory (in Spanish)
Guillermo Haro International Astrophysics Program
source: Guillermo Haro
categories: [ "Technology" ]
token_count: 956
subcategories: [ "Science and technology awards", "Recipients of the Lomonosov Gold Medal" ]
id: 1,033,489
url: https://en.wikipedia.org/wiki/Barracks
Barracks are buildings used to accommodate military personnel and quasi-military personnel such as police. The English word originates in the 17th century, via French and Italian, from an old Spanish word meaning 'soldier's tent', but today barracks are usually permanent buildings. The word may apply to separate housing blocks or to complete complexes, and the plural form often refers to a single structure and may be treated as singular. The main objective of barracks is to separate soldiers from the civilian population and reinforce discipline, training, and esprit de corps. They have been called "discipline factories for soldiers". Like industrial factories, some are considered to be shoddy or dull buildings, although others are known for their magnificent architecture, such as Collins Barracks in Dublin and others in Paris, Berlin, Madrid, Vienna, or London. From the rough barracks of 19th-century conscript armies, filled with hazing and illness and barely differentiated from the livestock pens that housed the draft animals, to the clean and Internet-connected barracks of modern all-volunteer militaries, the word can have a variety of connotations.

History
Early barracks such as those of the Roman Praetorian Guard were built to maintain elite forces. There are a number of remains of Roman army barracks in frontier forts such as Vercovicium and Vindolanda. From these and from contemporary Roman sources we can see that the basics of life in a military camp have remained constant for thousands of years. In the Early Modern Period, barracks formed part of the Military Revolution that scholars believe contributed decisively to the formation of the nation state by increasing the expense of maintaining standing armies. Large, permanent barracks were developed in the 18th century by the two dominant states of the period: France (the caserne) and Spain (the cuartel). The English term 'barrack', on the other hand, derives from the Spanish word for a temporary shelter erected by soldiers on campaign, barraca. (Because of fears that a standing army in barracks would be a threat to the constitution, barracks were not generally built in Great Britain until 1790, on the eve of the Napoleonic Wars.) Early barracks were multi-story blocks, often grouped in a quadrangle around a courtyard or parade ground. A good example is Berwick Barracks, which was among the first in England to be purpose-built; it was begun in 1717 to the design of the distinguished architect Nicholas Hawksmoor. During the 18th century, the increasing sophistication of military life led to separate housing for different ranks (officers always had larger rooms) and married quarters, as well as the provision of specialized buildings such as dining rooms and cook houses, bath houses, mess rooms, schools, hospitals, armories, gymnasia, riding schools and stables. The pavilion plan concept of hospital design was influential in barrack planning after the Crimean War. The first large-scale training camps were built in the Kingdom of France and the Holy Roman Empire (Germany) during the early 18th century. The British Army built Aldershot camps from 1854. By the First World War, infantry, artillery, and cavalry regiments had separate barracks. The first naval barracks were hulks, old wooden sailing vessels; but these insanitary lodgings were replaced with large naval barracks at the major dockyard towns of Europe and the United States, usually with hammocks instead of beds. These were inadequate for the enormous armies mobilized after 1914.
Hut camps were developed using variations of the eponymous Nissen hut, made from timber or corrugated iron.

Military
In many military forces, both NCO and SNCO personnel will frequently be housed in barracks for service or training. Officers are often charged with ensuring that the barracks and personnel are maintained in an orderly fashion. Junior enlisted personnel and sometimes junior NCOs will often receive less space and may be housed in bays, while senior NCOs and officers may share or have their own rooms. Junior enlisted personnel are typically tasked with the cleanliness of the barracks. The term "garrison town" is a common expression for any town that has military barracks, i.e., a permanent military presence, nearby.

Prison
Prison cell blocks often are built and arranged like barracks, and some military prisons may have "barracks" in their name, such as the United States Disciplinary Barracks at Fort Leavenworth.

Worldwide

Canada
Barracks were used to house troops in forts during the Upper Canadian period. Leading up to and during the War of 1812, Lieutenant-Governor John Graves Simcoe and Major-General Isaac Brock oversaw the construction of Fort York on the shores of Lake Ontario in present-day Toronto. Several British Army barracks built between 1814 and 1815 survive at that site today. Multiple limestone barracks were built half a mile west of Fort York in 1840, only one of which survives. The British Army handed over "New Fort York", as the second fort was called, to the Canadian Militia in 1870, after Confederation. The Stone Frigate, completed in 1820, served briefly as barracks in 1837–38, and was refitted as a dormitory and classrooms to house the Royal Military College of Canada by 1876. The Stone Frigate is a large stone building originally designed to hold gear and rigging from British warships dismantled to comply with the Rush–Bagot Treaty.

Poland
In Poland, barracks usually take the form of a complex of buildings, each constituting a separate entity or housing administrative or business premises. An example is the Barracks Complex in Września.

Portugal
Each of the Portuguese Army bases is referred to as a quartel (barracks). Within a barracks, each of the dormitory buildings is referred to as a caserna (casern). Most of them are regimental barracks, constituting the fixed component of the Army system of forces and being responsible for the training, sustenance and general support of the Army. In addition to the regimental administrative, logistic and training bodies, each barracks can lodge one or more operational units (operational battalions, independent companies or equivalent units). Although there are housing blocks within the perimeter of some regimental barracks, the usual Portuguese practice is for members of the Armed Forces to live outside the military bases with their families, integrated in the local civilian communities. Many of the Portuguese regimental barracks follow a model developed by the old Administrative Commission for the New Infrastructures of the Armed Forces (CANIFA); because of this, they are commonly referred to as "CANIFA-type barracks". These barracks were built in the 1950s and 1960s, following a standardized architectural model, usually with an area of between 100,000 and 200,000 square metres, including a headquarters building, a guard house, a general mess building, an infirmary building, a workshop and garage building, an officer housing building, a sergeant housing building, three to ten rank-and-file caserns, fire ranges and sports facilities.
On average, each CANIFA-type barracks was intended to lodge around 1,000 soldiers together with their armament, vehicles and other equipment.

Russia
Until the end of the 18th century, personnel of the Imperial Russian Army were billeted in civilian homes or accommodated in slobodas in the countryside. The first barracks were built during the reign of Emperor Paul I. For these purposes, Paul I established a one-time land tax based on the amount of land owned by each citizen. The tax was not mandatory, but a person who paid it was permanently exempted from billeting. From the end of 1882, the money collected for exemption from billeting was transferred to the military ministry, which made it possible to step up the construction of barracks for the army. By 1 January 1900, 19,015 barracks had been built, accommodating 94% of the troops.

United Kingdom
In the 17th and 18th centuries there were concerns around the idea of a standing army housed in barracks; instead, the law provided for troops routinely to be billeted in small groups in inns and other locations. (The concerns were various: political, ideological and constitutional, provoked by memories of Cromwell's New Model Army and of the use of troops in the reign of James II to intimidate areas of civil society. Furthermore, grand urban barracks were associated with absolutist monarchies, where they could be seen as emblematic of power sustained through military might; and there was an ongoing suspicion that gathering soldiers together in barracks might encourage sedition.) Nevertheless, some "soldiers' lodgings" were built in Britain at this time, usually attached to coastal fortifications or royal palaces. The first recorded use of the word 'barracks' in this context was for the Irish Barracks, built in the precinct of the Tower of London in 1669. At the Ordnance Office (responsible for construction and upkeep of barracks), Bernard de Gomme played a key role in developing a 'domestic' style of barrack design in the latter half of the 17th century: he provided barrack blocks for such locations as Plymouth Citadel and Tilbury Fort, each with rows of square rooms arranged in pairs on two stories, accommodating a company of some sixty men, four to a room, two to a bed. Standard furnishings were provided, and each room had a grate used for heating and cooking. In England, this domestic style continued to be used through the first half of the eighteenth century; most new barracks of this period were more or less hidden within the precincts of medieval castles and Henrician forts. In Scotland, however, a more demonstrative style was employed following the Jacobite rising of 1715 (as at Ruthven Barracks) and that of 1745 (as seen in the monumental Fort George). This bolder approach gradually began to be adopted south of the border during the eighteenth century (beginning with nearby Berwick, 1717). There was much building in and around the Royal Dockyards at this time: during the Seven Years' War, fears of a land attack led to defensive 'lines' being built around the dockyard towns, and infantry barracks were established within them (e.g. at Chatham, Upper and Lower Barracks, 1756, and Plymouth, six defensible square barracks, 1758–63). The newly constituted Royal Marines were also provided with accommodation in the vicinity of the Dockyards (e.g. Stonehouse Barracks, 1779), becoming the first corps in Britain to be fully provided with its own accommodation. Large urban barracks were still a rarity, though.
In London there was a fair amount of barrack accommodation, but most of it was within the precincts of various royal palaces (as at Horse Guards, 1753). The prominent Royal Artillery Barracks in Woolwich (1776) was one exception (but significantly the Artillery were under the command of the Board of Ordnance rather than of the Army).

In the aftermath of the French Revolution, though, things changed. The size of the army grew from 40,000 to 225,000 between 1790 and 1814 (with the Militia adding a further 100,000). Barrack accommodation at the time was provided for a mere 20,000. To deal with the situation, responsibility for building barracks was transferred in 1792 from the Board of Ordnance to a specialist Barracks Department overseen by the War Office. With a view to dealing with sedition, and perhaps quelling thoughts of revolution, several large cavalry barracks were built in the 1790s: first at Knightsbridge (close to the royal palaces), then in several provincial towns and cities: Birmingham, Coventry, Manchester, Norwich, Nottingham and Sheffield (as well as Hounslow Barracks just west of London). Several smaller cavalry and artillery barracks were established around this time, but very little was built for the infantry; instead, a number of large camps (with wooden huts) were set up, including at Chelmsford, Colchester and Sunderland, as well as at various locations along the south coast. Barrack-masters were appointed; one such was Captain George Manby at the Royal Barracks, Great Yarmouth. Coincidentally, his father, Captain Matthew Manby, had been barrack-master at Limerick.

It was not until some years after the end of the Napoleonic Wars (and post-war recession) that barrack-building began again. John Nash built four as part of his London improvements: Regent's Park and St John's Wood for the Cavalry, Wellington Barracks for the Guards, and St George's Barracks (since demolished) behind the National Gallery. In several instances elsewhere, buildings were converted rather than newly built (or a mixture of the two, as at Cambridge Barracks, Portsmouth, where a new frontage, housing officers, was built in front of a range of warehouses converted to house the men). In response to the Chartist riots, three barracks were established in north-west England in the 1840s: Ladysmith Barracks at Ashton-under-Lyne, Wellington Barracks at Bury and Fulwood Barracks at Preston. A review conducted following the demise of the Board of Ordnance in 1855 noted that only seven barracks outside London had accommodation for more than 1,000 men. This changed with the establishment of large-scale Army camps such as Aldershot (1854) and the expansion of garrison towns such as Colchester; over time in these locations temporary huts were replaced with more permanent barracks buildings. Large-scale camps were not the only way forward, however; from the 1870s, the localisation agenda of the Cardwell Reforms saw new and old barracks established as depots for regional or county brigades and regiments.

The latter part of the 19th century also saw the establishment of a number of naval barracks (an innovation long resisted by the Royal Navy, which had tended to accommodate its sailors afloat, either on their ships or else in hulks moored in its harbours). The first of these, Keyham Barracks in Devonport (later HMS Drake), was begun in 1879 and only completed in 1907.
During the 20th century, activity ranged from the need for speedy expansion during the First World War (when large camps such as Catterick were established) to the closure of many barracks in the interwar period. Many of those that remained were rebuilt in the 1960s, either substantially (as happened at Woolwich, behind the facade) or entirely (as at Hyde Park and at Chelsea – built 1863, demolished and rebuilt 1963, closed 2008). There has been an ongoing focus on improving the quality of barracks accommodation; since the 1970s several former RAF bases have been converted to serve as Army barracks, in place of some of the more cramped urban sites. Today, generally, only single and unmarried personnel, or those who choose not to move their families nearby, live in barracks. Most British military barracks are named after battles, military figures or the locality.

United States
In basic training, and sometimes follow-on training, service members live in barracks. Formerly, the U.S. Marine Corps had gender-separate basic training units. Currently, all services have training where male and female recruits share barracks but are separated during personal time and lights out. All the services integrate male and female members following boot camp and first assignment. After training, unmarried junior enlisted members will typically reside in barracks. During unaccompanied, dependent-restricted assignments, non-commissioned and commissioned officers may also be required to live in barracks. Amenities in these barracks increase with the rank of the occupant. Unlike the other services, the U.S. Air Force officially uses the term "dormitory" to refer to its unaccompanied housing. During World War II, many U.S. barracks were made of inexpensive, sturdy and easy-to-assemble Quonset huts, which resembled Native American long houses (having a rounded roof) but were made out of metal.

See also
Cantonment, a temporary or semi-permanent military quarters
B hut
Barkas, Hyderabad

References
Black, Jeremy, A Military Revolution?: Military Change and European Society, 1550–1800 (London, 1991)
Dallemagne, François, Les casernes françaises (1990)
Douet, James, British Barracks, their social and architectural importance, 1660–1914 (London, 1997)
Roberts, Michael, The Military Revolution, 1560–1660 (Belfast, 1956); reprinted with some amendments in Rogers, Clifford, ed., The Military Revolution Debate
Rogers, Clifford, ed., The Military Revolution Debate: Readings on the Military Transformation of Early Modern Europe (Boulder, 1995)
1911 Encyclopædia Britannica

External links
Royal Engineers Museum – Military Works (Barrack construction)
source: Barracks
categories: [ "Biology" ]
token_count: 3,319
subcategories: [ "Behavioural sciences", "Behavior", "Total institutions" ]
id: 1,033,633
url: https://en.wikipedia.org/wiki/Baden%20Powell%20%28mathematician%29
Baden Powell, MA FRS FRGS (22 August 1796 – 11 June 1860) was an English mathematician and Church of England priest. He held the Savilian Chair of Geometry at the University of Oxford from 1827 to 1860. Powell was a prominent liberal theologian who put forward advanced ideas about evolution.

Origins
Baden Powell II was born at Stamford Hill, Hackney, in London. His father, Baden Powell I (1767–1841), of Langton and Speldhurst in Kent, was a wine merchant who served as High Sheriff of Kent in 1831 and as Master of the Worshipful Company of Mercers in 1822. His mother was Hester Powell (1776–1848), his father's paternal first cousin and a daughter of James Powell (1737–1824) of Clapton, Hackney, Middlesex, Master of the Worshipful Company of Salters in 1818. The Powell family can be traced back to the early 16th century, when they were yeomen farmers at Mildenhall in Suffolk. Baden Powell II's great-grandfather, David Powell (1725–1810) of Homerton, Middlesex, a second son, migrated to the City of London aged 17, subsequently going into business as a merchant at Old Broad Street and buying the manor of Wattisfield in Suffolk. In 1740 a branch of the family bought the Whitefriars Glass works. The name Baden originated with Susanna Baden (1663–1737), the maternal grandmother of David Powell of Homerton and one of the ten children of Andrew Baden (1637–1716), a Mercer who served as Mayor of Salisbury in 1682.

Education
Powell was admitted as an undergraduate at Oriel College, Oxford, in 1814, and graduated with a first-class honours degree in mathematics in 1817.

Ordination
Powell was ordained as a priest of the Church of England in 1821, having served as curate of Midhurst, Sussex. His first living was as Vicar of Plumstead, Kent, of which the advowson was owned by his family. He immediately began his scientific work there, starting with experiments on radiant heat.

Marriages and children
Powell married three times and had fourteen children in total. His widow changed the last name of the surviving children of his third marriage to "Baden-Powell". Powell's first marriage, on 21 July 1821 to Eliza Rivaz (died 13 March 1836), was childless. His second marriage, on 27 September 1837 to Charlotte Pope (died 14 October 1844), produced one son and three daughters:
Charlotte Elizabeth Powell (14 September 1838 – 20 October 1917)
Baden Henry Baden-Powell, FRSE (23 August 1841 – 2 January 1901)
Louisa Ann Powell (18 March 1843 – 1 August 1896)
Laetitia Mary Powell (4 June 1844 – 2 September 1865)
His third marriage, on 10 March 1846 (at St Luke's Church, Chelsea) to Henrietta Grace Smyth (3 September 1824 – 13 October 1914), a daughter of Admiral Smyth, produced seven sons and three daughters:
Henry Warington Baden-Powell (3 February 1847 – 24 April 1921), a naval officer, a fellow of the Royal Geographical Society and a King's Counsel (KC)
Sir George Smyth Baden-Powell (24 December 1847 – 20 November 1898), a politician and Conservative MP (1885–1898)
Augustus Smyth Powell (1849–1863)
Francis (Frank) Smyth Baden-Powell (29 July 1850 – 25 December 1933), an artist who exhibited at the Royal Academy of Arts
Henrietta Smyth Powell (28 October 1851 – 9 March 1854)
John Penrose Smyth Powell (21 December 1852 – 14 December 1855)
Jessie Smyth Powell (25 November 1855 – 24 July 1856)
Robert Stephenson Smyth Baden-Powell, 1st Baron Baden-Powell (22 February 1857 – 8 January 1941), an army officer, writer and a founder of the World Scouting Movement and (with his sister Agnes) of the Girl Guides
Agnes Smyth Baden-Powell (16 December 1858 – 2 June 1945), co-founder of the Girl Guides
Baden Fletcher Smyth Baden-Powell (22 May 1860 – 3 October 1937), an army officer, aviator and president of the Royal Aeronautical Society

Shortly after Powell's death in 1860, his wife renamed the remaining children of his third marriage "Baden-Powell"; the name was eventually legally changed by royal licence on 30 April 1902. Baden Henry Powell is often also referred to as Baden Henry Baden-Powell, and was using this name by the 1891 census.

Evolution
Powell was an outspoken advocate of the constant uniformity of the laws of the material world. His views were liberal, and he was sympathetic to evolutionary theory long before Charles Darwin had revealed his ideas. He argued that science should not be placed next to scripture, or the two approaches would conflict; in his own version of Francis Bacon's dictum, he contended that the book of God's works was separate from the book of God's word, claiming that moral and physical phenomena were completely independent. His faith in the uniformity of nature (excepting man's mind) was set out in a theological argument: if God is a lawgiver, then a "miracle" would break the lawful edicts that had been issued at Creation. Therefore, a belief in miracles would be entirely atheistic. Powell's most significant works defended, in succession, the uniformitarian geology set out by Charles Lyell and the evolutionary ideas of Vestiges of the Natural History of Creation, published anonymously by Robert Chambers, which applied uniform laws to the history of life, in contrast to more respectable ideas such as catastrophism, which involved a series of divine creations. "He insisted that no tortured interpretation of Genesis would ever suffice; we had to let go of the Days of Creation and base Christianity on the moral laws of the New Testament." The boldness of Powell and other theologians in dealing with science led Joseph Dalton Hooker to comment in a letter to Asa Gray dated 29 March 1857: "These parsons are so in the habit of dealing with the abstractions of doctrines as if there was no difficulty about them whatever, so confident, from the practice of having the talk all to themselves for an hour at least every week with no one to gainsay a syllable they utter, be it ever so loose or bad, that they gallop over the course when their field is Botany or Geology as if we were in the pews and they in the pulpit. Witness the self-confident style of Whewell and Baden Powell, Sedgwick and Buckland." William Whewell, Adam Sedgwick and William Buckland opposed evolutionary ideas. When the idea of natural selection was mooted by Charles Darwin and Alfred Russel Wallace in their 1858 papers to the Linnean Society, both Powell and his brother-in-law William Henry Flower thought that natural selection made creation rational.
Essays and Reviews
He was one of seven liberal theologians who produced a manifesto titled Essays and Reviews around February 1860, which among other things joined in the debate over On the Origin of Species. These Anglicans included Oxford professors, country clergymen, the headmaster of Rugby School and a layman. Their declaration that miracles were irrational stirred up unprecedented anger, drawing much of the fire away from Charles Darwin. Essays sold 22,000 copies in two years, more than the Origin sold in twenty, and sparked five years of increasingly polarised debate, with books and pamphlets furiously contesting the issues. Referring to "Mr Darwin's masterly volume" and restating his argument that belief in miracles is atheistic, Baden Powell wrote that the book "must soon bring about an entire revolution in opinion in favour of the grand principle of the self-evolving powers of nature." He would have been on the platform at the 1860 Oxford evolution debate of the British Association for the Advancement of Science, a highlight of the reaction to Darwin's theory; Huxley's antagonist Wilberforce was also the foremost critic of Essays and Reviews. Powell, however, died of a heart attack a fortnight before the meeting. He is buried in Kensal Green Cemetery, London.

Works
1837: History of Natural Philosophy from the Earliest Periods to the Present Time, published by Longman, Brown, Green, and Longmans
1838: The Connexion of Natural and Divine Truth: Or, The Study of the Inductive Philosophy, Considered as Subservient to Theology, published by J.W. Parker
1841: A General and Elementary View of the Undulatory Theory, as Applied to the Dispersion of Light, and Some Other Subjects: Including the Substance of Several Papers, Printed in the Philosophical Transactions, and Other Journals, published by J.W. Parker
1854: (as editor) Lectures on Polarized Light: Together with a Lecture on the Microscope, Delivered Before the Pharmaceutical Society of Great Britain, and at the Medical School of the London Hospital, by Jonathan Pereira, published by Longman, Brown, Green, and Longmans
1859: The Order of Nature: Considered in Reference to the Claims of Revelation: A Third Series of Essays, published by Longman, Brown, Green, Longmans, & Roberts

Papers to the Royal Society, the Ashmolean Society and others
1828 "The elements of curves: comprising, I. The geometrical principles of the conic sections; II. An introduction to the algebraic theory of curves; designed for the use of students in the University"
1829 "A short treatise on the principles of the differential and integral calculus"
1830 "An elementary treatise on the geometry of curves and curved surfaces, investigated by the application of the differential and integral calculus"
1832 "The present state and future prospects of mathematical and physical studies in the University of Oxford"
1833 "A short elementary treatise on experimental and mathematical optics"
1834 "On the achromatism of the eye " 1836 "On the theory of ratio and proportion, as treated by EUCLID, including an inquiry into the nature of quantity " 1836 "Observations for determining the refractive indices for the standard rays of the solar spectrum in various media " 1837 "An historical view of the progress of the physical and mathematical sciences from the earliest ages to the present times " 1837 "On the nature and evidence of the primary laws of motion " 1838 "Additional observations for determining the refractive indices for definite rays of the solar spectrum in several media " 1839 "A second supplement to observations for determining the refractive indices for definite rays of the solar spectrum in several media " 1841 "A general and elementary view of the undulatory theory, as applied to the dispersion of light and some other subjects... " 1842 "History of natural philosophy, from the earliest periods to the present time " 1842 "On the theory of parallel lines " 1842 "On necessary and contingent truth, considered in regard to some primary principles of mathematical and mechanical science... " 1849 "An essay on the relation of the several parts of a mathematical science to the fundamental idea therein contained... " 1850 "On irradiation" 1854 "Lectures on polarized light, together with a lecture on the microscope ... " with Jonathan Pereira 1855 "Essays on the spirit of the inductive philosophy, the unity of worlds, and the philosophy of creation " 1857 "Biographies of distinguished scientific men", by Francois ARAGO; translated (from the French) by William Henry SMYTH, Baden POWELL, and Robert GRANT Books published 1829: A Short Treatise on the Principles of the Differential and Integral Calculus 1837: On the Nature and Evidence of the Primary Laws of Motion 1839: Tradition Unveiled: Or, an Exposition of the Pretensions and Tendency of Authoritative Teaching in the Church 1841: The Protestant's Warning and Safeguard in the Present Times 1841: A General and Elementary View of the Undulatory Theory, As Applied to the Dispersion of Light, and Some Other Subjects, Including the substance of several papers, printed in the Philosophical Transactions, and other journals. 1855: The Unity of Worlds and of Nature: Three Essays on the Spirit of Inductive Philosophy; the Plurality of Worlds; and the Philosophy of Creation 1856: Christianity without Judaism. Two sermons, London – Longman, Brown, Green Longmans and Roberts via HathiTrust 1859: The Order of Nature: Considered in Reference to the Claims of Revelation: A Third Series of Essays Publications Theology 1833 Revelation and Science. 1834 To the Editor of the British Critic. 1836 Remarks on Dr. Hampden, &c. 1838 Connection of Natural and Divine Truth 1839 Tradition Unveiled .... London and America. 1840 Supplement to Tradition Unveiled. Ditto ditto. 1841 State Education. 1841 The Protestant's Warning. 1843–4 Three Articles on Anglo-Catholicism in British and Foreign Review, Nos. 31, 32, 33. 1845 Kitto's Cyclopaedia of Biblical Literature – Articles, "Creation","Deluge", "Lord's Day", "Sabbath". 1845 Life of Blanco White December Westminster Review 1845 Tendency of Puseyism June Ditto. 1846 Mysticism and Scepticism . . . July Edinburgh Review. 1847 Protestant Principles Oxford Protestant Magazine 1847 On the Study of Christian Evidences . . Edinburgh Review. 1848 Freedom of Opinion Oxford Protestant Magazine 1848 Church and State Ditto. 1848 Free Enquiry and Liberality. . Kitto's Journal of Sacred Literature. 
1848 The Law and the Gospel – Ditto.
1848 On the Application and Misapplication of Scripture – Ditto.
1850 The State Church – a sermon before the University
1855 Unity of Worlds – two editions
1856 On the Burnett Prizes, and the Study of Natural Theology – Oxford Essays
1857 Christianity without Judaism – 2nd series of essays, two editions
1859 The Order of Nature – 3rd series of essays
1860 On the Study of the Evidences of Christianity, in Essays and Reviews

Science
1828 Elements of Curves, and two supplements
1829 Differential Calculus, and application to curves
1830 On Examination Statutes
1832 On Mathematical Studies
1833 Elementary Treatise on Optics
1834 History of Natural Philosophy – Cabinet Cyclopaedia
1841 Treatise on the Undulatory Theory applied to Dispersion
1851 Lecture Synopses in four parts – Geometry, Algebra, Conic Sections, Newton
1857 Translation of Arago's Autobiography
1857 Translations of Arago's Lives of Young, Malus, and Fresnel, with optical notes

Papers in Philosophical Transactions of the Royal Society
1825 On Radiant Heat
1826 Second on Radiant Heat
1834 On Repulsion of Heat
1835 On Dispersion of Light
1836 Second on Dispersion of Light
1837 Third and fourth on Dispersion of Light
1840 On the Theory of the Dispersion of Light, &c.
1842 On certain cases of Elliptic Polarization
1845 On Metallic Reflexion, &c.
1848 On Prismatic Interference

Reports to the British Association
1832 On Radiant Heat
1839 On Refractive Indices
1841 On Radiant Heat – second report
1848–9 On Luminous Meteors (continued to 1869)
1832 to 1849 Numerous papers on sectional proceedings
1854 On Radiant Heat – third report

In Memoirs of the Royal Astronomical Society
1845 On a Double Image Micrometer
1847 On Luminous Rings, &c.
1849 On Irradiation (in Royal Astronomical Society's Proceedings)
1847 On the Beads seen in Eclipses
1853 On Foucault's Experiments on Rotation of Earth, &c.
1858 On C. Piazzi Smyth's Artificial Horizon

In Ashmolean Society's Memoirs
1832 On the Achromatism of the Eye
On Refractive Indices – three papers
On Ratios and Proportion
1849 On the Laws of Motion
On the Theory of Parallels
On Necessary and Contingent Truth

Royal Institution abstracts of lectures
1848 On Shooting Stars
1849 On the Nebular Theory
1850 On Optical Phenomena in Astronomy
1851 On Foucault's Pendulum Experiment
1852 On Light and Heat
1854 On Rotatory Motion
1858 On Rotatory Motion Applied to Observations at Sea

1822 Translation of Raymond on Barometrical Measurement, with an appendix – Annals of Philosophy
1823–5 Various papers on Light and Heat – Ditto.
1825–6 Two papers on Heat – Quarterly Journal of Science
1828 Two papers on Polarization of Heat – Brewster's Philosophical Journal
1830 On Mathematical Studies – London Review
1832–3 Several papers on Interference of Light, Diffraction, &c. – Annals of Philosophy and Philosophical Magazine
1834 On Radiant Heat – Jameson's Philosophical Journal
1835–6 On Cauchy's Theory of Dispersion of Light, &c. – Journal of Science and Philosophical Magazine
Various papers in Vol. I of Magazine of Popular Science
Many papers in Journal of Education
On the Progress of Optics – British Annual
On the State of Oxford – Ditto.
The Lives of Black and Lavoisier – Useful Knowledge Gallery of Portraits
1838 On University Reform – July, Monthly Chronicle
1838–9 Various papers on Light – Journal of Science
1838–9 Papers on Light – Philosophical Magazine
1839 Correspondence with Brewster – Athenaeum
1839 On Comte's Philosophie Positive – Monthly Chronicle
1841 On Light – Philosophical Magazine
1841 Papers on Light – Journal of Science
1843 Review of Carpenter's Cyclopaedia – Dublin University Magazine
1843 Sir Isaac Newton and his Contemporaries – Edinburgh Review
1843 Review of Rigaud's History of the Principia – Ditto.
1846 On Aberration of Light – Journal of Science and Philosophical Magazine
1852 On Lord Brougham's Optical Experiments – Journal of Science
1854 On Foucault's Gyroscope – Journal of Science and Philosophical Magazine
1856 Life of Young – National Review and Philosophical Magazine
1856 On Brewster's Life of Newton – Edinburgh Review
1856 On Fresnel's Formulae for Light – July, August, and October, Journal of Science and Philosophical Magazine
1857 Life and Writings of Arago – Ditto.
Also: 1834 A Letter to the Editor of The British Critic

Notable students
Lewis Carroll attended Baden Powell's lectures on pure geometry.

Collections
In 1970, 170 volumes from Powell's library were presented to the Bodleian Libraries by his grandson, D. F. W. Baden Powell.

Further reading
Corsi, Pietro (1988). Science and Religion: Baden Powell and the Anglican Debate, 1800–1860. Cambridge University Press. 346 pages.

External links
Collection of obituary notices
source: Baden Powell (mathematician)
categories: [ "Biology" ]
token_count: 3,866
subcategories: [ "Non-Darwinian evolution", "Biology theories", "Proto-evolutionary biologists" ]
id: 1,033,664
url: https://en.wikipedia.org/wiki/Morava%20K-theory
In stable homotopy theory, a branch of mathematics, Morava K-theory is one of a collection of cohomology theories introduced in algebraic topology by Jack Morava in unpublished preprints in the early 1970s. For every prime number p (which is suppressed in the notation), it consists of theories K(n) for each nonnegative integer n, each a ring spectrum in the sense of homotopy theory. Johnson and Wilson (1975) published the first account of the theories.

Details
The theory K(0) agrees with singular homology with rational coefficients, whereas K(1) is a summand of mod-p complex K-theory. The theory K(n) has coefficient ring F_p[v_n, v_n^{-1}], where v_n has degree 2(p^n − 1). In particular, Morava K-theory is periodic with this period, in much the same way that complex K-theory has period 2.

These theories have several remarkable properties:
They have Künneth isomorphisms for arbitrary pairs of spaces: that is, for X and Y CW complexes, we have K(n)_*(X × Y) ≅ K(n)_*(X) ⊗_{K(n)_*} K(n)_*(Y).
They are "fields" in the category of ring spectra. In other words, every module spectrum over K(n) is free, i.e. a wedge of suspensions of K(n).
They are complex oriented (at least after being periodified by taking the wedge sum of (p^n − 1) shifted copies), and the formal group they define has height n.
Every finite p-local spectrum X has the property that K(n)_*(X) = 0 if and only if n is less than a certain number N, called the type of the spectrum X. By a theorem of Devinatz–Hopkins–Smith, every thick subcategory of the category of finite p-local spectra is the subcategory of type-n spectra for some n.

See also
Chromatic homotopy theory
Morava E-theory

References
Hovey, M. and Strickland, N., "Morava K-theories and localisation"
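To make the periodicity and "field" properties above concrete, here is a brief LaTeX restatement of the facts just listed (standard notation, with the prime p suppressed as in the article; the index set I and the shifts d_i are bookkeeping devices introduced here, not data from the source):

\[
K(n)_* = \mathbb{F}_p[v_n^{\pm 1}], \qquad \deg v_n = 2(p^n - 1),
\]
so multiplication by the invertible class \(v_n\) gives isomorphisms \(K(n)_m(X) \cong K(n)_{m + 2(p^n - 1)}(X)\) for every \(m\); and every \(K(n)\)-module spectrum \(M\) splits as a wedge of suspensions
\[
M \simeq \bigvee_{i \in I} \Sigma^{d_i} K(n),
\]
which is the precise sense in which \(K(n)\) behaves like a field among ring spectra.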
source: Morava K-theory
categories: [ "Mathematics" ]
token_count: 424
subcategories: [ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
id: 1,033,666
url: https://en.wikipedia.org/wiki/Complex%20cobordism
In mathematics, complex cobordism is a generalized cohomology theory related to cobordism of manifolds. Its spectrum is denoted by MU. It is an exceptionally powerful cohomology theory, but can be quite hard to compute, so often instead of using it directly one uses some slightly weaker theories derived from it, such as Brown–Peterson cohomology or Morava K-theory, that are easier to compute. The generalized homology and cohomology complex cobordism theories were introduced by Michael Atiyah (1961) using the Thom spectrum.

Spectrum of complex cobordism
The complex bordism MU_*(X) of a space X is roughly the group of bordism classes of manifolds over X with a complex linear structure on the stable normal bundle. Complex bordism is a generalized homology theory, corresponding to a spectrum MU that can be described explicitly in terms of Thom spaces as follows.

The space MU(n) is the Thom space of the universal n-plane bundle over the classifying space BU(n) of the unitary group U(n). The natural inclusion from U(n) into U(n+1) induces a map from the double suspension Σ²MU(n) to MU(n+1). Together these maps give the spectrum MU; namely, it is the homotopy colimit of the spaces MU(n).

Examples: MU(0) is the sphere spectrum, and the Thom space MU(1) can be identified with CP^∞, which therefore appears in MU as the double desuspension Σ^{−2}Σ^∞CP^∞.

The nilpotence theorem states that, for any ring spectrum R, the kernel of the Hurewicz map π_*(R) → MU_*(R) consists of nilpotent elements. The theorem implies in particular that, if S is the sphere spectrum, then for any n > 0, every element of π_n(S) is nilpotent (a theorem of Goro Nishida). (Proof: if α is in π_n(S) for n > 0, then α is a torsion element, since π_n(S) is finite by Serre's theorem; but its image in MU_*(S) ≅ L, the Lazard ring, cannot be a nonzero torsion element, since L is a polynomial ring over Z. Thus α must be in the kernel, hence nilpotent.)

Formal group laws
Milnor (1960) and Novikov (1960, 1962) showed that the coefficient ring π_*(MU) (equal to the complex cobordism of a point, or equivalently the ring of cobordism classes of stably complex manifolds) is a polynomial ring Z[x_1, x_2, ...] on infinitely many generators x_i of positive even degrees.

Write CP^∞ for infinite-dimensional complex projective space, which is the classifying space for complex line bundles, so that tensor product of line bundles induces a map μ: CP^∞ × CP^∞ → CP^∞. A complex orientation on an associative commutative ring spectrum E is an element x in the reduced cohomology Ẽ²(CP^∞) whose restriction to Ẽ²(CP¹) is 1, if the latter ring is identified with the coefficient ring of E. A spectrum E with such an element x is called a complex oriented ring spectrum.

If E is a complex oriented ring spectrum, then
E^*(CP^∞) = E^*(point)[[x]],
E^*(CP^∞ × CP^∞) = E^*(point)[[x ⊗ 1, 1 ⊗ x]],
and μ^*(x) ∈ E^*(CP^∞ × CP^∞) is a formal group law over the ring E^*(point).

Complex cobordism has a natural complex orientation. Quillen (1969) showed that there is a natural isomorphism from its coefficient ring to Lazard's universal ring, making the formal group law of complex cobordism into the universal formal group law. In other words, for any formal group law F over any commutative ring R, there is a unique ring homomorphism from MU^*(point) to R such that F is the pullback of the formal group law of complex cobordism.

Brown–Peterson cohomology
Complex cobordism over the rationals can be reduced to ordinary cohomology over the rationals, so the main interest is in the torsion of complex cobordism. It is often easier to study the torsion one prime at a time by localizing MU at a prime p; roughly speaking this means one kills off torsion prime to p. The localization MU_p of MU at a prime p splits as a sum of suspensions of a simpler cohomology theory called Brown–Peterson cohomology, first described by Edgar H. Brown and Franklin P. Peterson (1966). In practice one often does calculations with Brown–Peterson cohomology rather than with complex cobordism. Knowledge of the Brown–Peterson cohomologies of a space for all primes p is roughly equivalent to knowledge of its complex cobordism.
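Before moving on, it may help to spell out in symbols the universal property described in the formal group laws section above. This is a standard formulation rather than a quotation from any particular reference, and the name θ for the classifying homomorphism is chosen here purely for illustration. A formal group law over a commutative ring R is a power series F(x, y) ∈ R[[x, y]] satisfying

\[
F(x, 0) = x, \qquad F(x, y) = F(y, x), \qquad F(F(x, y), z) = F(x, F(y, z)),
\]

and Quillen's theorem says that sending a ring homomorphism \(\theta \colon MU^*(\mathrm{point}) \to R\) to the pullback \(\theta(F_{MU})\) of the formal group law \(F_{MU}\) of complex cobordism gives a bijection

\[
\operatorname{Hom}_{\mathrm{Ring}}\bigl(MU^*(\mathrm{point}),\, R\bigr) \;\xrightarrow{\;\cong\;}\; \{\text{formal group laws over } R\}.
\]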
Conner–Floyd classes
The ring MU^*(BU) is isomorphic to the formal power series ring MU^*(point)[[cf_1, cf_2, ...]], where the elements cf_i are called Conner–Floyd classes. They are the analogues of Chern classes for complex cobordism. They were introduced by Conner and Floyd (1966). Similarly, MU_*(BU) is isomorphic to the polynomial ring MU_*(point)[β_1, β_2, ...].

Cohomology operations
The Hopf algebra MU_*(MU) is isomorphic to the polynomial algebra R[b_1, b_2, ...], where R is the reduced bordism ring of a 0-sphere. The coproduct is given by
ψ(b_k) = Σ_{i+j=k} (b^{j+1})_{2i} ⊗ b_j,
where b denotes the total class Σ_{i≥0} b_i (with b_0 = 1) and the notation ( )_{2i} means taking the piece of degree 2i. This can be interpreted as follows. The map
x ↦ b(x) = x + b_1x² + b_2x³ + ⋯
is a continuous automorphism of the ring of formal power series in x, and the coproduct of MU_*(MU) gives the composition of two such automorphisms.

See also
Adams–Novikov spectral sequence
List of cohomology theories
Algebraic cobordism

External links
Complex bordism at the Manifold Atlas
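As a sanity check on the composition rule just described, here is an illustrative low-order computation (not taken from the source; the coefficients a_i and c_i stand for the generators of two copies of the polynomial algebra). Writing f(x) = x + a_1x² + a_2x³ + ⋯ and g(y) = y + c_1y² + c_2y³ + ⋯ and expanding g(f(x)):

\[
g(f(x)) = x + (a_1 + c_1)\,x^2 + (a_2 + 2a_1c_1 + c_2)\,x^3 + O(x^4),
\]

so the coefficients of the composite mix the two sets of generators in exactly the way the terms \((b^{j+1})_{2i} \otimes b_j\) of the coproduct prescribe.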
source: Complex cobordism
categories: [ "Mathematics" ]
token_count: 990
subcategories: [ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
id: 1,033,847
url: https://en.wikipedia.org/wiki/Transient%20receptor%20potential%20channel
Transient receptor potential channels (TRP channels) are a group of ion channels located mostly on the plasma membrane of numerous animal cell types. Most of these fall into two broad groups: Group 1 includes TRPC ("C" for canonical), TRPV ("V" for vanilloid), TRPVL ("VL" for vanilloid-like), TRPM ("M" for melastatin), TRPS ("S" for soromelastatin), TRPN ("N" for no mechanoreceptor potential C), and TRPA ("A" for ankyrin). Group 2 consists of TRPP ("P" for polycystic) and TRPML ("ML" for mucolipin). Other, less well categorized TRP channels exist, including yeast channels and a number of Group 1 and Group 2 channels present in non-animals. Many of these channels mediate a variety of sensations, such as pain, temperature, different kinds of taste, pressure, and vision. In the body, some TRP channels are thought to behave like microscopic thermometers and are used in animals to sense hot or cold. Some TRP channels are activated by molecules found in spices like garlic (allicin), chili pepper (capsaicin), and wasabi (allyl isothiocyanate); others are activated by menthol, camphor, peppermint, and cooling agents; yet others are activated by molecules found in cannabis (i.e., THC, CBD and CBN) or stevia. Some act as sensors of osmotic pressure, volume, stretch, and vibration. Most of the channels are activated or inhibited by signaling lipids and belong to the family of lipid-gated ion channels. These ion channels have a relatively non-selective permeability to cations, including sodium, calcium and magnesium. TRP channels were initially discovered in the so-called "transient receptor potential" mutant (trp-mutant) strain of the fruit fly Drosophila, hence their name (see TRP-like channels in insect vision below). Later, TRP channels were found in vertebrates, where they are ubiquitously expressed in many cell types and tissues. Most TRP channels are composed of 6 membrane-spanning helices with intracellular N- and C-termini. Mammalian TRP channels are activated and regulated by a wide variety of stimuli and are expressed throughout the body.

Families
In the animal TRP superfamily there are currently 9 proposed families split into two groups, each family containing a number of subfamilies. Group one consists of TRPC, TRPV, TRPVL, TRPA, TRPM, TRPS, and TRPN, while group two contains TRPP and TRPML. There is an additional family, labeled TRPY, that is not always included in either of these groups. All of these sub-families are similar in that they are molecular-sensing, non-selective cation channels with six transmembrane segments; however, each sub-family is unique and shares little structural homology with the others. This uniqueness gives rise to the various sensory perception and regulation functions that TRP channels have throughout the body. Group one and group two differ in that both TRPP and TRPML of group two have a much longer extracellular loop between the S1 and S2 transmembrane segments. Another differentiating characteristic is that all the group one sub-families contain either an N-terminal intracellular ankyrin repeat sequence, a C-terminal TRP domain sequence, or both, whereas the group two sub-families have neither. Below are members of the sub-families and a brief description of each:

TRPA
TRPA, A for "ankyrin", is named for the large number of ankyrin repeats found near the N-terminus. TRPA is primarily found in afferent nociceptive nerve fibers and is associated with the amplification of pain signaling as well as cold pain hypersensitivity.
These channels have been shown to be both mechanical receptors for pain and chemosensors activated by various chemical species, including isothiocyanates (pungent chemicals in substances such as mustard oil and wasabi), cannabinoids, general and local analgesics, and cinnamaldehyde. While TRPA1 is expressed in a wide variety of animals, a variety of other TRPA channels exist outside of vertebrates. TRPA5, painless, pyrexia, and waterwitch are distinct phylogenetic branches within the TRPA clade, and are only evidenced to be expressed in crustaceans and insects, while HsTRPA arose as a Hymenoptera-specific duplication of waterwitch. Like TRPA1 and other TRP channels, these function as ion channels in a number of sensory systems. TRPA- or TRPA1-like channels also exist in a variety of species as a phylogenetically distinct clade, but these are less well understood.

TRPC
TRPC, C for "canonical", is named for being the most closely related to Drosophila TRP, the namesake of TRP channels. The phylogeny of TRPC channels has not been resolved in detail, but they are present across animal taxa. Only six TRPC channels are expressed in humans, because TRPC2 is expressed solely in mice and is considered a pseudogene in humans; this is partly due to the role of TRPC2 in detecting pheromones, an ability that is much more developed in mice than in humans. Mutations in TRPC channels have been associated with respiratory diseases, along with focal segmental glomerulosclerosis in the kidneys. All TRPC channels are activated either by phospholipase C (PLC) or diacylglycerol (DAG).

TRPML
TRPML, ML for "mucolipin", gets its name from the neurodevelopmental disorder mucolipidosis IV. Mucolipidosis IV was first discovered in 1974 by E. R. Berman, who noticed abnormalities in the eyes of an infant. These abnormalities soon became associated with mutations to the MCOLN1 gene, which encodes the TRPML1 ion channel. TRPML is still not highly characterized. The three known vertebrate copies are restricted to jawed vertebrates, with some exceptions (e.g. Xenopus tropicalis).

TRPM
TRPM, M for "melastatin", was found during a comparative genetic analysis between benign nevi and malignant nevi (melanoma). Mutations within TRPM channels have been associated with hypomagnesemia with secondary hypocalcemia. TRPM channels have also become known for their cold-sensing mechanisms, as is the case with TRPM8. Comparative studies have shown that the functional domains and critical amino acids of TRPM channels are highly conserved across species. Phylogenetics has shown that TRPM channels split into two major clades, αTRPM and βTRPM. αTRPMs include vertebrate TRPM1, TRPM3, and the "chanzymes" TRPM6 and TRPM7, as well as the only insect TRPM channel, among others. βTRPMs include, but are not limited to, vertebrate TRPM2, TRPM4, TRPM5, and TRPM8 (the cold and menthol sensor). Two additional major clades have been described: TRPMc, which is present only in a variety of arthropods, and a basal clade, which has since been proposed to be a distinct and separate TRP channel family (TRPS).

TRPN
TRPN was originally described in Drosophila melanogaster and Caenorhabditis elegans as nompC, a mechanically gated ion channel. Only a single TRPN, N for "no mechanoreceptor potential C", or "nompC", is known to be broadly expressed in animals (although some cnidarians have more), and it is notably only a pseudogene in amniote vertebrates.
Despite TRPA being named for ankyrin repeats, TRPN channels are thought to have the most of any TRP channel, typically around 28, and these are highly conserved across taxa. Since its discovery, Drosophila nompC has been implicated in mechanosensation (including mechanical stimulation of the cuticle and sound detection) and cold nociception.

TRPP
TRPP, P for "polycystin", is named for polycystic kidney disease, which is associated with these channels. These channels are also referred to as PKD (polycystic kidney disease) ion channels. PKD2-like genes (examples include TRPP2, TRPP3, and TRPP5) encode canonical TRP channels. PKD1-like genes encode much larger proteins with 11 transmembrane segments, which do not have all the features of other TRP channels. However, 6 of the transmembrane segments of PKD1-like proteins have substantial sequence homology with TRP channels, indicating that they may simply have diversified greatly from other closely related proteins. Insects have a third sub-family of TRPP, called brividos, which participate in cold sensing.

TRPS
TRPS, S for "soromelastatin", was so named because it forms a sister group to TRPM. TRPS is broadly present in animals, but notably absent in vertebrates and insects (among others). TRPS has not yet been well described functionally, though it is known that the C. elegans TRPS, known as CED-11, is a calcium channel that participates in apoptosis.

TRPV
TRPV, V for "vanilloid", was originally discovered in Caenorhabditis elegans, and is named for the vanilloid chemicals that activate some of these channels. These channels have been made famous by their association with molecules such as capsaicin (a TRPV1 agonist). In addition to the 6 known vertebrate paralogues, 2 major clades are known outside of the deuterostomes: nanchung and Iav. Mechanistic studies of these latter clades have been largely restricted to Drosophila, but phylogenetic analyses have placed a number of other genes from Placozoa, Annelida, Cnidaria, Mollusca, and other arthropods within them. TRPV channels have also been described in protists.

TRPVL
TRPVL has been proposed to be a sister clade to TRPV, and is limited to the cnidarians Nematostella vectensis and Hydra magnipapillata, and the annelid Capitella teleta. Little is known concerning these channels.

TRPY
TRPY, Y for "yeast", is highly localized to the yeast vacuole, which is the functional equivalent of a lysosome in a mammalian cell, and acts as a mechanosensor for vacuolar osmotic pressure. Patch clamp techniques and hyperosmotic stimulation have illustrated that TRPY plays a role in intracellular calcium release. Phylogenetic analysis has shown that TRPY1 does not group with the other metazoan TRP groups one and two, and is suggested to have evolved after the divergence of metazoans and fungi. Others have indicated that TRPY channels are more closely related to TRPP.

Structure
TRP channels are composed of 6 membrane-spanning helices (S1–S6) with intracellular N- and C-termini. Mammalian TRP channels are activated and regulated by a wide variety of stimuli, including many post-translational mechanisms like phosphorylation, G-protein receptor coupling, ligand gating, and ubiquitination. The receptors are found in almost all cell types and are largely localized in cell and organelle membranes, modulating ion entry. Most TRP channels form homo- or heterotetramers when completely functional.
The ion selectivity filter, or pore, is formed by the complex combination of p-loops in the tetrameric protein, which are situated in the extracellular domain between the S5 and S6 transmembrane segments. As with most cation channels, TRP channels have negatively charged residues within the pore to attract the positively charged ions. Group 1 Characteristics Each channel in this group is structurally unique, which adds to the diversity of functions that TRP channels possess; however, there are some commonalities that distinguish this group from others. Starting from the intracellular N-terminus, there are varying lengths of ankyrin repeats (except in TRPM) that aid with membrane anchoring and other protein interactions. Shortly following S6 on the C-terminal end, there is a highly conserved TRP domain (except in TRPA) which is involved with gating modulation and channel multimerization. Other C-terminal modifications, such as the alpha-kinase domains in TRPM6 and TRPM7, have also been seen in this group. Group 2 Characteristics Group two's most distinguishing trait is the long extracellular span between the S1 and S2 transmembrane segments. Members of group two also lack ankyrin repeats and a TRP domain. They have been shown, however, to have endoplasmic reticulum (ER) retention sequences toward the C-terminal end, illustrating possible interactions with the ER. Function TRP channels modulate ion entry driving forces and Ca2+ and Mg2+ transport machinery in the plasma membrane, where most of them are located. TRPs have important interactions with other proteins and often form signaling complexes, the exact pathways of which are unknown. TRP channels were initially discovered in the trp mutant strain of the fruit fly Drosophila, which displayed a transient elevation of potential in response to light stimuli, and were so named transient receptor potential channels. TRPML channels function as intracellular calcium release channels and thus serve an important role in organelle regulation. Importantly, many of these channels mediate a variety of sensations, such as pain, temperature, different kinds of taste, pressure, and vision. In the body, some TRP channels are thought to behave like microscopic thermometers and are used in animals to sense hot or cold. TRPs act as sensors of osmotic pressure, volume, stretch, and vibration. TRPs have been seen to have complex multidimensional roles in sensory signaling. Many TRPs function as intracellular calcium release channels. Pain and temperature sensation TRP ion channels convert energy into action potentials in somatosensory nociceptors. Thermo-TRP channels have a C-terminal domain that is responsible for thermosensation and a specific interchangeable region that allows them to sense temperature stimuli, a function tied to ligand regulatory processes. Although most TRP channels are modulated by changes in temperature, some have a crucial role in temperature sensation. There are at least 6 different Thermo-TRP channels and each plays a different role. For instance, TRPM8 relates to mechanisms of sensing cold, TRPV1 and TRPM3 contribute to heat and inflammation sensations, and TRPA1 facilitates many signaling pathways like sensory transduction, nociception, inflammation and oxidative stress. Taste TRPM5 is involved in taste signaling of sweet, bitter and umami tastes by modulating the signal pathway in type II taste receptor cells. TRPM5 is activated by the sweet glycosides found in the stevia plant. 
Several other TRP channels play a significant role in chemosensation through sensory nerve endings in the mouth that are independent from taste buds. TRPA1 responds to mustard oil (allyl isothiocyanate), wasabi, and cinnamon; TRPA1 and TRPV1 respond to garlic (allicin); TRPV1 responds to chilli pepper (capsaicin); TRPM8 is activated by menthol, camphor, peppermint, and cooling agents; and TRPV2 is activated by molecules (THC, CBD and CBN) found in marijuana. TRP-like channels in insect vision The trp-mutant fruit flies, which lack a functional copy of the trp gene, are characterized by a transient response to light, unlike wild-type flies that demonstrate sustained photoreceptor cell activity in response to light. A distantly related isoform of TRP channel, the TRP-like channel (TRPL), was later identified in Drosophila photoreceptors, where it is expressed at approximately 10- to 20-fold lower levels than TRP protein. A mutant fly, trpl, was subsequently isolated. Apart from structural differences, the TRP and TRPL channels differ in cation permeability and pharmacological properties. TRP/TRPL channels are solely responsible for depolarization of the insect photoreceptor plasma membrane in response to light. When these channels open, they allow sodium and calcium to enter the cell down the concentration gradient, which depolarizes the membrane. Variations in light intensity affect the total number of open TRP/TRPL channels, and, therefore, the degree of membrane depolarization. These graded voltage responses propagate to photoreceptor synapses with second-order retinal neurons and further to the brain. Notably, the mechanism of insect photoreception is dramatically different from that in mammals: excitation of rhodopsin in mammalian photoreceptors leads to hyperpolarization of the receptor membrane, not to depolarization as in the insect eye. In Drosophila and, it is presumed, other insects, a phospholipase C (PLC)-mediated signaling cascade links photoexcitation of rhodopsin to the opening of the TRP/TRPL channels. Although numerous activators of these channels, such as phosphatidylinositol-4,5-bisphosphate (PIP2) and polyunsaturated fatty acids (PUFAs), were known for years, a key factor mediating chemical coupling between PLC and TRP/TRPL channels remained a mystery until recently. It was found that breakdown of a lipid product of the PLC cascade, diacylglycerol (DAG), by the enzyme diacylglycerol lipase generates PUFAs that can activate TRP channels, thus initiating membrane depolarization in response to light. This mechanism of TRP channel activation may be well-preserved among other cell types where these channels perform various functions. Clinical significance Mutations in TRPs have been linked to neurodegenerative disorders, skeletal dysplasia, and kidney disorders, and may play an important role in cancer. TRPs are therefore potentially important therapeutic targets. The roles of TRPV1, TRPV2, TRPV3 and TRPM8 as thermoreceptors, and of TRPV4 and TRPA1 as mechanoreceptors, are of considerable clinical significance; reduction of chronic pain may be possible by targeting the ion channels involved in thermal, chemical, and mechanical sensation to reduce their sensitivity to stimuli. For instance, the use of TRPV1 agonists would potentially inhibit nociception at TRPV1, particularly in pancreatic tissue where TRPV1 is highly expressed. The TRPV1 agonist capsaicin, found in chili peppers, has been indicated to relieve neuropathic pain. 
Role in cancer Altered expression of TRP proteins often leads to tumorigenesis, as reported for TRPV1, TRPV6, TRPC1, TRPC6, TRPM4, TRPM5, and TRPM8. TRPV1 and TRPV2 have been implicated in breast cancer. TRPV1 expression in aggregates found at the endoplasmic reticulum or Golgi apparatus, and/or surrounding these structures, confers worse survival in breast cancer patients. The TRPM family of ion channels is particularly associated with prostate cancer, where TRPM2 (and its long noncoding RNA TRPM2-AS), TRPM4, and TRPM8 are overexpressed and associated with more aggressive outcomes. TRPM3 has been shown to promote growth and autophagy in clear cell renal cell carcinoma, TRPM4 is overexpressed in diffuse large B-cell lymphoma and associated with poorer survival, while TRPM5 has oncogenic properties in melanoma. TRP channels also play a central role in modulating chemotherapy resistance in breast cancer. Some TRP channels, such as TRPA1 and TRPC5, are tightly associated with drug resistance during cancer treatment; TRPC5-mediated high Ca2+ influx activates the transcription factor NFATC3 (Nuclear Factor of Activated T Cells, Cytoplasmic 3), which triggers p-glycoprotein (p-gp) transcription. The overexpression of p-gp is widely recognized as a major factor in chemoresistance in cancer cells, as it functions as an active efflux pump that can remove various foreign substances, including chemotherapeutic agents, from within the cell. Conversely, other TRP channels, such as TRPV1 and TRPV2, have been demonstrated to potentiate the anti-tumorigenic effects of certain chemotherapeutic agents, and TRPV2 is a potential biomarker and therapeutic target in triple negative breast cancer. Role in inflammatory responses In addition to TLR4-mediated pathways, certain members of the transient receptor potential ion channel family recognize lipopolysaccharide (LPS). LPS-mediated activation of TRPA1 was shown in mice and Drosophila melanogaster flies. At higher concentrations, LPS activates other members of the sensory TRP channel family as well, such as TRPV1, TRPM3 and, to some extent, TRPM8. LPS is recognized by TRPV4 on epithelial cells. TRPV4 activation by LPS was necessary and sufficient to induce nitric oxide production with a bactericidal effect. History of Drosophila TRP channels The original TRP-mutant in Drosophila was first described by Cosens and Manning in 1969 as "a mutant strain of D. melanogaster which, though behaving phototactically positive in a T-maze under low ambient light, is visually impaired and behaves as though blind". It also showed an abnormal electroretinogram response of photoreceptors to light, which was transient rather than sustained as in the "wild type". It was investigated subsequently by Baruch Minke, a post-doc in the group of William Pak, and named TRP according to its behavior in the ERG. The identity of the mutated protein was unknown until it was cloned in 1989 by Craig Montell, a post-doctoral researcher in Gerald Rubin's research group, who noted its predicted structural relationship to channels known at the time; Roger Hardie and Baruch Minke provided evidence in 1992 that it is an ion channel that opens in response to light stimulation. The TRPL channel was cloned and characterized in 1992 by the research group of Leonard Kelly. In 2013, Montell and his research group found that the TRPL (TRP-like) cation channel was a direct target for tastants in gustatory receptor neurons and could be reversibly down-regulated. 
See also Endocannabinoid system Transient receptor potential channel-interacting protein database (2010) References External links Membrane biology Ion channels Voltage-gated ion channels
Transient receptor potential channel
[ "Chemistry" ]
4,994
[ "Neurochemistry", "Membrane biology", "Ion channels", "Molecular biology" ]
1,033,865
https://en.wikipedia.org/wiki/Reduction%20%28mathematics%29
In mathematics, reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator a whole number) is called "reducing a fraction". Rewriting a radical (or "root") expression with the smallest possible whole number under the radical symbol is called "reducing a radical". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals. Algebra In linear algebra, reduction refers to applying simple rules to a series of equations or matrices to change them into a simpler form. In the case of matrices, the process involves manipulating either the rows or the columns of the matrix and so is usually referred to as row-reduction or column-reduction, respectively. Often the aim of reduction is to transform a matrix into its "row-reduced echelon form" or "row-echelon form"; this is the goal of Gaussian elimination. Calculus In calculus, reduction refers to using the technique of integration by parts to evaluate integrals by reducing them to simpler forms. Static (Guyan) reduction In dynamic analysis, static reduction refers to reducing the number of degrees of freedom. Static reduction can also be used in finite element analysis to refer to simplification of a linear algebraic problem. Since a static reduction requires several inversion steps, it is an expensive matrix operation and is prone to some error in the solution. Consider the following system of linear equations in an FEA problem: Kx = F, where K and F are known and K, x and F are divided into submatrices: [K11 K12; K21 K22][x1; x2] = [F1; F2]. If F2 contains only zeros, and only x1 is desired, K can be reduced to yield the system K11,reduced x1 = F1. The reduced matrix is obtained by writing out the set of equations: K11 x1 + K12 x2 = F1 (1) and K21 x1 + K22 x2 = 0 (2). Equation (2) can be solved for x2 (assuming invertibility of K22): x2 = −K22^(−1) K21 x1. Substituting into (1) gives K11 x1 − K12 K22^(−1) K21 x1 = F1. Thus K11,reduced = K11 − K12 K22^(−1) K21. In a similar fashion, any row or column i of F with a zero value may be eliminated if the corresponding value of xi is not desired. A reduced K may be reduced again. As a note, since each reduction requires an inversion, and each inversion is an operation with computational cost O(n^3), most large matrices are pre-processed to reduce calculation time. History In the 9th century, Persian mathematician Al-Khwarizmi's Al-Jabr introduced the fundamental concepts of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation and the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as al-jabr. The name "algebra" comes from the "al-jabr" in the title of his book. References Mathematical terminology Linear algebra Calculus Iranian inventions
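The block elimination above translates directly into a few lines of linear algebra. The following is a minimal NumPy sketch, not from the source: the function name guyan_reduce, the toy matrices, and the choice of retained degrees of freedom are all illustrative, and a linear solve is used in place of an explicit K22 inverse for numerical robustness.

import numpy as np

def guyan_reduce(K, F, keep):
    """Condense out every DOF not listed in `keep`, assuming the
    loads on the condensed DOFs are zero (F2 = 0 above)."""
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(K.shape[0]), keep)
    K11 = K[np.ix_(keep, keep)]
    K12 = K[np.ix_(keep, drop)]
    K21 = K[np.ix_(drop, keep)]
    K22 = K[np.ix_(drop, drop)]
    # K11,reduced = K11 - K12 K22^(-1) K21; solve() avoids an explicit inverse
    K_red = K11 - K12 @ np.linalg.solve(K22, K21)
    return K_red, F[keep]

# toy 3-DOF stiffness system; DOF 2 is unloaded and condensed away
K = np.array([[4.0, -2.0, 0.0],
              [-2.0, 4.0, -2.0],
              [0.0, -2.0, 4.0]])
F = np.array([1.0, 0.5, 0.0])
K_red, F_red = guyan_reduce(K, F, keep=[0, 1])
x1 = np.linalg.solve(K_red, F_red)  # equals the first two entries of solve(K, F)

Because static condensation is exact for a linear static problem with zero loads on the condensed degrees of freedom, x1 here reproduces the corresponding entries of the full solution.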
Reduction (mathematics)
[ "Mathematics" ]
589
[ "Linear algebra", "nan", "Algebra", "Calculus" ]
1,033,877
https://en.wikipedia.org/wiki/Dixon%27s%20factorization%20method
In number theory, Dixon's factorization method (also Dixon's random squares method or Dixon's algorithm) is a general-purpose integer factorization algorithm; it is the prototypical factor base method. Unlike for other factor base methods, its run-time bound comes with a rigorous proof that does not rely on conjectures about the smoothness properties of the values taken by a polynomial. The algorithm was designed by John D. Dixon, a mathematician at Carleton University, and was published in 1981. Basic idea Dixon's method is based on finding a congruence of squares modulo the integer N that one intends to factor. Fermat's factorization method finds such a congruence by selecting random or pseudo-random x values and hoping that the integer x^2 mod N is a perfect square (in the integers). For example, if N = 84923, then by starting at 292 (the first number greater than √N) and counting up, one finds that 505^2 mod 84923 is 256, the square of 16. So 505^2 ≡ 16^2 (mod 84923). Computing the greatest common divisor of 505 − 16 and N using Euclid's algorithm gives 163, which is a factor of N. In practice, selecting random x values will take an impractically long time to find a congruence of squares, since there are only about √N perfect squares less than N. Dixon's method replaces the condition "is the square of an integer" with the much weaker one "has only small prime factors"; for example, there are 292 squares smaller than 84923; 662 numbers smaller than 84923 whose prime factors are only 2, 3, 5 or 7; and 4767 whose prime factors are all less than 30. (Such numbers are called B-smooth with respect to some bound B.) If there are many numbers x whose squares modulo N can be factorized over a fixed set of small primes, linear algebra modulo 2 on the matrix of exponent vectors will give a subset of the x whose squares combine to a product of small primes to an even power — that is, a subset of the x whose squares multiply to the square of a (hopefully different) number mod N. Method Suppose the composite number N is being factored. A bound B is chosen, and the factor base is identified (which is called P), the set of all primes less than or equal to B. Next, positive integers z are sought such that z^2 mod N is B-smooth. Therefore we can write, for suitable exponents ai, z^2 mod N = p1^a1 · p2^a2 · ... · pk^ak, where p1, ..., pk are the primes of P. When enough of these relations have been generated (it is generally sufficient that the number of relations be a few more than the size of P), the methods of linear algebra, such as Gaussian elimination, can be used to multiply together these various relations in such a way that the exponents of the primes on the right-hand side are all even: z1^2 · z2^2 · ... · zm^2 ≡ p1^(2b1) · p2^(2b2) · ... · pk^(2bk) (mod N). This yields a congruence of squares of the form a^2 ≡ b^2 (mod N), which can be turned into a factorization of N: N = gcd(a + b, N) × gcd(a − b, N). This factorization might turn out to be trivial (i.e. N = N × 1), which can only happen if a ≡ ±b (mod N), in which case another try must be made with a different combination of relations; but if a nontrivial pair of factors of N is reached, the algorithm terminates. Pseudocode
input: positive integer N
output: non-trivial factor of N

Choose bound B
Let P := {p1, p2, ..., pk} be all primes ≤ B

repeat
    for i = 1 to k + 1 do
        Choose 0 < zi < N such that zi^2 mod N is B-smooth
        Let ai = (ai1, ai2, ..., aik) be such that zi^2 mod N = Π_j pj^aij
    end for
    Find non-empty T ⊆ {1, ..., k + 1} such that Σ_{i∈T} ai ≡ 0 (mod 2)
    Let x := Π_{i∈T} zi mod N
    Let y := Π_j pj^((Σ_{i∈T} aij)/2) mod N
while x ≡ ±y (mod N)

return gcd(x + y, N)
Example This example will try to factor N = 84923 using bound B = 7. The factor base is then P = {2, 3, 5, 7}. A search can be made for integers between √N and N whose squares mod N are B-smooth. Suppose that two of the numbers found are 513 and 537: 513^2 mod 84923 = 8400 = 2^4 · 3 · 5^2 · 7 and 537^2 mod 84923 = 33600 = 2^6 · 3 · 5^2 · 7. So (513 · 537)^2 ≡ 2^10 · 3^2 · 5^4 · 7^2 ≡ (2^5 · 3 · 5^2 · 7)^2 (mod 84923). Then, since 513 · 537 mod 84923 = 20712 and 2^5 · 3 · 5^2 · 7 = 16800, we have 20712^2 ≡ 16800^2 (mod 84923). The resulting factorization is 84923 = gcd(20712 − 16800, 84923) × gcd(20712 + 16800, 84923) = 163 × 521. Optimizations The quadratic sieve is an optimization of Dixon's method. 
It selects values of x close to the square root of N such that x^2 modulo N is small, thereby greatly increasing the chance of obtaining a smooth number. Other ways to optimize Dixon's method include using a better algorithm to solve the matrix equation, taking advantage of the sparsity of the matrix: a number z^2 mod N cannot have more than log2 N prime factors, so each row of the matrix is almost all zeros. In practice, the block Lanczos algorithm is often used. Also, the size of the factor base must be chosen carefully: if it is too small, it will be difficult to find numbers that factorize completely over it, and if it is too large, more relations will have to be collected. A more sophisticated analysis, using the approximation that a number has all its prime factors less than N^(1/a) with probability about a^(−a) (an approximation to the Dickman–de Bruijn function), indicates that choosing too small a factor base is much worse than too large, and that the ideal factor base size is some power of exp(√(log N log log N)). The optimal complexity of Dixon's method is O(exp(2√2 √(log N log log N))) in big-O notation, or L_N[1/2, 2√2] in L-notation. References Integer factorization algorithms Squares in number theory
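To make the steps concrete, here is a minimal, unoptimized Python sketch of the method as pseudocoded above, run on the article's own example (N = 84923, B = 7). The brute-force subset search stands in for the Gaussian elimination (or block Lanczos) step and is only workable for a tiny factor base; the function names are illustrative.

from math import gcd, isqrt
from itertools import combinations

def smooth_exponents(n, base):
    """Return the exponent vector of n over `base`, or None if not smooth."""
    if n <= 0:
        return None
    exps = []
    for p in base:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
    return exps if n == 1 else None

def dixon(N, base=(2, 3, 5, 7)):
    # collect a few more B-smooth relations than the size of the factor base
    relations = []
    for z in range(isqrt(N) + 1, N):
        exps = smooth_exponents(z * z % N, base)
        if exps is not None:
            relations.append((z, exps))
            if len(relations) > len(base) + 2:
                break
    # brute-force search for a subset whose exponent sums are all even
    # (a real implementation uses linear algebra over GF(2) instead)
    for k in range(2, len(relations) + 1):
        for subset in combinations(relations, k):
            total = [sum(col) for col in zip(*(e for _, e in subset))]
            if any(t % 2 for t in total):
                continue
            x = 1
            for z, _ in subset:
                x = x * z % N
            y = 1
            for p, t in zip(base, total):
                y = y * pow(p, t // 2, N) % N
            f = gcd(x + y, N)
            if 1 < f < N:   # skip trivial combinations where x ≡ ±y (mod N)
                return f
    return None

print(dixon(84923))   # -> 521 or 163, since 84923 = 163 * 521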
Dixon's factorization method
[ "Mathematics" ]
1,065
[ "Squares in number theory", "Number theory" ]
1,034,009
https://en.wikipedia.org/wiki/Solanine
Solanine is a glycoalkaloid poison found in species of the nightshade family within the genus Solanum, such as the potato (Solanum tuberosum). It can occur naturally in any part of the plant, including the leaves, fruit, and tubers. Solanine has pesticidal properties, and it is one of the plant's natural defenses. Solanine was first isolated in 1820 from the berries of the European black nightshade (Solanum nigrum), after which it was named. It belongs to the chemical family of saponins. Solanine poisoning Symptoms Solanine poisoning primarily manifests as gastrointestinal and neurological disorders. Symptoms include nausea, diarrhea, vomiting, stomach cramps, burning of the throat, cardiac dysrhythmia, nightmares, headache, dizziness, itching, eczema, thyroid problems, and inflammation and pain in the joints. In more severe cases, hallucinations, loss of sensation, paralysis, fever, jaundice, dilated pupils, hypothermia, and death have been reported. Ingestion of solanine in moderate amounts can cause death. One study suggests that doses of 2 to 5 mg/kg of body weight can cause toxic symptoms, and doses of 3 to 6 mg/kg of body weight can be fatal. Symptoms usually occur 8 to 12 hours after ingestion, but may occur as rapidly as 10 minutes after eating high-solanine foods. Correlation with birth defects Some studies show a correlation between the consumption of potatoes suffering from late blight (which increases solanine and other glycoalkaloid levels) and the incidence of spina bifida in humans. However, other studies have shown no correlation between potato consumption and the incidence of birth defects. Livestock poisoning Livestock can also be susceptible to glycoalkaloids. High concentrations of solanine are necessary to cause death in mammals. The gastrointestinal tract cannot efficiently absorb solanine, which limits its systemic toxicity in mammals. Livestock can also hydrolyze solanine and excrete the breakdown products, diminishing its presence in the body. Mechanism of action There are several proposed mechanisms of how solanine causes toxicity in humans, but the true mechanism of action is not well understood. Solanum glycoalkaloids have been shown to inhibit cholinesterase, disrupt cell membranes, and cause birth defects. One study suggests that the toxic mechanism of solanine is caused by the chemical's interaction with mitochondrial membranes. Experiments show that solanine exposure opens the potassium channels of mitochondria, increasing their membrane potential. This, in turn, leads to Ca2+ being transported from the mitochondria into the cytoplasm, and this increased concentration of Ca2+ in the cytoplasm triggers cell damage and apoptosis. Potato, tomato, and eggplant glycoalkaloids like solanine have also been shown to affect active transport of sodium across cell membranes. This cell membrane disruption is likely the cause of many of the symptoms of solanine toxicity, including burning sensations in the mouth, nausea, vomiting, abdominal cramps, diarrhea, internal hemorrhaging, and stomach lesions. Biosynthesis Solanine is a glycoalkaloid poison created by various plants in the genus Solanum, such as the potato plant. When the plant's stem, tubers, or leaves are exposed to sunlight, it stimulates the biosynthesis of solanine and other glycoalkaloids as a defense mechanism so it is not eaten. It is therefore considered to be a natural pesticide. 
Though the structures of the intermediates in this biosynthetic pathway are known, many of the specific enzymes involved in these chemical processes are not. However, it is known that in the biosynthesis of solanine, cholesterol is first converted into the steroidal alkaloid solanidine. This is accomplished through a series of hydroxylation, transamination, oxidation, cyclization, dehydration, and reduction reactions. Specifically, solanidine formation involves sequential hydroxylation, transamination, and cyclization reactions. The solanidine is then converted into solanine through a series of glycosylation reactions catalyzed by specific glycosyltransferases. Plants like the potato and tomato constantly synthesize low levels of glycoalkaloids like solanine. However, under stress, such as the presence of a pest or herbivore, they increase the synthesis of compounds like solanine as a natural chemical defense. This rapid increase in glycoalkaloid concentration gives the potatoes a bitter taste, and stressful stimuli like light also stimulate photosynthesis and the accumulation of chlorophyll. As a result, the potatoes turn green, and are thus unattractive to pests. Other stressors that can stimulate increased solanine biosynthesis include mechanical damage, improper storage conditions, improper food processing, and sprouting. The largest concentration of solanine in response to stress is on the surface in the peel, making it an even better defense mechanism against pests trying to consume it. Safety Suggested limits on consumption of solanine Toxicity typically occurs when people ingest potatoes containing high levels of solanine. The average consumption of potatoes in the U.S. is estimated to be about 167 g of potatoes per day per person. There is variation in glycoalkaloid levels in different types of potatoes, but potato farmers aim to keep solanine levels below 0.2 mg/g. Signs of solanine poisoning have been linked to eating potatoes with solanine concentrations of between 0.1 and 0.4 mg per gram of potato. The average potato has 0.075 mg solanine/g potato, which equates to a dose of about 0.18 mg/kg of body weight based on average daily potato consumption. Calculations have shown that 2 to 5 mg/kg of body weight is the likely toxic dose of glycoalkaloids like solanine in humans, with 3 to 6 mg/kg constituting the fatal dose. Other studies have shown that symptoms of toxicity were observed with consumption of even 1 mg/kg. Storage of potatoes Various storage conditions can have an impact on the level of solanine in potatoes. Glycoalkaloid levels increase when potatoes are exposed to light because light increases synthesis of glycoalkaloids like solanine. Potatoes stored in a dark place avoid increased solanine synthesis. Potatoes that have turned green due to increased chlorophyll and photosynthesis are indicative of increased light exposure and are therefore associated with high levels of solanine. Synthesis of solanine is also stimulated by mechanical injury because glycoalkaloids are synthesized at cut surfaces of potatoes. Storage of potatoes for extended periods of time has also been associated with increased solanine content. One study found that solanine levels in Kufri Jyoti and Kufri Giriraj potatoes increase by 0.232 mg/g and 0.252 mg/g respectively after poor storage in a heap. Effects of cooking on solanine levels Most home processing methods like boiling, cooking, and frying potatoes have been shown to have minimal effects on solanine levels. 
For example, boiling potatoes reduces the α-chaconine and α-solanine levels by only 3.5% and 1.2% respectively, while microwaving potatoes reduces the alkaloid content by 15%. Deep frying at 150 °C also does not result in any measurable change. Alkaloids like solanine have been shown to start decomposing and degrading at approximately 170 °C, and deep-frying potatoes at 210 °C for 10 minutes causes a loss of roughly 40% of the solanine. Freeze-drying and dehydrating potatoes has a very minimal effect on solanine content. The majority (30–80%) of the solanine in potatoes is found in the outer layer of the potato. Therefore, peeling potatoes before cooking them reduces the glycoalkaloid intake from potato consumption. Fried potato peels have been shown to contain 1.4–1.5 mg solanine/g, which is seven times the recommended upper safety limit of 0.2 mg/g. Chewing a small piece of the raw potato peel before cooking can help determine the level of solanine contained in the potato; bitterness indicates high glycoalkaloid content. If the potato has more than 0.2 mg/g of solanine, an immediate burning sensation will develop in the mouth. Recorded human poisonings Though fatalities from solanine poisoning are rare, there have been several notable cases of human solanine poisoning. Between 1865 and 1983, there were around 2000 documented human cases of solanine poisoning, with most victims recovering fully and 30 deaths. Because the symptoms are similar to those of food poisoning, it is possible that many cases of solanine toxicity go undiagnosed. In 1899, 56 German soldiers fell ill due to solanine poisoning after consuming cooked potatoes containing 0.24 mg of solanine per gram of potato. There were no fatalities, but a few soldiers were left partially paralyzed and jaundiced. In 1918, there were 41 cases of solanine poisoning in people who had eaten a bad crop of potatoes with 0.43 mg solanine/g potato, with no recorded fatalities. In Scotland in 1918, there were 61 cases of solanine poisoning after consumption of potatoes containing 0.41 mg of solanine per gram of potato, resulting in the death of a five-year-old. A case report from 1925 recorded that 7 family members who ate green potatoes fell ill from solanine poisoning two days later, resulting in the deaths of the 45-year-old mother and 16-year-old daughter. The other family members recovered fully. In another case report from 1959, four members of a British family exhibited symptoms of solanine poisoning after eating jacket potatoes containing 0.5 mg of solanine per gram of potato. There was a mass solanine poisoning incident in 1979 in the U.K., when 78 adolescent boys at a boarding school exhibited symptoms after eating potatoes that had been stored improperly over the summer. Seventeen of them were hospitalized, but all recovered. The potatoes were determined to have between 0.25 and 0.3 mg of solanine per gram of potato. Another mass poisoning was reported in Canada in 1984, after 61 schoolchildren and teachers showed symptoms of solanine toxicity after consuming baked potatoes with 0.5 mg of solanine per gram of potato. In potatoes Potatoes naturally produce solanine and chaconine, a related glycoalkaloid, as a defense mechanism against insects, disease, and herbivores. Potato leaves, stems, and shoots are naturally high in glycoalkaloids. When potato tubers are exposed to light, they turn green and increase glycoalkaloid production. This is a natural defense to help prevent the uncovered tuber from being eaten. 
The green colour is from chlorophyll, and is itself harmless. However, it is an indication that increased levels of solanine and chaconine may be present. In potato tubers, 30–80% of the solanine develops in and close to the skin, and some potato varieties have high levels of solanine. Some potato diseases, such as late blight, can dramatically increase the levels of glycoalkaloids present in potatoes. Tubers damaged in harvesting and/or transport also produce increased levels of glycoalkaloids; this is believed to be a natural reaction of the plant in response to disease and damage. Tuber glycoalkaloids such as solanine can also be affected by chemical fertilization; for example, several studies have reported that glycoalkaloid content increases with increasing concentrations of nitrogen fertilizer. Green colouring under the skin strongly suggests solanine build-up in potatoes, although each process can occur without the other. A bitter taste in a potato is another – potentially more reliable – indicator of toxicity. Because of the bitter taste and appearance of such potatoes, solanine poisoning is rare outside conditions of food shortage. The symptoms are mainly vomiting and diarrhea, and the condition may be misdiagnosed as gastroenteritis. Most potato poisoning victims recover fully, although fatalities are known, especially when victims are undernourished or do not receive suitable treatment. The United States National Institutes of Health's information on solanine strongly advises against eating potatoes that are green below the skin. In other plants Fatalities are also known from solanine poisoning from other plants in the nightshade family, such as the berries of Solanum dulcamara (woody nightshade). Some sources, such as the California Poison Control Center, have claimed that unripe tomatoes and tomato leaves contain solanine. However, Mendel Friedman of the United States Department of Agriculture contradicts this claim, stating that the tomato alkaloid is tomatine, a relatively benign alkaloid, while solanine is found in potatoes. Food science writer Harold McGee has found scant evidence for tomato toxicity in the medical and veterinary literature. In popular culture Dorothy L. Sayers's short story "The Leopard Lady", in the 1939 collection In the Teeth of the Evidence, features a child poisoned by potato berries injected with solanine to increase their toxicity. See also Lenape (potato) Solanidine References External links a-Chaconine and a-Solanine, Review of Toxicological Literature – "Green tubers and sprouts" Steroidal alkaloids Alkaloid glycosides Steroidal alkaloids found in Solanaceae Nitrogen heterocycles Saponins Plant toxins
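As a back-of-envelope illustration of the consumption figures discussed above, the following sketch recomputes the average daily dose per kilogram of body weight; the 70 kg body mass is an assumed value for an average adult, not given in the text.

# Recomputing the per-kilogram dose figures quoted above; the 70 kg
# body mass is an assumption, not from the source.
body_mass_kg = 70.0
potatoes_g_per_day = 167.0        # average U.S. consumption, per the text
solanine_mg_per_g = 0.075         # average potato, per the text

daily_dose_mg = potatoes_g_per_day * solanine_mg_per_g   # ~12.5 mg/day
dose_mg_per_kg = daily_dose_mg / body_mass_kg            # ~0.18 mg/kg, as quoted
toxic_low_mg_per_kg = 2.0                                # low end of the cited toxic range
print(f"{dose_mg_per_kg:.2f} mg/kg/day, "
      f"~{toxic_low_mg_per_kg / dose_mg_per_kg:.0f}x below the toxic threshold")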
Solanine
[ "Chemistry" ]
2,927
[ "Biomolecules by chemical classification", "Chemical ecology", "Natural products", "Steroidal alkaloids", "Plant toxins", "Alkaloids by chemical classification", "Saponins" ]
1,034,012
https://en.wikipedia.org/wiki/Sufentanil
Sufentanil, sold under the brand name Sufenta among others, is a synthetic opioid analgesic drug approximately 5 to 10 times as potent as its parent drug, fentanyl, and 500 to 1,000 times as potent as morphine. Structurally, sufentanil differs from fentanyl through the addition of a methoxymethyl group on the piperidine ring (which increases potency but is believed to reduce duration of action), and the replacement of the phenyl ring by thiophene. Sufentanil was first synthesized at Janssen Pharmaceutica in 1974. Medical uses Sufentanil has sedative properties and can be used as the analgesic component of an anesthetic regimen during an operation. Because of its extremely high potency, it is often used in surgery and post-operative pain management for patients who are heavily opioid-dependent or opioid-tolerant because of long-term opiate use for chronic pain or illicit opiate use. It is also used for surgical and post-operative pain control in people who are taking high-dose buprenorphine for chronic pain, because its potency and binding affinity are strong enough to displace buprenorphine from the opioid receptors in the central nervous system and provide analgesia. In 2018, the Food and Drug Administration (FDA) approved Dsuvia, a sublingual tablet form of the drug, developed in a collaboration between AcelRx Pharmaceuticals and the United States Department of Defense for use in battlefield settings where intravenous (IV) treatments may not be readily available. The decision to approve this new potent synthetic opioid was criticized by politicians and by the chair of the FDA advisory committee, who feared that the tablets would be easily diverted to the illegal drug market. Dsuvia has since been withdrawn from the market due to "unresolvable manufacturing constraints." Overdose management Because sufentanil is very potent, practitioners must be prepared to reverse the effects of the drug should the patient exhibit symptoms of overdose such as respiratory depression or respiratory arrest. As for all other opioid-based medications, naloxone (trade name Narcan) is the definitive antidote for overdose. Depending on the amount administered, it can reverse the respiratory depression and, if enough is administered, completely reverse the effects of sufentanil. Society and culture Brand names Sufentanil is marketed under various brand names including Dsuvia, Dzuveo, Sufenta, and Sufentil. References Anilides Belgian inventions Ethers Fentanyl General anesthetics Janssen Pharmaceutica Mu-opioid receptor agonists Opioids Piperidines Propionamides Thiophenes
Sufentanil
[ "Chemistry" ]
577
[ "Organic compounds", "Functional groups", "Ethers" ]
1,034,106
https://en.wikipedia.org/wiki/Strain%20%28biology%29
In biology, a strain is a genetic variant, a subtype or a culture within a biological species. Strains are often seen as inherently artificial concepts, characterized by a specific intent for genetic isolation. This is most easily observed in microbiology, where strains are derived from a single cell colony and are typically quarantined by the physical constraints of a Petri dish. Strains are also commonly recognized in virology, botany, and among the rodents used in experimental studies. Microbiology and virology It has been said that "there is no universally accepted definition for the terms 'strain', 'variant', and 'isolate' in the virology community, and most virologists simply copy the usage of terms from others". A strain is a genetic variant or subtype of a microorganism (e.g., a virus, bacterium or fungus). For example, a "flu strain" is a certain biological form of the influenza or "flu" virus. These flu strains are characterized by their differing isoforms of surface proteins. New viral strains can be created due to mutation or swapping of genetic components when two or more viruses infect the same cell in nature. These phenomena are known respectively as antigenic drift and antigenic shift. Microbial strains can also be differentiated by their genetic makeup using metagenomic methods to maximize resolution within species. This has become a valuable tool to analyze the microbiome. Artificial constructs Scientists have modified strains of viruses in order to study their behavior, as in the case of the H5N1 influenza virus. While funding for such research has aroused controversy at times due to safety concerns, leading to a temporary pause, it has subsequently proceeded. In biotechnology, microbial strains have been constructed to establish metabolic pathways suitable for a variety of applications. Historically, a major effort of metabolic research has been devoted to the field of biofuel production. Escherichia coli is the most common species for prokaryotic strain engineering. Scientists have succeeded in establishing viable minimal genomes from which new strains can be developed. These minimal strains provide a near guarantee that experiments on genes outside the minimal framework will not be affected by non-essential pathways. Optimized strains of E. coli are typically used for this application. E. coli strains are also often used as a chassis for the expression of simple proteins. These strains, such as BL21, are genetically modified to minimize protease activity, hence enabling high-efficiency, industrial-scale protein production. Strains of yeasts are the most common subjects of eukaryotic genetic modification, especially with respect to industrial fermentation. Plants The term has no official ranking status in botany; it refers to the collective descendants produced from a common ancestor that share a uniform morphological or physiological character. A strain is a designated group of offspring that are either descended from a modified plant (produced by conventional breeding or by biotechnological means), or which result from genetic mutation. As an example, some rice strains are made by inserting new genetic material into a rice plant; all the descendants of the genetically modified rice plant are a strain with unique genetic information that is passed on to later generations. The strain designation, which is normally a number or a formal name, covers all the plants that descend from the originally modified plant. 
The rice plants in the strain can be bred to other rice strains or cultivars, and if desirable plants are produced, these are further bred to stabilize the desirable traits; the stabilized plants that can be propagated and "come true" (remain identical to the parent plant) are given a cultivar name and released into production to be used by farmers. Rodents A laboratory mouse or rat strain is a group of animals that is genetically uniform. Strains are used in laboratory experiments. Mouse strains can be inbred, mutated, or genetically modified, while rat strains are usually inbred. A given inbred rodent population is considered genetically identical after 20 generations of sibling-mating. Many rodent strains have been developed for a variety of disease models, and they are also often used to test drug toxicity. Insects The common fruit fly (Drosophila melanogaster) was among the first organisms used for genetic analysis, has a simple genome, and is very well understood. It has remained a popular model organism for many other reasons, like the ease of its breeding and maintenance, and the speed and volume of its reproduction. Various specific strains have been developed, including a flightless version with stunted wings (also used in the pet trade as live food for small reptiles and amphibians). See also Genetic isolate Clone (cell biology) Race (biology) Variant (biology) Fish stocks References External links Coli Genetic Stock Center EcoliWiki E. coli strain index International Mouse Strain Resource (IMSR) Rat strain index Microbiology terms Virology Taxa by rank Infraspecific virus taxa Infraspecific bacteria taxa
Strain (biology)
[ "Biology" ]
1,012
[ "Microbiology terms" ]
1,034,326
https://en.wikipedia.org/wiki/Hentriacontane
Hentriacontane, also called untriacontane, is a solid, long-chain alkane hydrocarbon with the structural formula CH3(CH2)29CH3. It is the main component of paraffin wax. It is found in a variety of plants, including peas (Pisum sativum), Acacia senegal, Gymnema sylvestre and others, and also comprises about 8–9% of beeswax. It has 10,660,307,791 constitutional isomers. References External links Hentriacontane at Dr. Duke's Phytochemical and Ethnobotanical Databases Alkanes
Hentriacontane
[ "Chemistry" ]
139
[ "Organic compounds", "Alkanes" ]
1,034,339
https://en.wikipedia.org/wiki/Most%20recent%20common%20ancestor
A most recent common ancestor (MRCA), also known as a last common ancestor (LCA), is the most recent individual from which all organisms of a set are descended. The term is also used in reference to the ancestry of groups of genes (haplotypes) rather than organisms. The MRCA of a set of individuals can sometimes be determined by referring to an established pedigree. However, in general, it is impossible to identify the exact MRCA of a large set of individuals, but an estimate of the time at which the MRCA lived can often be given. Such time to most recent common ancestor (TMRCA) estimates can be given based on DNA test results and established mutation rates as practiced in genetic genealogy, or by reference to a non-genetic, mathematical model or computer simulation. In organisms using sexual reproduction, the matrilineal MRCA and patrilineal MRCA are the MRCAs of a given population considering only matrilineal and patrilineal descent, respectively. The MRCA of a population by definition cannot be older than either its matrilineal or its patrilineal MRCA. In the case of Homo sapiens, the matrilineal and patrilineal MRCA are also known as "Mitochondrial Eve" (mt-MRCA) and "Y-chromosomal Adam" (Y-MRCA) respectively. The age of the human MRCA is unknown. It is no greater than the age of either the Y-MRCA or the mt-MRCA, estimated at 200,000 years. Unlike in pedigrees of individual humans or domesticated lineages where historical parentage is known, in the inference of relationships among species or higher groups of taxa (systematics or phylogenetics), ancestors are not directly observable or recognizable. They are inferences based on patterns of relationship among taxa inferred in a phylogenetic analysis of extant organisms and/or fossils. The last universal common ancestor (LUCA) is the most recent common ancestor of all current life on Earth, estimated to have lived some 3.5 to 3.8 billion years ago (in the Paleoarchean). MRCA of different species The project of a complete description of the phylogenetic relationships among all biological species is dubbed the "tree of life". This involves inference of ages of divergence for all hypothesized clades; for example, the MRCA of all Carnivora (cats, dogs, etc.) is estimated to have lived some 42 million years ago (Miacidae). The concept of the last common ancestor from the perspective of human evolution is described for a popular audience in The Ancestor's Tale by Richard Dawkins. Dawkins lists "concestors" of the human lineage in order of increasing age, including hominin (human–chimpanzee), hominine (human–gorilla), hominid (human–orangutan), hominoid (human–gibbon), and so on in 40 stages in total, down to the last universal common ancestor (human–bacteria). MRCA of a population identified by a single genetic marker It is also possible to consider the ancestry of individual genes (or groups of genes, haplotypes) instead of an organism as a whole. Coalescent theory describes a stochastic model of how the ancestry of such genetic markers maps to the history of a population. Unlike organisms, a gene is passed down from a generation of organisms to the next generation either as perfect replicas of itself or as slightly mutated descendant genes. While organisms have ancestry graphs and progeny graphs via sexual reproduction, a gene has a single chain of ancestors and a tree of descendants. 
An organism produced by sexual cross-fertilization (allogamy) has at least two ancestors (its immediate parents), but a gene always has one ancestor per generation. Patrilineal and matrilineal MRCA Mitochondrial DNA (mtDNA) is nearly immune to sexual mixing, unlike the nuclear DNA whose chromosomes are shuffled and recombined in Mendelian inheritance. Mitochondrial DNA, therefore, can be used to trace matrilineal inheritance and to find the Mitochondrial Eve (also known as the African Eve), the most recent common ancestor of all humans via the mitochondrial DNA pathway. Likewise, the Y chromosome is present as a single sex chromosome in the male individual and is passed on to male descendants without recombination. It can be used to trace patrilineal inheritance and to find the Y-chromosomal Adam, the most recent common ancestor of all humans via the Y-DNA pathway. Approximate dates for Mitochondrial Eve and Y-chromosomal Adam have been established by researchers using genealogical DNA tests. Mitochondrial Eve is estimated to have lived about 200,000 years ago. A paper published in March 2013 determined, with 95% confidence and provided there are no systematic errors in the study's data, that Y-chromosomal Adam lived between 237,000 and 581,000 years ago. The MRCA of all humans alive today would, therefore, need to have lived more recently than either. It is more complicated to infer human ancestry via the autosomal chromosomes: each autosomal chromosome is passed down from parents to children via independent assortment from only one of the two parents, but genetic recombination (chromosomal crossover) mixes genes from non-sister chromatids from both parents during meiosis, thus changing the genetic composition of the chromosome. Time to MRCA estimates Different types of MRCAs are estimated to have lived at different times in the past. These time to MRCA (TMRCA) estimates are also computed differently depending on the type of MRCA being considered. Patrilineal and matrilineal MRCAs (Mitochondrial Eve and Y-chromosomal Adam) are traced by single gene markers, so their TMRCAs are computed based on DNA test results and established mutation rates as practiced in genetic genealogy. The time to the genealogical MRCA (most recent common ancestor by any line of descent) of all living humans cannot be traced genetically because the DNA of the great majority of ancestors is completely lost after a few hundred years. It is therefore computed based on non-genetic, mathematical models and computer simulations. Since Mitochondrial Eve and Y-chromosomal Adam are traced by single genes via a single ancestral parent line, the time to these genetic MRCAs will necessarily be greater than that for the genealogical MRCA. This is because single genes coalesce more slowly than conventional human genealogy traced via both parents, which considers only individual humans, without taking into account whether any gene from the computed MRCA actually survives in every single person in the current population. TMRCA via genetic markers Mitochondrial DNA can be used to trace the ancestry of a set of populations. In this case, populations are defined by the accumulation of mutations on the mtDNA, and special trees are created for the mutations and the order in which they occurred in each population. The tree is formed through the testing of a large number of individuals all over the world for the presence or lack of a certain set of mutations. 
Once this is done, it is possible to determine how many mutations separate one population from another. The number of mutations, together with the estimated mutation rate of the mtDNA in the regions tested, allows scientists to determine the approximate time to MRCA (TMRCA), which indicates the time passed since the populations last shared the same set of mutations or belonged to the same haplogroup. In the case of Y-chromosomal DNA, TMRCA is arrived at in a different way. Y-DNA haplogroups are defined by single-nucleotide polymorphisms in various regions of the Y-DNA. The time to MRCA within a haplogroup is defined by the accumulation of mutations in STR sequences of the Y chromosome of that haplogroup only. Y-DNA network analysis of Y-STR haplotypes showing a non-star cluster indicates Y-STR variability due to multiple founding individuals. Analysis yielding a star cluster can be regarded as representing a population descended from a single ancestor. In this case the variability of the Y-STR sequence, also called the microsatellite variation, can be regarded as a measure of the time passed since the ancestor founded this particular population. The descendants of Genghis Khan or one of his ancestors represent a famous star cluster that can be dated back to the time of Genghis Khan. TMRCA calculations are considered critical evidence when attempting to determine migration dates of various populations as they spread around the world. For example, if a mutation is deemed to have occurred 30,000 years ago, then this mutation should be found amongst all populations that diverged after this date. If archeological evidence indicates cultural spread and the formation of regionally isolated populations, then this must be reflected in the isolation of subsequent genetic mutations in this region. If genetic divergence and regional divergence coincide, it can be concluded that the observed divergence is due to migration as evidenced by the archaeological record. However, if the date of genetic divergence occurs at a different time than the archaeological record suggests, then scientists will have to look at alternate archaeological evidence to explain the genetic divergence. The issue is best illustrated in the debate surrounding demic diffusion versus cultural diffusion during the European Neolithic. TMRCA of all living humans The age of the MRCA of all living humans is unknown. It is necessarily younger than the age of either the matrilineal or the patrilineal MRCA, both of which have an estimated age of between roughly 100,000 and 200,000 years ago. A study by mathematicians Joseph T. Chang, Douglas Rohde and Steve Olson used a theoretical model to calculate that the MRCA may have lived remarkably recently, possibly as recently as 2,000 years ago. It concluded that the MRCA of all humans probably lived in East Asia, which would have given them key access to extremely isolated populations in Australia and the Americas. Possible locations for the MRCA include places such as the Chukchi and Kamchatka Peninsulas that are close to Alaska, places such as Indonesia and Malaysia that are close to Australia, or a place such as Taiwan or Japan that is more intermediate to Australia and the Americas. European colonization of the Americas and Australia was found by Chang to be too recent to have had a substantial impact on the age of the MRCA. In fact, if the Americas and Australia had never been discovered by Europeans, the MRCA would be only about 2.3% further back in the past than it is. 
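As a toy illustration of the TMRCA logic described above, the sketch below applies the standard molecular clock relation t = d / (2μ), where d is the per-site divergence between two sequences and μ is the assumed per-site, per-year mutation rate (mutations accumulate independently on both lineages since their split, hence the factor of 2). All of the input numbers are illustrative, not measured values.

# Toy TMRCA estimate under a strict molecular clock.
pairwise_diffs = 30        # sites differing between two mtDNA sequences (illustrative)
sites_compared = 16000     # approximate length of the compared mtDNA region
mu = 1.7e-8                # assumed substitutions per site per year

d = pairwise_diffs / sites_compared          # per-site divergence
tmrca_years = d / (2 * mu)                   # split both ways across two lineages
print(f"TMRCA ~ {tmrca_years:,.0f} years")   # ~55,000 years for these inputs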
Note that the age of the MRCA of a population does not correspond to a population bottleneck, let alone a "first couple". It rather reflects the presence of a single individual with high reproductive success in the past, whose genetic contribution has become pervasive throughout the population over time. It is also incorrect to assume that the MRCA passed all, or indeed any, genetic information to every living person. Through sexual reproduction, an ancestor passes half of his or her genes to each descendant in the next generation; in the absence of pedigree collapse, after just 32 generations the contribution of a single ancestor would be on the order of 2^−32, a proportion corresponding to less than a single base pair within the human genome. Identical ancestors point The MRCA is the most recent common ancestor shared by all individuals in the population under consideration. This MRCA may well have contemporaries who are also ancestral to some but not all of the extant population. The identical ancestors point is a point in the past more remote than the MRCA, at which time there are no longer organisms that are ancestral to some but not all of the modern population. Due to pedigree collapse, modern individuals may still exhibit clustering, owing to vastly different contributions from each ancestral population. See also Cladistics Common descent Coalescent theory, a retrospective model of population genetics Crown group Genealogy, the study of families and the tracing of their lineages and history Genetic distance, the genetic divergence between species or between populations within a species Lowest common ancestor, an analogous concept in graph theory and computer science Phylogenetic tree, a branching diagram or "tree" showing the inferred evolutionary relationships among various biological species Timeline of evolution, outlines the major events in the development of life on the planet Earth Timeline of human evolution, outlines the major events in the development of the human species Last universal common ancestor, the most recent common ancestor of all life Notes References Further reading Evolutionary biology Genetic genealogy Genealogy Phylogenetics Population genetics Events in biological evolution
Most recent common ancestor
[ "Biology" ]
2,589
[ "Evolutionary biology", "Taxonomy (biology)", "Bioinformatics", "Phylogenetics", "Genealogy" ]
1,034,358
https://en.wikipedia.org/wiki/Chirplet%20transform
In signal processing, the chirplet transform is an inner product of an input signal with a family of analysis primitives called chirplets. Similar to the wavelet transform, chirplets are usually generated from (or can be expressed as being from) a single mother chirplet (analogous to the so-called mother wavelet of wavelet theory). Definitions The term chirplet transform was coined by Steve Mann, as the title of the first published paper on chirplets. The term chirplet itself (apart from chirplet transform) was also used by Steve Mann, Domingo Mihovilovic, and Ronald Bracewell to describe a windowed portion of a chirp function. In Mann's words, the chirplet transform represents a rotated, sheared, or otherwise transformed tiling of the time–frequency plane. Although chirp signals have been known for many years in radar, pulse compression, and the like, the first published reference to the chirplet transform described specific signal representations based on families of functions related to one another by time-varying frequency modulation or frequency-varying time modulation, in addition to time and frequency shifting and scale changes. In that paper, the Gaussian chirplet transform was presented as one such example, together with a successful application to ice fragment detection in radar (improving target detection results over previous approaches). The term chirplet (but not the term chirplet transform) was also proposed for a similar transform, apparently independently, by Mihovilovic and Bracewell later that same year. Applications The first practical application of the chirplet transform was in water-human-computer interaction (WaterHCI) for marine safety, to assist vessels in navigating through ice-infested waters, using marine radar to detect growlers (iceberg fragments too small to be visible on conventional radar, yet large enough to damage a vessel). Other applications of the chirplet transform in WaterHCI include the SWIM (Sequential Wave Imprinting Machine). More recently, other practical applications have been developed, including image processing (e.g. where there is periodic structure imaged through projective geometry), the excision of chirp-like interference in spread spectrum communications, EEG processing, and chirplet time-domain reflectometry. Extensions The warblet transform is a particular example of the chirplet transform, introduced by Mann and Haykin in 1992 and now widely used. It provides a signal representation based on cyclically varying frequency-modulated signals (warbling signals). See also Time–frequency representation Other time–frequency transforms Fractional Fourier transform Short-time Fourier transform Wavelet transform References LEM, Logon Expectation Maximization, introduces Logon Expectation Maximization (LEM) and Radial Basis Functions (RBF) in time–frequency space. Osaka Kyoiku, Gabor, wavelet and chirplet transforms (PDF). J. "Richard" Cui, et al., Time–frequency analysis of visual evoked potentials using chirplet transform, IEE Electronics Letters, vol. 41, no. 4, pp. 217–218, 2005. Florian Bossmann, Jianwei Ma, Asymmetric chirplet transform—Part 2: phase, frequency, and chirp rate, Geophysics, 2016, 81 (6), V425–V439. Florian Bossmann, Jianwei Ma, Asymmetric chirplet transform for sparse representation of seismic data, Geophysics, 2015, 80 (6), WD89–WD100. External links DiscreteTFDs – software for computing chirplet decompositions and time–frequency distributions The Chirplet Transform (web tutorial and info). 
Transforms Fourier analysis Time–frequency analysis Image processing Radar signal processing
Chirplet transform
[ "Physics", "Mathematics" ]
783
[ "Functions and mappings", "Spectrum (physical sciences)", "Time–frequency analysis", "Frequency-domain analysis", "Mathematical objects", "Mathematical relations", "Transforms" ]
1,034,370
https://en.wikipedia.org/wiki/Dodecane
Dodecane (also known as dihexyl, bihexyl, adakane 12, or duodecane) is an oily liquid n-alkane hydrocarbon with the chemical formula C12H26 (which has 355 isomers). It is used as a solvent, distillation chaser, and scintillator component. It is used as a diluent for tributyl phosphate (TBP) in nuclear reprocessing plants. Combustion reaction The combustion reaction of dodecane is as follows: C12H26(l) + 18.5 O2(g) → 12 CO2(g) + 13 H2O(g) ΔH° = −7513 kJ One litre of fuel needs about 15 kg of air to burn (2.6 kg of oxygen), and generates 2.3 kg (or 1.2 m3) of CO2 upon complete combustion. Jet fuel surrogate In recent years, n-dodecane has garnered attention as a possible surrogate for kerosene-based fuels such as Jet-A, S-8, and other conventional aviation fuels. It is considered a second-generation fuel surrogate designed to emulate the laminar flame speed, largely supplanting n-decane, primarily due to its higher molecular mass and lower hydrogen-to-carbon ratio which better reflect the n-alkane content of jet fuels. See also Higher alkanes Kerosene List of isomers of dodecane References External links Material Safety Data Sheet for Dodecane Dodecane, Dr. Duke's Phytochemical and Ethnobotanical Databases Alkanes Hydrocarbon solvents
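The oxygen and CO2 figures quoted above follow from simple stoichiometry on the combustion equation. A short check, assuming a liquid density of about 0.75 g/mL and a molar gas volume of 22.4 L/mol at STP (both standard literature values, not numbers given in the text):

```python
# Stoichiometry check for: C12H26(l) + 18.5 O2(g) -> 12 CO2(g) + 13 H2O(g)
M_C, M_H, M_O = 12.011, 1.008, 15.999   # standard atomic masses, g/mol
M_fuel = 12 * M_C + 26 * M_H            # ~170.3 g/mol dodecane
M_O2, M_CO2 = 2 * M_O, M_C + 2 * M_O    # 32.0 and 44.0 g/mol

density = 0.75                          # g/mL, assumed literature value
n_fuel = 1000 * density / M_fuel        # moles of dodecane in one litre

m_O2 = n_fuel * 18.5 * M_O2 / 1000      # -> ~2.6 kg of oxygen
m_CO2 = n_fuel * 12 * M_CO2 / 1000      # -> ~2.3 kg of CO2
V_CO2 = n_fuel * 12 * 22.4 / 1000       # -> ~1.2 m^3 of CO2 at STP

print(f"{m_O2:.2f} kg O2, {m_CO2:.2f} kg CO2, {V_CO2:.2f} m^3 CO2")
```

This reproduces the 2.6 kg of oxygen and 2.3 kg (about 1.2 m³) of CO2 per litre stated above.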
Dodecane
[ "Chemistry" ]
358
[ "Organic compounds", "Alkanes" ]
1,034,397
https://en.wikipedia.org/wiki/Tridecane
Tridecane or n-tridecane is an alkane with the chemical formula CH3(CH2)11CH3. Tridecane is a combustible colourless liquid. In industry, it has no specific value aside from being a component of various fuels and solvents; in the research laboratory, tridecane is used as a distillation chaser. Natural occurrence Nymphs of the southern green shield bug produce tridecane as a dispersion/aggregation pheromone, which possibly serves as a defense against predators. It is also the main component of the defensive fluid produced by the stink bug Cosmopepla bimaculata. See also Higher alkanes List of isomers of tridecane References External links Material Safety Data Sheet for Tridecane Phytochemical and Ethnobotanical Databases Alkanes Hydrocarbon solvents
Tridecane
[ "Chemistry" ]
191
[ "Organic compounds", "Alkanes" ]
1,034,470
https://en.wikipedia.org/wiki/Scattering%20amplitude
In quantum physics, the scattering amplitude is the probability amplitude of the outgoing spherical wave relative to the incoming plane wave in a stationary-state scattering process. At large distances from the centrally symmetric scattering center, the wave is described by the wavefunction ψ(r) = e^{ikz} + f(θ) e^{ikr}/r, where r is the position vector, r = |r|, e^{ikz} is the incoming plane wave with the wavenumber k along the z axis, f(θ) e^{ikr}/r is the outgoing spherical wave, θ is the scattering angle (angle between the incident and scattered direction), and f(θ) is the scattering amplitude. The dimension of the scattering amplitude is length. The scattering amplitude is a probability amplitude; the differential cross-section as a function of scattering angle is given as its modulus squared, dσ/dΩ = |f(θ)|². The asymptotic form of the wave function in an arbitrary external field takes the form ψ = e^{ik r·n} + f(n, n′) e^{ikr}/r, where n is the direction of incident particles and n′ is the direction of scattered particles. Unitary condition When conservation of the number of particles holds true during scattering, it leads to a unitary condition for the scattering amplitude. In the general case, we have f(n, n′) − f*(n′, n) = (ik/2π) ∫ f(n, n″) f*(n′, n″) dΩ″. The optical theorem follows from here by setting n = n′: Im f(n, n) = (k/4π) σ. In the centrally symmetric field, the unitary condition becomes Im f(θ) = (k/4π) ∫ f(γ) f*(γ′) dΩ″, where γ and γ′ are the angles between n and n′ and some direction n″. This condition puts a constraint on the allowed form for f(θ), i.e., the real and imaginary parts of the scattering amplitude are not independent in this case. For example, if |f(θ)| in f = |f| e^{iα} is known (say, from the measurement of the cross section), then α(θ) can be determined such that f(θ) is uniquely determined within the alternative f(θ) → −f*(θ). Partial wave expansion In the partial wave expansion the scattering amplitude is represented as a sum over the partial waves, f = Σ_{ℓ=0}^∞ (2ℓ + 1) f_ℓ P_ℓ(cos θ), where f_ℓ is the partial scattering amplitude and P_ℓ are the Legendre polynomials. The partial amplitude can be expressed via the partial wave S-matrix element S_ℓ (= e^{2iδ_ℓ}) and the scattering phase shift δ_ℓ as f_ℓ = (S_ℓ − 1)/(2ik) = e^{iδ_ℓ} sin δ_ℓ / k = 1/(k cot δ_ℓ − ik). Then the total cross section σ = ∫ |f(θ)|² dΩ can be expanded as σ = Σ σ_ℓ, where σ_ℓ = (4π/k²)(2ℓ + 1) sin² δ_ℓ is the partial cross section. The total cross section is also equal to σ = (4π/k) Im f(0) due to the optical theorem. For θ ≠ 0, the identity Σ (2ℓ + 1) P_ℓ(cos θ) = 0 allows one to write f(θ) = (1/2ik) Σ_{ℓ=0}^∞ (2ℓ + 1) S_ℓ P_ℓ(cos θ). X-rays The scattering length for X-rays is the Thomson scattering length or classical electron radius, r₀. Neutrons The nuclear neutron scattering process involves the coherent neutron scattering length, often denoted b. Quantum mechanical formalism A quantum mechanical approach is given by the S-matrix formalism. Measurement The scattering amplitude can be determined by the scattering length in the low-energy regime. See also Levinson's theorem Veneziano amplitude Plane wave expansion References Neutron X-rays Electron Scattering Diffraction Quantum mechanics
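To make the partial-wave formulas concrete, the following sketch evaluates f(θ) and the partial cross sections from a set of phase shifts; the δ_ℓ values are made-up inputs for illustration, not data. The optical theorem serves as a built-in consistency check.

```python
import numpy as np
from numpy.polynomial.legendre import legval

k = 1.0                              # wavenumber (inverse length units)
delta = np.array([0.8, 0.4, 0.1])    # assumed phase shifts for l = 0, 1, 2
l = np.arange(len(delta))

f_l = np.exp(1j * delta) * np.sin(delta) / k          # partial amplitudes
sigma_l = 4 * np.pi / k**2 * (2 * l + 1) * np.sin(delta) ** 2
sigma = sigma_l.sum()                                 # total cross section

# f(theta) = sum_l (2l+1) f_l P_l(cos theta); legval sums c_l * P_l(x).
theta = np.linspace(0.0, np.pi, 5)
f_theta = legval(np.cos(theta), (2 * l + 1) * f_l)

# Optical theorem: sigma should equal (4*pi/k) * Im f(0).
print(sigma, 4 * np.pi / k * f_theta[0].imag)
```

The two printed numbers agree, illustrating that the imaginary part of the forward amplitude carries the total cross section, exactly as the unitary condition requires.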
Scattering amplitude
[ "Physics", "Chemistry", "Materials_science" ]
485
[ "Electron", "Molecular physics", "Spectrum (physical sciences)", "X-rays", "Theoretical physics", "Quantum mechanics", "Electromagnetic spectrum", "Scattering", "Diffraction", "Crystallography", "Particle physics", "Condensed matter physics", "Nuclear physics", "Spectroscopy" ]
1,034,471
https://en.wikipedia.org/wiki/The%20Planiverse
The Planiverse is a novel by A. K. Dewdney, written in 1984 about a two-dimensional world. Plot In the spirit of Edwin Abbott Abbott's Flatland, Dewdney and his computer science students simulate a two-dimensional world with a complex ecosystem. To their surprise, they find their artificial 2D universe has somehow accidentally become a means of communication with an actual 2D world: Arde. They make a sort of "telepathic" contact with "YNDRD", referred to by the students as Yendred, a highly philosophical Ardean, as he begins a journey across the western half, Punizla, of the single continent Ajem Kollosh to learn more about the spiritual beliefs of the people of the East, Vanizla. Yendred mistakes Dewdney's class for "spirits" and takes great interest in communicating with them. The students and narrator communicate with Yendred by typing on the keyboard; Yendred's answers appear on the computer's printout. The name Yendred (or "Yendwed", as pronounced by one of the students, who has a speech impediment) is simply "Dewdney" reversed. Written as a travelogue, Yendred's journey through the West takes him through several cities. He visits the Punizlan Institute for Technology and Science, where Arde's technology is explored in great detail. For example, all houses are underground, so as not to be demolished by the periodic 2D rivers; nails are useless for attaching two objects, so tape and glue are used instead; most Ardean creatures cannot have deuterostomic digestive tracts since they would split into two; even games such as Go have one-dimensional Alak analogues. An appendix explains various other aspects of two-dimensional science and technology which could not fit into the main story. The underlying allegory culminates in Yendred's arrival at the watershed of the continent and the planet's only building above ground, where he at last finds Drabk, an Ardean who professes "knowledge of the Beyond", and teaches Yendred to fly. Yendred finds that to keep contact with Earth is no longer of benefit, and contact with Arde is lost. Development In 1977, Dewdney was inspired by an allegory of a two-dimensional universe, and decided to expand upon the physics and chemistry of such a universe. He published a short monograph in 1979 called Two-Dimensional Science and Technology. This was reviewed by Martin Gardner in his July 1980 "Mathematical Games" column in Scientific American, and shortly after this, all copies of the monograph were sold out. In 1981, following the success of the monograph, Dewdney published A Symposium on Two-Dimensional Science and Technology, which contained suggestions for how a two-dimensional universe would work from scientists and non-scientists on varied subjects. Dewdney wrote The Planiverse as a frame story in which to display the scientific and technical features from these previous works, as well as an allegory for his search for a reality deeper than that of scientific enquiry, and his subsequent conversion to Sufism. Reception Dave Langford reviewed The Planiverse for White Dwarf #55, and stated that "This delightful book will be inspiring 2D game scenarios any second now." Kirkus Reviews considered it "an ingenious intellectual exercise—amusing, edifying, sometimes tedious". At Tor.com, Jason Shiga found it to be a "tour de force followup" to Flatland, and found the appendix to be the "most impressive section" of the book. See also Creatures (inspired by The Planiverse) Flatland - 1884 satirical novella by Edwin Abbott Abbott.
Flatterland - 2001 book by Ian Stewart, a sequel to Flatland. Spaceland Sphereland References Further reading Begley, Sharon. 1982. "Life in Two Dimensions." Newsweek. January 18, pp. 84–85. Dewdney, A.K. 1979. "Exploring the Planiverse." Journal of Recreational Mathematics. 12:16–20. Dewdney, A.K. 2000. "The Planiverse Project: Then and Now." The Mathematical Intelligencer. 22:46–51. Gardner, Martin. 1980/2001. "The Wonders of a Planiverse." Scientific American, July 1980; reprinted with appendix in The Colossal Book of Mathematics (New York: Norton). Sandberg-Diment, Erik. 1984. "Review of Dewdney 1984/2001". New York Times, November 6. External links Author's bibliography Kontrol An online action game and 2D universe simulation inspired by The Planiverse 1984 Canadian novels 1984 science fiction novels Canadian science fiction novels Fictional dimensions Mathematics fiction books Novels about mathematics Speculative evolution
The Planiverse
[ "Mathematics", "Biology" ]
1,005
[ "Hypothetical life forms", "Mathematics fiction books", "Recreational mathematics", "Speculative evolution", "Biological hypotheses" ]
1,034,540
https://en.wikipedia.org/wiki/Tetradecane
Tetradecane is an alkane hydrocarbon with the chemical formula CH3(CH2)12CH3. Tetradecane has 1858 structural isomers. See also Higher alkanes List of isomers of tetradecane References External links Material Safety Data Sheet for Tetradecane http://www.ars-grin.gov/cgi-bin/duke/chemical.pl?TETRADECANE Alkanes
Tetradecane
[ "Chemistry" ]
99
[ "Organic compounds", "Alkanes" ]
1,034,555
https://en.wikipedia.org/wiki/Pentadecane
Pentadecane is an alkane hydrocarbon with the chemical formula C15H32. It can be monoterminally oxidized to 1-pentadecanol. References Alkanes
Pentadecane
[ "Chemistry" ]
45
[ "Organic compounds", "Alkanes" ]
1,034,567
https://en.wikipedia.org/wiki/Hexadecane
Hexadecane (also called cetane) is an alkane hydrocarbon with the chemical formula C16H34. Hexadecane consists of a chain of 16 carbon atoms, with three hydrogen atoms bonded to the two end carbon atoms, and two hydrogens bonded to each of the 14 other carbon atoms. Cetane number Cetane is often used as a shorthand for cetane number, a measure of the combustion of diesel fuel. Cetane ignites very easily under compression; for this reason, it is assigned a cetane number of 100, and serves as a reference for other fuel mixtures. Hexadecyl radical Hexadecyl is an alkyl radical of carbon and hydrogen derived from hexadecane, with formula C16H33 and with mass 225.433, occurring especially in cetyl alcohol. It confers strong hydrophobicity on molecules containing it. Carboplatin modified with hexadecyl and polyethylene glycol has increased liposolubility and PEGylation, proposed to be useful in chemotherapy, specifically non-small-cell lung cancer. Hexadecyl has been used since 1982 for radiolabelling, and this continues to be useful, for example for radiolabelling exosomes and hydrogels, and for positron emission tomography. Hexadecyl platelet-activating factor has profound effects on the lung, and hexadecyl glyceryl ether participates in the biosynthesis of plasmalogens. See also Cetane index Isocetane Higher alkanes References Cited sources External links Vapor pressure and liquid density calculation Technique to determine hexadecane transfer Alkanes
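The masses quoted above follow directly from standard atomic masses; a quick check (the atomic mass values are the usual IUPAC figures, not taken from this text):

```python
M_C, M_H = 12.011, 1.008            # standard atomic masses, g/mol

hexadecane = 16 * M_C + 34 * M_H    # C16H34 -> ~226.45
hexadecyl  = 16 * M_C + 33 * M_H    # C16H33 radical -> ~225.44

print(f"C16H34: {hexadecane:.3f}")
print(f"C16H33: {hexadecyl:.3f}")   # consistent with the 225.433 quoted above
```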
Hexadecane
[ "Chemistry" ]
365
[ "Organic compounds", "Alkanes" ]
1,034,578
https://en.wikipedia.org/wiki/Heptadecane
Heptadecane is an organic compound, an alkane hydrocarbon with the chemical formula C17H36. The name may refer to any of 24894 theoretically possible structural isomers, or to a mixture thereof. The unbranched isomer is normal or n-heptadecane, CH3(CH2)15CH3. In the IUPAC nomenclature, the name of this compound is simply heptadecane, since the other isomers are viewed and named as alkyl-substituted versions of smaller alkanes. The most compact and branched isomer would be tetra-tert-butylmethane, but its existence is believed to be impossible due to steric hindrance. Indeed, it is believed to be the smallest "impossible" alkane. References External links List of plant species containing heptadecane, Dr. Duke's Phytochemical and Ethnobotanical Databases The smallest alkanes which cannot be made, the goodman group, university of cambridge Alkanes
Heptadecane
[ "Chemistry" ]
220
[ "Organic compounds", "Alkanes" ]
1,034,588
https://en.wikipedia.org/wiki/Octadecane
Octadecane is an alkane hydrocarbon with the chemical formula CH3(CH2)16CH3. Properties Octadecane is distinguished by being the alkane with the lowest carbon number that is unambiguously solid at room temperature and pressure. References External links Phytochemical and Ethnobotanical Databases Alkanes
Octadecane
[ "Chemistry" ]
76
[ "Organic compounds", "Alkanes" ]
1,034,598
https://en.wikipedia.org/wiki/Nonadecane
Nonadecane is an alkane hydrocarbon with the chemical formula CH3(CH2)17CH3, simplified to C19H40. Occurrence in nature Nonadecane is found in Rosa × damascena (8%-15%), Rosa × alba (7%-13%) and n-Paraffin rich high altitude hybrids of both (20%-55%). See also Rose oil Paraffin References External links Material Safety Data Sheet for Nonadecane Activities of a Specific Chemical Query Alkanes
Nonadecane
[ "Chemistry" ]
114
[ "Organic compounds", "Organic compound stubs", "Alkanes", "Organic chemistry stubs" ]
1,034,602
https://en.wikipedia.org/wiki/Icosane
Icosane (alternative spellings eicosane and eichosane) is an alkane with the chemical formula C20H42. It has 366,319 constitutional isomers. n-Icosane (the straight-chain structural isomer of icosane) is the shortest compound found in the paraffin waxes used to form candles. Despite its size, icosane shares the traits of its smaller alkane counterparts: it is a colorless, non-polar molecule, nearly unreactive except when it burns, and it is less dense than, and insoluble in, water. Because it is non-polar, it can engage only in weak intermolecular bonding (hydrophobic/van der Waals forces). Icosane's phase transition at a moderate temperature makes it a candidate phase change material (PCM), which can be used to store thermal energy and control temperature. It can be detected in the body odor of persons suffering from Parkinson's disease. Naming IUPAC currently recommends icosane, whereas Chemical Abstracts Service and Beilstein use eicosane. See also Perillaldehyde References External links Icosane at Dr. Duke's Phytochemical and Ethnobotanical Databases Alkanes
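The isomer count quoted above can be reproduced by the classical Pólya–Otter enumeration of unlabelled trees with maximum vertex degree 4 (OEIS A000602); the following is a compact sketch of that calculation, written for this text rather than taken from any cited source. It also reproduces the counts quoted for dodecane (355), tetradecane (1858), and heptadecane (24894).

```python
# Constitutional isomers of the alkanes C_n H_(2n+2) = unlabelled trees
# on n vertices with maximum degree 4 (OEIS A000602).
N = 21  # series truncation: counts for C1..C20

def mul(a, b):
    """Product of two truncated integer power series."""
    out = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                out[i + j] += ai * b[j]
    return out

def subs(a, k):
    """a(x^k), truncated to N coefficients."""
    out = [0] * N
    for i, ai in enumerate(a):
        if i * k < N:
            out[i * k] = ai
    return out

# B(x) = 1 + x*Z(S3)[B]: rooted trees, each node with at most 3 children
# (the constant term 1 counts the empty tree). Iterate to a fixed point.
B = [1] + [0] * (N - 1)
for _ in range(N):
    z3 = [(p + 3 * q + 2 * r) // 6
          for p, q, r in zip(mul(mul(B, B), B), mul(B, subs(B, 2)), subs(B, 3))]
    B = [1] + z3[:N - 1]

R = [0] + B[1:]          # planted (non-empty) rooted trees
Bsq = mul(B, B)
# V(x) = x*Z(S4)[B]: trees rooted at a vertex of degree at most 4.
z4 = [(a + 6 * b + 3 * c + 8 * d + 6 * e) // 24
      for a, b, c, d, e in zip(mul(Bsq, Bsq), mul(Bsq, subs(B, 2)),
                               mul(subs(B, 2), subs(B, 2)),
                               mul(B, subs(B, 3)), subs(B, 4))]
V = [0] + z4[:N - 1]
# Otter's dissimilarity formula removes the double-counting of rootings.
T = [v - (p - q) // 2 for v, p, q in zip(V, mul(R, R), subs(R, 2))]

print(T[12], T[14], T[17], T[20])   # 355 1858 24894 366319
```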
Icosane
[ "Chemistry" ]
272
[ "Organic compounds", "Alkanes" ]
1,034,699
https://en.wikipedia.org/wiki/Constitutive%20equation
In physics and engineering, a constitutive equation or constitutive relation is a relation between two or more physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance or field, and approximates its response to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection between applied stresses or loads to strains or deformations. Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior. See the article Linear response function. Mechanical properties of matter The first constitutive equation (constitutive law) was developed by Robert Hooke and is known as Hooke's law. It deals with the case of linear elastic materials. Following this discovery, this type of equation, often called a "stress-strain relation" in this example, but also called a "constitutive assumption" or an "equation of state", was commonly used. Walter Noll advanced the use of constitutive equations, clarifying their classification and the role of invariance requirements, constraints, and definitions of terms like "material", "isotropic", "aeolotropic", etc. The class of "constitutive relations" of the form stress rate = f (velocity gradient, stress, density) was the subject of Walter Noll's dissertation in 1954 under Clifford Truesdell. In modern condensed matter physics, the constitutive equation plays a major role. See Linear constitutive equations and Nonlinear correlation functions. Definitions Deformation of solids Friction Friction is a complicated phenomenon. Macroscopically, the friction force F between the interface of two materials can be modelled as proportional to the reaction force R at a point of contact between two interfaces through a dimensionless coefficient of friction μf, which depends on the pair of materials: F = μf R. This can be applied to static friction (friction preventing two stationary objects from slipping on their own), kinetic friction (friction between two objects scraping/sliding past each other), or rolling (frictional force which prevents slipping but causes a torque on a round object). Stress and strain The stress-strain constitutive relation for linear materials is commonly known as Hooke's law. In its simplest form, the law defines the spring constant (or elasticity constant) k in a scalar equation, stating the tensile/compressive force is proportional to the extended (or contracted) displacement x: F = kx, meaning the material responds linearly.
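A minimal numeric sketch of the two scalar laws just given, F = μf·R and F = kx; all input values are arbitrary illustrations:

```python
# Friction: F = mu_f * R
mu_f = 0.4                  # coefficient of friction (illustrative value)
R = 50.0                    # normal reaction force, N
F_friction = mu_f * R       # -> 20.0 N

# Hooke's law: F = k * x
k = 200.0                   # spring constant, N/m
x = 0.03                    # extension, m
F_spring = k * x            # -> 6.0 N

print(F_friction, F_spring)
```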
Equivalently, in terms of the stress σ, Young's modulus E, and strain ε (dimensionless): σ = Eε. In general, forces which deform solids can be normal to a surface of the material (normal forces), or tangential (shear forces); this can be described mathematically using the stress tensor: σij = Cijkl εkl, with the inverse relation εij = Sijkl σkl, where C is the elasticity tensor and S is the compliance tensor. Solid-state deformations Several classes of deformations in elastic materials are the following: Plastic The applied force induces non-recoverable deformations in the material when the stress (or elastic strain) reaches a critical magnitude, called the yield point. Elastic The material recovers its initial shape after deformation. Viscoelastic If the time-dependent resistive contributions are large, and cannot be neglected. Rubbers and plastics have this property, and certainly do not satisfy Hooke's law. In fact, elastic hysteresis occurs. Anelastic If the material is close to elastic, but the applied force induces additional time-dependent resistive forces (i.e. they depend on the rate of change of extension/compression, in addition to the extension/compression). Metals and ceramics have this characteristic, but it is usually negligible, although not so much when heating due to friction occurs (such as vibrations or shear stresses in machines). Hyperelastic The applied force induces displacements in the material following a strain energy density function. Collisions The relative speed of separation vseparation of an object A after a collision with another object B is related to the relative speed of approach vapproach by the coefficient of restitution, defined by Newton's experimental impact law: e = vseparation/vapproach, which depends on the materials A and B are made from, since the collision involves interactions at the surfaces of A and B. Usually 0 ≤ e ≤ 1, in which e = 1 for completely elastic collisions and e = 0 for completely inelastic collisions. It is possible for e > 1 to occur, for superelastic (or explosive) collisions. Deformation of fluids The drag equation gives the drag force D on an object of cross-section area A moving through a fluid of density ρ at velocity v (relative to the fluid): D = (1/2) cd ρ A v², where the drag coefficient (dimensionless) cd depends on the geometry of the object and the drag forces at the interface between the fluid and object. For a Newtonian fluid of viscosity μ, the shear stress τ is linearly related to the strain rate (transverse flow velocity gradient) ∂u/∂y (units s−1). In a uniform shear flow: τ = μ ∂u/∂y, with u(y) the variation of the flow velocity u in the cross-flow (transverse) direction y. In general, for a Newtonian fluid, the relationship between the elements τij of the shear stress tensor and the deformation of the fluid is given by τij = 2μ(eij − (1/3)Δδij), with eij = (1/2)(∂vi/∂xj + ∂vj/∂xi) and Δ = Σk ekk, where vi are the components of the flow velocity vector in the corresponding xi coordinate directions, eij are the components of the strain rate tensor, Δ is the volumetric strain rate (or dilatation rate) and δij is the Kronecker delta. The ideal gas law is a constitutive relation in the sense the pressure p and volume V are related to the temperature T, via the number of moles n of gas: pV = nRT, where R is the gas constant (J⋅K−1⋅mol−1). Electromagnetism Constitutive equations in electromagnetism and related areas In both classical and quantum physics, the precise dynamics of a system form a set of coupled differential equations, which are almost always too complicated to be solved exactly, even at the level of statistical mechanics.
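Before turning to the electromagnetic case, here is a quick numeric sketch of the fluid-side relations just given (drag equation, Newtonian shear, ideal gas law); all input values are arbitrary illustrative choices:

```python
# Drag equation: D = 1/2 * c_d * rho * A * v^2
c_d, rho, A, v = 0.47, 1.225, 0.01, 10.0   # sphere in air; m^2; m/s
D = 0.5 * c_d * rho * A * v ** 2           # drag force, N

# Newtonian fluid in uniform shear: tau = mu * du/dy
mu, dudy = 1.0e-3, 100.0                   # ~water viscosity (Pa s); shear rate (1/s)
tau = mu * dudy                            # shear stress, Pa

# Ideal gas law: p = n R T / V
n, R_gas, T, V = 1.0, 8.314, 298.15, 0.0245
p = n * R_gas * T / V                      # ~1.0e5 Pa, about one atmosphere

print(D, tau, p)
```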
In the context of electromagnetism, the preceding remark applies not only to the dynamics of free charges and currents (which enter Maxwell's equations directly), but also to the dynamics of bound charges and currents (which enter Maxwell's equations through the constitutive relations). As a result, various approximation schemes are typically used. For example, in real materials, complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, plasma modeling. An entire physical apparatus for dealing with these matters has developed. See for example, linear response theory, Green–Kubo relations and Green's function (many-body theory). These complex theories provide detailed formulas for the constitutive relations describing the electrical response of various materials, such as permittivities, permeabilities, conductivities and so forth. It is necessary to specify the relations between the displacement field D and the electric field E, and between the magnetic field intensity H and the magnetic flux density B, before doing calculations in electromagnetism (i.e. applying Maxwell's macroscopic equations). These equations specify the response of bound charge and current to the applied fields and are called constitutive relations. Determining the constitutive relationship between the auxiliary fields D and H and the E and B fields starts with the definition of the auxiliary fields themselves: D = ε0E + P and H = B/μ0 − M, where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound current respectively. Before getting to how to calculate M and P it is useful to examine the following special cases. Without magnetic or dielectric materials In the absence of magnetic or dielectric materials, the constitutive relations are simple: D = ε0E and H = B/μ0, where ε0 and μ0 are two universal constants, called the permittivity of free space and permeability of free space, respectively. Isotropic linear materials In an (isotropic) linear material, where P is proportional to E, and M is proportional to B, the constitutive relations are also straightforward. In terms of the polarization P and the magnetization M they are: P = ε0χeE and M = χmH, where χe and χm are the electric and magnetic susceptibilities of a given material respectively. In terms of D and H the constitutive relations are: D = εE and H = B/μ, where ε and μ are constants (which depend on the material), called the permittivity and permeability, respectively, of the material. These are related to the susceptibilities by ε = ε0(1 + χe) and μ = μ0(1 + χm). General case For real-world materials, the constitutive relations are not linear, except approximately. Calculating the constitutive relations from first principles involves determining how P and M are created from a given E and B. These relations may be empirical (based directly upon measurements), or theoretical (based upon statistical mechanics, transport theory or other tools of condensed matter physics). The detail employed may be macroscopic or microscopic, depending upon the level necessary to the problem under scrutiny. In general, the constitutive relations can usually still be written D = εE and H = B/μ, but ε and μ are not, in general, simple constants, but rather functions of E, B, position and time, and tensorial in nature.
Examples are dispersion and absorption (where ε and μ are functions of frequency), nonlinearity (where ε and μ are functions of E and B), and spatial inhomogeneity or anisotropy (where ε and μ depend on position or direction). As a variation of these examples, in general materials are bianisotropic, where D and B depend on both E and H, through the additional coupling constants ξ and ζ: D = εE + ξH and B = μH + ζE. In practice, some materials properties have a negligible impact in particular circumstances, permitting neglect of small effects. For example: optical nonlinearities can be neglected for low field strengths; material dispersion is unimportant when frequency is limited to a narrow bandwidth; material absorption can be neglected for wavelengths for which a material is transparent; and metals with finite conductivity often are approximated at microwave or longer wavelengths as perfect metals with infinite conductivity (forming hard barriers with zero skin depth of field penetration). Some man-made materials such as metamaterials and photonic crystals are designed to have customized permittivity and permeability. Calculation of constitutive relations The theoretical calculation of a material's constitutive equations is a common, important, and sometimes difficult task in theoretical condensed-matter physics and materials science. In general, the constitutive equations are theoretically determined by calculating how a molecule responds to the local fields through the Lorentz force. Other forces may need to be modeled as well such as lattice vibrations in crystals or bond forces. Including all of the forces leads to changes in the molecule which are used to calculate P and M as a function of the local fields. The local fields differ from the applied fields due to the fields produced by the polarization and magnetization of nearby material; an effect which also needs to be modeled. Further, real materials are not continuous media; the local fields of real materials vary wildly on the atomic scale. The fields need to be averaged over a suitable volume to form a continuum approximation. These continuum approximations often require some type of quantum mechanical analysis such as quantum field theory as applied to condensed matter physics. See, for example, density functional theory, Green–Kubo relations and Green's function. A different set of homogenization methods (evolving from a tradition in treating materials such as conglomerates and laminates) are based upon approximation of an inhomogeneous material by a homogeneous effective medium (valid for excitations with wavelengths much larger than the scale of the inhomogeneity). The theoretical modeling of the continuum-approximation properties of many real materials often relies upon experimental measurement as well. For example, ε of an insulator at low frequencies can be measured by making it into a parallel-plate capacitor, and ε at optical-light frequencies is often measured by ellipsometry. Thermoelectric and electromagnetic properties of matter These constitutive equations are often used in crystallography, a field of solid-state physics. Photonics Refractive index The (absolute) refractive index of a medium n (dimensionless) is an inherently important property of geometric and physical optics defined as the ratio of the luminal speed in vacuum c0 to that in the medium c: n = c0/c = √(εμ/(ε0μ0)) = √(εrμr), where ε is the permittivity and εr the relative permittivity of the medium; likewise μ is the permeability and μr the relative permeability of the medium. The vacuum permittivity is ε0 and the vacuum permeability is μ0. In general, n (and εr) are complex numbers. The relative refractive index is defined as the ratio of the two refractive indices.
Absolute is for one material; relative applies to every possible pair of interfaces. Speed of light in matter As a consequence of the definition, the speed of light in matter is c = 1/√(εμ); for the special case of vacuum, ε = ε0 and μ = μ0, and c = c0 = 1/√(ε0μ0). Piezooptic effect The piezooptic effect relates the stresses in solids σ to the dielectric impermeability a, which are coupled by a fourth-rank tensor called the piezooptic coefficient Π (units Pa−1): aij = Πijkl σkl. Transport phenomena Definitions Definitive laws There are several laws which describe the transport of matter, or properties of it, in an almost identical way, among them Fick's law of diffusion, Darcy's law for flow in porous media, Ohm's law of electrical conduction, and Fourier's law of heat conduction. In every case, in words they read: Flux (density) is proportional to a gradient; the constant of proportionality is the characteristic of the material. In general the constant must be replaced by a 2nd rank tensor, to account for directional dependences of the material. See also Defining equation (physical chemistry) Governing equation Principle of material objectivity Rheology Notes References Elasticity (physics) Equations of physics Continuum mechanics Electric and magnetic fields in matter
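The photonics relations above, n = √(εrμr) and c = c0/n, are easy to evaluate numerically; the material values below are illustrative (εr ≈ 2.25 is roughly silica glass at optical frequencies):

```python
import math

c0 = 299_792_458.0          # speed of light in vacuum, m/s
eps_r, mu_r = 2.25, 1.0     # illustrative values, roughly silica glass

n = math.sqrt(eps_r * mu_r) # refractive index -> 1.5
c = c0 / n                  # phase velocity in the medium -> ~2.0e8 m/s
print(n, c)
```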
Constitutive equation
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,987
[ "Physical phenomena", "Equations of physics", "Elasticity (physics)", "Continuum mechanics", "Deformation (mechanics)", "Mathematical objects", "Classical mechanics", "Electric and magnetic fields in matter", "Equations", "Materials science", "Condensed matter physics", "Physical properties" ]
1,034,724
https://en.wikipedia.org/wiki/Theodore%20Ts%27o
Theodore Yue Tak Ts'o (born 1968) is an American software engineer mainly known for his contributions to the Linux kernel, in particular his contributions to file systems. He is the primary developer and maintainer of e2fsprogs, the userspace utilities for the ext2, ext3, and ext4 filesystems, and is a maintainer for the ext4 file system. Biography Ts'o graduated from MIT with a degree in computer science in 1990, after which he worked in MIT's Information Systems (IS) department until 1999. During this time he was project leader of the Kerberos team. In 1994, Ts'o created the /dev/random Linux device node and the corresponding kernel driver, which was Linux's (and Unix's) first kernel interface that provided high-quality cryptographic random numbers to user programs. /dev/random works without access to a hardware random number generator, allowing user programs to depend upon its existence. Separate daemons such as rngd take random numbers from such hardware and make them accessible via /dev/random. Since its creation, the /dev/random interface has been adopted by Linux, FreeBSD, macOS, and Solaris systems. After MIT IS, Ts'o went to work for VA Linux Systems for two years. In late 2001 he joined IBM, where he worked on improvements in the Linux kernel's performance and scalability. After working on a real-time kernel at IBM, Ts'o joined the Linux Foundation in late 2007 for a two-year fellowship. He initially served as Chief Platform Strategist, before becoming Chief Technology Officer in 2008. Ts'o also served as Treasurer for USENIX until 2008, and has chaired the annual Linux Kernel Developers Summit. In 2010 Ts'o moved to Google, saying he would be working on "kernel, file system, and storage stuff". Ts'o is a Debian Developer, maintaining several packages, mostly filesystem-related ones, including e2fsprogs since March 2003. He was a member of the Security Area Directorate for the Internet Engineering Task Force, and was one of the chairs for the IPsec working group. He was one of the founding board members for the Free Standards Group. In July 2023, Ts'o joined RESF's Board of Directors, which encompasses the Rocky Linux project. Awards Ts'o was awarded the Free Software Foundation's 2006 Award for the Advancement of Free Software. References Further reading 1968 births American chief technology officers American people of Chinese descent American computer programmers Free software programmers Geeknet Google employees Linux kernel programmers Linux people Living people MIT School of Engineering alumni Open source people People in information technology
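The /dev/random interface described above is exposed as an ordinary character device, so user programs can simply read bytes from it. A minimal example (Linux-specific for the device node; os.urandom is the portable route to the same kernel-provided randomness on modern systems):

```python
import os

# Read 16 bytes of kernel randomness directly from the device node.
with open("/dev/random", "rb") as dev:
    raw = dev.read(16)
print(raw.hex())

# Portable equivalent:
print(os.urandom(16).hex())
```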
Theodore Ts'o
[ "Technology" ]
555
[ "People in information technology", "Information technology" ]
3,088,560
https://en.wikipedia.org/wiki/Quick%20ratio
In finance, the quick ratio, also known as the acid-test ratio, is a liquidity ratio that measures the ability of a company to use near-cash assets (or 'quick' assets) to extinguish or retire current liabilities immediately. It is the ratio between quick assets and current liabilities. A normal liquid ratio is considered to be 1:1. A company with a quick ratio of less than 1 cannot currently fully pay back its current liabilities. The quick ratio is similar to the current ratio, but it provides a more conservative assessment of the liquidity position of a firm as it excludes inventory, which it does not consider as sufficiently liquid. Formula Quick ratio = Quick assets / Current liabilities, where quick assets can be defined as follows: Quick assets = Cash and cash equivalents + Marketable securities + Accounts receivable, or equivalently, Quick assets = Current assets − Inventory − Prepaid expenses. Although the quick ratio is a test for the financial viability of a business, it does not give a complete picture of the business's health. For example, if a business has large amounts in accounts receivable due for payment after a long period, while also having larger accounts payable due for immediate payment, the quick ratio may look healthy when the business is actually about to run out of cash. In contrast, if a business has fast payment from customers, but long terms from suppliers, it may have a low quick ratio and yet be very healthy. Generally, the acid test ratio should be 1:1 or higher for a healthy company. However, this varies widely by industry. In general, the higher the ratio, the greater the company's accounting liquidity. See also Current ratio Financial Accounting References Financial ratios
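A small sketch of the computation described above; the balance-sheet figures are invented purely for illustration:

```python
def quick_ratio(cash, marketable_securities, accounts_receivable,
                current_liabilities):
    """Quick ratio = (near-cash 'quick' assets) / current liabilities."""
    quick_assets = cash + marketable_securities + accounts_receivable
    return quick_assets / current_liabilities

# Invented example figures:
print(quick_ratio(cash=40_000, marketable_securities=10_000,
                  accounts_receivable=30_000, current_liabilities=60_000))
# -> 1.33..., above the 1:1 benchmark
```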
Quick ratio
[ "Mathematics" ]
315
[ "Financial ratios", "Quantity", "Metrics" ]
3,088,675
https://en.wikipedia.org/wiki/Profinet
Profinet (usually styled as PROFINET, as a portmanteau for Process Field Network) is an industry technical standard for data communication over Industrial Ethernet, designed for collecting data from, and controlling equipment in industrial systems, with a particular strength in delivering data under tight time constraints. The standard is maintained and supported by Profibus and Profinet International, an umbrella organization headquartered in Karlsruhe, Germany. Functionalities Overview Profinet implements the interfacing to peripherals. It defines the communication with field connected peripheral devices. Its basis is a cascading real-time concept. Profinet defines the entire data exchange between controllers (called "IO-Controllers") and the devices (called "IO-Devices"), as well as parameter setting and diagnosis. IO-Controllers are typically a PLC, DCS, or IPC; whereas IO-Devices can be varied: I/O blocks, drives, sensors, or actuators. The Profinet protocol is designed for the fast data exchange between Ethernet-based field devices and follows the provider-consumer model. Field devices in a subordinate Profibus line can be integrated in the Profinet system seamlessly via an IO-Proxy (representative of a subordinate bus system). Conformance Classes (CC) Applications with Profinet can be divided according to the international standard IEC 61784-2 into four conformance classes: In Conformance Class A (CC-A), only the devices are certified. A manufacturer certificate is sufficient for the network infrastructure. This is why structured cabling or a wireless local area network for mobile subscribers can also be used. Typical applications can be found in infrastructure (e.g. motorway or railway tunnels) or in building automation. Conformance Class B (CC-B) stipulates that the network infrastructure also includes certified products and is structured according to the guidelines of Profinet. Shielded cables increase robustness and switches with management functions facilitate network diagnostics and allow the network topology to be captured as desired for controlling a production line or machine. Process automation requires increased availability, which can be achieved through media and system redundancy. For a device to adhere to Conformance Class B, it must communicate successfully via Profinet, have two ports (integrated switch), and support SNMP. With Conformance Class C (CC-C), positioning systems can be implemented with additional bandwidth reservation and application synchronization. Conformance Class C devices additionally communicate via Profinet IRT. For Conformance Class D (CC-D), Profinet is used via Time-Sensitive Networking (TSN). The same functions can be achieved as with CC-C. In contrast to CC-A and CC-B, the complete communication (cyclic and acyclic) between controller and device takes place on Ethernet layer 2. The Remote Service Interface (RSI) was introduced for this purpose. Device types A Profinet system consists of the following devices: The IO-Controller, which controls the automation task. The IO-Device, which is a field device, monitored and controlled by an IO-Controller. An IO-Device may consist of several modules and sub-modules. The IO-Supervisor is software typically based on a PC for setting parameters and diagnosing individual IO-Devices. System structure A minimal Profinet IO-System consists of at least one IO-Controller that controls one or more IO-Devices. 
In addition, one or more IO-Supervisors can optionally be switched on temporarily for the engineering of the IO-Devices if required. If two IO-Systems are in the same IP network, the IO-Controllers can also share an input signal as shared input, in which they have read access to the same submodule in an IO-Device. This simplifies the combination of a PLC with a separate safety controller or motion control. Likewise, an entire IO-Device can be shared as a shared device, in which individual submodules of an IO-Device are assigned to different IO-Controllers. Each automation device with an Ethernet interface can simultaneously fulfill the functionality of an IO-Controller and an IO-Device. If a controller for a partner controller acts as an IO-Device and simultaneously controls its periphery as an IO-Controller, the tasks between controllers can be coordinated without additional devices. Relations An Application Relation (AR) is established between an IO-Controller and an IO-Device. These ARs are used to define Communication Relations (CR) with different characteristics for the transfer of parameters, cyclic exchange of data and handling of alarms. Engineering The project engineering of an IO system is nearly identical to the Profibus in terms of "look and feel": The properties of an IO-Device are described by the device manufacturer in a GSD file (General Station Description). The language used for this is GSDML (GSD Markup Language) - an XML-based language. The GSD file serves an engineering environment as a basis for planning the configuration of a Profinet IO system. All Profinet field devices determine their neighbors. This means that field devices can be exchanged in the event of a fault without additional tools and prior knowledge. By reading out this information, the plant topology can be displayed graphically for better clarity. The engineering can be supported by tools such as PROFINET Commander or PRONETA. Dependability Profinet is also increasingly being used in critical applications. There is always a risk that the required functions cannot be fulfilled. This risk can be reduced by specific measures as identified by a dependability analyses. The following objectives are in the foreground: Safety: Ensuring functional safety. The system should go into a safe state in the event of a fault. Availability: Increasing the availability. In the event of a fault, the system should still be able to perform the minimum required function. Security: Information security is to ensure the integrity of the system. These goals can interfere with or complement each other. Functional safety: Profisafe Profisafe defines how safety-related devices (emergency stop buttons, light grids, overfill prevention devices, ...) communicate with safety controllers via Profinet in such a safe way that they can be used in safety-related automation tasks up to Safety Integrity Level 3 (SIL) according to IEC 61508, Performance Level "e" (PL) according to ISO 13849, or Category 4 according to EN 954-1. Profisafe implements safe communication via a profile, i.e. via a special format of the user data and a special protocol. It is designed as a separate layer on top of the fieldbus application layer to reduce the probability of data transmission errors. The Profisafe messages use standard fieldbus cables and messages. They do not depend on error detection mechanisms of underlying transmission channels, and thus supports securing of whole communication paths, including backplanes inside controllers or remote I/O. 
The Profisafe protocol uses error and failure detection mechanisms such as: Consecutive numbering Timeout monitoring Source/destination authentication Cyclic redundancy checking (CRC) and is defined in the IEC 61784-3-3 standard. Increased availability High availability is one of the most important requirements in industrial automation, both in factory and process automation. The availability of an automation system can be increased by adding redundancy for critical elements. A distinction can be made between system and media redundancy. System redundancy System redundancy can also be implemented with Profinet to increase availability. In this case, two IO-Controllers that control the same IO-Device are configured. The active IO-Controller marks its output data as primary. Output data that is not marked is ignored by an IO-Device in a redundant IO-System. In the event of an error, the second IO-Controller can therefore take control of all IO-Devices without interruption by marking its output data as primary. How the two IO-Controllers synchronize their tasks is not defined in Profinet and is implemented differently by the various manufacturers offering redundant control systems. Media redundancy Profinet offers two media redundancy solutions. The Media Redundancy Protocol (MRP) allows the creation of a protocol-independent ring topology with a switching time of less than 50 ms. This is often sufficient for standard real-time communication with Profinet. To switch over the redundancy in the event of an error without time delay, the "Media Redundancy for Planned Duplication" (MRPD) must be used as a seamless media redundancy concept. In the MRPD, the cyclic real-time data is transmitted in both directions in the ring-shaped topology. A time stamp in the data packet allows the receiver to remove the redundant duplicates. Security The IT security concept for Profinet assumes a defense-in-depth approach. In this approach, the production plant is protected against attacks, particularly from outside, by a multi-level perimeter, including firewalls. In addition, further protection is possible within the plant by dividing it into zones using firewalls. In addition, a security component test ensures that the Profinet components are resistant to overload to a defined extent. This concept is supported by organizational measures in the production plant within the framework of a security management system according to ISO 27001. Application Profiles For a smooth interaction of the devices involved in an automation solution, they must correspond in their basic functions and services. Standardization is achieved by "profiles" with binding specifications for functions and services. The possible functions of communication with Profinet are restricted and additional specifications regarding the function of the field device are prescribed. These can be cross-device class properties such as a safety-relevant behavior (Common Application Profiles) or device class specific properties (Specific Application Profiles). A distinction is made between Device profiles for e.g. robots, drives (PROFIdrive), process devices, encoders, pumps Industry Profiles for e.g. laboratory technology or rail vehicles Integration Profiles for the integration of subsystems such as IO-Link systems Drives PROFIdrive is the modular device profile for drive devices. 
It was jointly developed by manufacturers and users in the 1990s and since then, in conjunction with Profibus and, from version 4.0, also with Profinet, it has covered the entire range from the simplest to the most demanding drive solutions. Energy Another profile is PROFIenergy which includes services for real time monitoring of energy demand. This was requested in 2009 by the AIDA group of German automotive Manufacturers (Audi, BMW, Mercedes-Benz, Porsche and Volkswagen ) who wished to have a standardised way of actively managing energy usage in their plants. High energy devices and sub-systems such as robots, lasers and even paint lines are the target for this profile, which will help reduce a plant's energy costs by intelligently switching the devices into 'sleep' modes to take account of production breaks, both foreseen (e.g. weekends and shut-downs) and unforeseen (e.g. breakdowns). Process automation Modern process devices have their own intelligence and can take over part of the information processing or the overall functionality in automation systems. For integration into a Profinet system, a two-wire Ethernet is required in addition to increased availability. Process devices The profile PA Devices defines for different classes of process devices all functions and parameters typically used in process devices for the signal flow from the sensor signal from the process to the pre-processed process value, which is read out to the control system together with a measured value status. The PA Devices profile contains device data sheets for Pressure and differential pressure Level, temperature and flow rate Analog and digital inputs and outputs Valves and actuators Analysis equipment Advanced Physical Layer Ethernet Advanced Physical Layer (Ethernet-APL) describes a physical layer for the Ethernet communication technology which is especially developed for the requirements of the process industries. The development of Ethernet-APL was determined by the need for communication at high speeds and over long distances, the supply of power and communications signals via common single, twisted-pair (2-wire) cable as well as protective measures for the safe use within explosion hazardous areas. Ethernet APL opens the possibility for Profinet to be incorporated into process instruments. Technology Profinet protocols Profinet uses the following protocols in the different layers of the OSI model: Layers 1-2: Mainly full-duplex with 100 MBit/s electrical (100BASE-TX) or optical (100BASE-FX) according to IEEE 802.3 are recommended as device connections. Autocrossover is mandatory for all connections so that the use of crossover cables can be avoided. From IEEE 802.1Q the VLAN with priority tagging is used. All real-time data are thus given the highest possible priority 6 and are therefore forwarded by a switch with a minimum delay. The Profinet protocol can be recorded and displayed with any Ethernet analysis tool. Wireshark is capable of decoding Profinet telegrams. The Link Layer Discovery Protocol (LLDP) has been extended with additional parameters, so that in addition to the detection of neighbors, the propagation time of the signals on the connection lines can be communicated. Layers 3-6: Either the Remote Service Interface (RSI) protocol or the Remote Procedure Call (RPC) protocol is used for the connection setup and the acyclic services. The RPC protocol is used via User Datagram Protocol (UDP) and Internet Protocol (IP) with the use of IP addresses. 
The Address Resolution Protocol (ARP) is extended for this purpose with the detection of duplicate IP addresses. The Discovery and basic Configuration Protocol (DCP) is mandatory for the assignment of IP addresses. Optionally, the Dynamic Host Configuration Protocol (DHCP) can also be used for this purpose. No IP addresses are used with the RSI protocol. Thus, IP can be used in the operating system of the field device for other protocols such as OPC Unified Architecture (OPC UA). Layer 7: Various protocols are defined to access the services of the Fieldbus Application Layer (FAL). The RT (Real-Time) protocol for class A & B applications with cycle times in the range of 1 - 10 ms. The IRT (Isochronous Real-Time) protocol for application class C allows cycle times below 1 ms for drive technology applications. This can also be achieved with the same services via Time-Sensitive Networking (TSN). Technology of Conformance Classes The functionalities of Profinet IO are realized with different technologies and protocols: Technology of Class A (CC-A) The basic function of the Profinet is the cyclic data exchange between the IO-Controller as producer and several IO-Devices as consumers of the output data and the IO-Devices as producers and the IO-Controller as consumer of the input data. Each communication relationship IO data CR between the IO-Controller and an IO-Device defines the number of data and the cycle times. All Profinet IO-Devices must support device diagnostics and the safe transmission of alarms via the communication relation for alarms Alarm CR. In addition, device parameters can be read and written with each Profinet device via the acyclic communication relation Record Data CR. The data set for the unique identification of an IO-Device, the Identification and Maintenance Data Set 0 (I&M 0), must be installed by all Profinet IO-Devices. Optionally, further information can be stored in a standardized format as I&M 1-4. For real-time data (cyclic data and alarms), the Profinet Real-Time (RT) telegrams are transmitted directly via Ethernet. UDP/IP is used for the transmission of acyclic data. Management of the Application Relations (AR) The Application Relation (AR) is established between an IO-Controller and every IO-Device to be controlled. Inside the ARs are defined the required CRs. The Profinet AR life-cycle consists of address resolution, connection establishment, parameterization, process IO data exchange / alarm handling, and termination. Address resolution: A Profinet IO-Device is identified on the Profinet network by its station name. Connection establishment, parameterization and alarm handling are implemented with User Datagram Protocol (UDP), which requires that the device also be assigned an IP address. After identifying the device by its station name, the IO-Controller assigns the pre-configured IP address to the device. Connection establishment: Connection establishment starts with the IO-Controller sending a connect request to the IO-Device. The connect request establishes an Application Relationship (AR) containing a number of Communication Relationships (CRs) between the IO-Controller and IO-Device. In addition to the AR and CRs, the connect request specifies the modular configuration of the IO-Device, the layout of the process IO data frames, the cyclic rate of IO data exchange and the watchdog. Acknowledgement of the connect request by the IO-Device allows parameterization to follow. 
From this point forward, both the IO-Device and IO-Controller start exchanging cyclic process I/O data frames. The process I/O data frames don't contain valid data at this point, but they start serving as keep-alive to keep the watchdog from expiring. Parameterization: The IO-Controller writes parameterization data to each IO-Device sub-module in accordance with the General Station Description Mark-up Language (GSDML) file. Once all sub-modules have been configured, the IO-Controller signals that parameterization has ended. The IO-Device responds by signaling application readiness, which allows process IO data exchange and alarm handling to ensue. Process IO data exchange / alarm handling: The IO-Device followed by the IO-Controller start to cyclically refresh valid process I/O data. The IO-Controller processes the inputs and controls the outputs of the IO-Device. Alarm notifications are exchanged acyclically between the IO-Controller and IO-Device as events and faults occur. Termination: The connection between the IO-Device and IO-Controller terminates when the watchdog expires. Watchdog expiry is the result of a failure to refresh cyclic process I/O data by the IO-Controller or the IO-Device. Unless the connection was intentionally terminated at the IO-Controller, the IO-Controller will try to restart the Profinet Application Relation. Technology of Class B (CC-B) In addition to the basic Class A functions, Class B devices must support additional functionalities. These functionalities primarily support the commissioning, operation and maintenance of a Profinet IO system and are intended to increase the availability of the Profinet IO system. Support of network diagnostics with the Simple Network Management Protocol (SNMP) is mandatory. Likewise, the Link Layer Discovery Protocol (LLDP) for neighborhood detection including the extensions for Profinet must be supported by all Class B devices. This also includes the collection and provision of Ethernet port-related statistics for network maintenance. With these mechanisms, the topology of a Profinet IO network can be read out at any time and the status of the individual connections can be monitored. If the network topology is known, automatic addressing of the nodes can be activated by their position in the topology. This considerably simplifies device replacement during maintenance, since no more settings need to be made. High availability of the IO system is particularly important for applications in process automation and process engineering. For this reason, special procedures have been defined for Class B devices with the existing relationships and protocols. This allows system redundancy with two IO-Controllers accessing the same IO-Devices simultaneously. In addition, there is a prescribed procedure Dynamic Reconfiguration (DR), how the configuration of an IO-Device can be changed with the help of these redundant relationships without losing control over the IO-Device. Technology of Class C (CC-C) For the functionalities of Conformance Class C (CC-C) the Isochronous Real-Time (IRT) protocol is mainly used. With the bandwidth reservation, a part of the available transmission bandwidth of 100 MBit/s is reserved exclusively for real-time tasks. A procedure similar to a time multiplexing method is used. The bandwidth is divided into fixed cycle times, which in turn are divided into phases. 
The red phase is reserved exclusively for class C real-time data, in the orange phase the time-critical messages are transmitted and in the green phase the other Ethernet messages are transparently passed through. To ensure that maximum Ethernet telegrams can still be passed through transparently, the green phase must be at least 125 μs long. Thus, cycle times under 250 μs are not possible in combination with unchanged Ethernet. In order to achieve shorter cycle times down to 31.25 μs, the Ethernet telegrams of the green phase are optionally broken down into fragments. These short fragments are now transmitted via the green phase. This fragmentation mechanism is transparent to the other participants on the Ethernet and therefore not recognizable. In order to implement these bus cycles for bandwidth reservation, precise clock synchronization of all participating devices including the switches is required with a maximum deviation of 1 μs. This clock synchronization is implemented with the Precision Time Protocol (PTP) according to the IEEE 1588-2008 (1588 V2) standard. All devices involved in the bandwidth reservation must therefore be in the same time domain. For position control applications for several axes or for positioning processes according to the PROFIdrive drive profile of application classes 4 - 6, not only must communication be timely, but the actions of the various drives on a Profinet must also be coordinated and synchronized. The clock synchronization of the application program to the bus cycle allows control functions to be implemented that are executed synchronously on distributed devices. If several Profinet devices are connected in a line (daisy chain), it is possible to further optimise the cyclic data exchange with Dynamic Frame Packing (DFP). For this purpose, the controller puts all output data for all devices into a single IRT frame. At the passing IRT frame, each Device takes out the data intended for the device, i.e. the IRT frame becomes shorter and shorter. For the data from the different devices to the controller, the IRT frame is dynamically assembled. The great efficiency of the DFP lies in the fact that the IRT frame is always only as extensive as necessary and that the data from the controller to the devices can be transmitted in full duplex simultaneously with the data from the devices to the controller. Technology of Class D (CC-D) Class D offers the same services to the user as Class C, with the difference that these services are provided using the mechanisms of Time-Sensitive Networking (TSN) defined by IEEE. The Remote Service Interface (RSI) is used as a replacement for the Internet protocol suite. Thus, this application class D is implemented independently of IP addresses. The protocol stack will be smaller and independent of future Internet versions (IPv6). The TSN is not a consistent, self-contained protocol definition, but a collection of different protocols with different characteristics that can be combined almost arbitrarily for each application. For use in industrial automation, a subset is compiled in IEC/IEEE standard 60802 "Joint Profile TSN for Industrial Automation". A subset is used in the Profinet specification version 2.4 for implementing class D. 
In this specification, a distinction is made between two applications: isochronous, cyclic data exchange with short, bounded latency (Isochronous Cyclic Real Time) for applications in motion control and distributed control technology, and cyclic data exchange with bounded latency (Cyclic Real Time) for general automation tasks. For isochronous data exchange, the clocks of the participants must be synchronized. For this purpose, the specifications of the Precision Time Protocol according to IEC 61588 are adapted for time synchronization with TSN. The telegrams are arranged in queues according to the priorities carried in the VLAN tag. The Time-Aware Shaper (TAS) then specifies a clock pulse with which the individual queues are processed in a switch. This leads to a time-slot procedure in which the isochronous, cyclic data is transmitted with the highest priority and the cyclic data with the second priority, ahead of all acyclic data. This reduces the latency, and also the jitter, for the cyclic data. If a data telegram with low priority takes too long, it can be interrupted by a cyclic data telegram with high priority and then resumed afterwards. This procedure is called Frame Preemption and is mandatory for CC-D. Implementation of Profinet interface For the realization of a Profinet interface as a controller or device, there are no hardware requirements for Profinet (CC-A and CC-B) that cannot be met by a common Ethernet interface (100BASE-TX or 100BASE-FX). To enable a simpler line topology, the integration of a 2-port switch in a device is recommended. For the realization of Class C (CC-C) devices, the hardware must be extended with time synchronization via the Precision Time Protocol (PTP) and with the functionalities of bandwidth reservation. For Class D (CC-D) devices, the hardware must support the required functionalities of Time-Sensitive Networking (TSN) according to the IEEE standards. The method of implementation depends on the design and performance of the device and the expected quantities. The alternatives are: development in-house or with a service provider; use of ready-made building blocks or an individual design; and execution as a fixed-design ASIC, in reconfigurable FPGA technology, as a plug-in module, or as a software component. History At the general meeting of the Profibus user organisation in 2000, the first concrete discussions for a successor to Profibus based on Ethernet took place. Just one year later, the first specification of Component Based Automation (CBA) was published and presented at the Hanover Fair. In 2002, Profinet CBA became part of the international standard IEC 61158 / IEC 61784-1. A Profinet CBA system consists of different automation components. One component comprises all mechanical, electrical and information-technology variables. The component may have been created with the usual programming tools. To describe a component, a Profinet Component Description (PCD) file is created in XML. A planning tool loads these descriptions and allows the logical connections between the individual components to be created to implement a plant. The basic idea behind Profinet CBA was that in many cases an entire automation system can be divided into autonomously operating - and thus manageable - subsystems. The structure and functionality may well be found in several plants in identical or slightly modified form. Such so-called Profinet components are normally controlled by a manageable number of input signals. 
Within the component, a control program written by the user executes the required functionality and sends the corresponding output signals to another controller. The communication of a component-based system is planned rather than programmed. Individual installations showed how these concepts could be successfully implemented in practice. However, Profinet CBA did not find the expected acceptance in the market and has no longer been listed in the IEC 61784-1 standard since its 4th edition of 2014. In 2003, the first specification of Profinet IO (IO = Input Output) was published. The application interface of Profibus DP (DP = Decentralized Periphery), which was successful on the market, was adopted and supplemented with current protocols from the Internet. In the following year, the extension for isochronous transmission followed, making Profinet IO suitable for motion control applications. Profisafe was adapted so that it could also be used via Profinet. With the clear commitment of AIDA to Profinet in 2004, market acceptance was assured. In 2006, Profinet IO became part of the international standard IEC 61158 / IEC 61784-2. By 2007, according to neutral counts, 1 million Profinet devices had been installed; in the following year this number doubled to 2 million. By 2019, a cumulative total of 26 million devices sold by the various manufacturers was reported. In 2019, the specification for Profinet was completed with Time-Sensitive Networking (TSN), thus introducing the CC-D conformance class. Further reading Notes References External links PROFIBUS & PROFINET International (PI) PROFINET Technology Page PROFIBUS International PROFIsafe web portal PROFINET University wireshark PROFINET Wiki PROFINET Community Stack p-net - An open-source PROFINET device stack Industrial Ethernet
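To make the CC-C bandwidth-reservation arithmetic above concrete, here is a minimal sketch in Python; the function name and the example phase budgets are illustrative assumptions, not part of any Profinet specification or stack.

```python
# Minimal sketch of the CC-C cycle budget described above: without
# fragmentation, the green phase must stay >= 125 us so a maximum-length
# Ethernet frame can still pass through transparently.
GREEN_MIN_US = 125.0  # minimum green phase for full-size Ethernet frames

def feasible_cycle_us(red_us, orange_us, fragmentation=False):
    """Smallest cycle time that accommodates the reserved phases.

    With fragmentation, green-phase frames are split into short fragments,
    which is how cycle times down toward 31.25 us become possible; the
    8 us fragment slot below is an assumed, illustrative value.
    """
    green_us = 8.0 if fragmentation else GREEN_MIN_US
    return red_us + orange_us + green_us

print(feasible_cycle_us(100.0, 25.0))                    # 250.0 -> the 250 us floor
print(feasible_cycle_us(15.0, 8.0, fragmentation=True))  # 31.0 -> sub-31.25 us range
```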
Profinet
[ "Engineering" ]
5,961
[ "Industrial Ethernet" ]
3,088,777
https://en.wikipedia.org/wiki/Ministry%20of%20Climate%2C%20Energy%20and%20Utilities%20%28Denmark%29
The Danish Ministry of Climate, Energy and Utilities () is a governmental agency in Denmark. It is responsible for national climate policy and international cooperation on climate change, as well as energy issues, meteorology and national geological surveys in Denmark and Greenland. History The predecessor of the Ministry of Climate and Energy, the Ministry of Energy (), was created in 1979 from the energy department of the Ministry of Trade. In 1994, it was merged with the Ministry of the Environment, and in 2005 it was detached from that ministry to be merged into the Ministry of Transport and Energy. On 23 November 2007, energy issues were de-merged from the Ministry of Transport, climate issues were de-merged from the Ministry of Environment, and the Ministry of Climate and Energy was created. List of ministers Agencies A number of agencies belong to the ministry: Danmarks Meteorologiske Institut (DMI) Forsyningstilsynet (the Danish utility regulator) Energinet Klimarådet (climate council) Styrelsen for Dataforsyning og Effektivisering De Nationale Geologiske Undersøgelser for Danmark og Grønland (GEUS) Energistyrelsen (Danish Energy Agency) Geodatastyrelsen The Danish Electricity Saving Trust (Elsparefonden) is an independent trust under the auspices of the Danish Ministry of Climate and Energy. The Trust works to promote energy savings and a more efficient use of electricity. See also Electricity sector in Denmark References External links Archived official website in English. The website of the current Danish Energy Agency is at http://ens.dk/en Ministry of Climate Denmark Climate Ministries established in 1979 Geology of Denmark Climate change ministries Climate
Ministry of Climate, Energy and Utilities (Denmark)
[ "Engineering" ]
352
[ "Energy organizations", "Energy ministries" ]
3,088,802
https://en.wikipedia.org/wiki/Oil%20Shockwave
The Oil Shockwave event was a policy wargaming scenario created by the joint effort of several energy policy think tanks, the National Commission on Energy Policy and Securing America's Future Energy. It outlined a series of hypothetical international events taking place in December 2005, all related to world supply and demand of petroleum. Participants in the scenario role-played Presidential Cabinet officials, who were asked to discuss and respond to the events. The hypothetical events included civil unrest in OPEC country Nigeria, and coordinated terrorist attacks on ports in Saudi Arabia and Alaska. In the original simulation, the participants had all previously held jobs closely related to their roles in the exercise. Jason Grumet, from the National Commission on Energy Policy, said that the message of the simulation was that "very modest disruptions in oil supply, whether they're here at home or abroad can have truly devastating impacts on our nation's economy and our overall security." Details of the scenario The original event was performed June 23, 2005, and was a simulation of December 2005, six months in the future. The first scenario involved civil unrest in Nigeria, a member of the Organization of Petroleum Exporting Countries, resulting in oil companies and the US government evacuating their personnel from the country. In the simulation, this led to a decrease in oil supply and price spikes, causing a variety of negative effects on the United States economy. More events followed as the scenario progressed, including a very cold winter in the Northern hemisphere, terrorist attacks on Saudi Arabian and Alaskan oil ports, and Al-Qaeda cells hijacking oil tankers and crashing them into the docking facilities at the ports (which might effectively shut down such a port for weeks, if not months). The scenarios were set up with pre-produced scripted news clips. Participants were also given briefing memos with background information related to their specific cabinet positions. The participants discussed and prepared policy recommendations for an unseen Chief Executive after each part of the scenario. Original participants The original event was a one-time exercise and used participants who had held positions identical or closely related to their positions in the simulation. Participants included former administrator of the Environmental Protection Agency Carol Browner, former Director of Central Intelligence Robert Gates, former Marine Corps Commandant and member of the Joint Chiefs of Staff General P.X. Kelley USMC (Ret.), and former National Economic Advisor to the President, Gene Sperling. References External links Oil Shockwave College Curriculum Wargames Energy policy
Oil Shockwave
[ "Environmental_science" ]
499
[ "Environmental social science", "Energy policy" ]
3,088,803
https://en.wikipedia.org/wiki/Pickling%20%28metal%29
Pickling is a metal surface treatment used to remove impurities, such as stains, inorganic contaminants, and rust or scale from ferrous metals, copper, precious metals and aluminum alloys. A solution called pickle liquor, which usually contains acid, is used to remove the surface impurities. It is commonly used to descale or clean steel in various steelmaking processes. Process Metal surfaces can contain impurities that may affect usage of the product or further processing like plating with metal or painting. Various chemical solutions are usually used to clean these impurities. Strong acids, such as hydrochloric acid and sulfuric acid, are common, but different applications use various other acids. Alkaline solutions can also be used for cleaning metal surfaces. Solutions usually also contain additives such as wetting agents and corrosion inhibitors. Pickling is sometimes called acid cleaning if descaling is not needed. Many hot working processes and other processes that occur at high temperatures leave a discoloring oxide layer or scale on the surface. To remove the scale, the workpiece is dipped into a vat of pickle liquor. Prior to cold rolling, hot-rolled steel is normally passed through a pickling line to remove the scale from the surface. The primary acid used in steelmaking is hydrochloric acid, although sulfuric acid was previously more common. Hydrochloric acid is more expensive than sulfuric acid, but it pickles much faster while minimizing base metal loss. This speed is a requirement for integration in automatic steel mills that run production at speeds as high as 800 ft/min (≈243 metres/min). Carbon steels, with an alloy content less than or equal to 6%, are often pickled in hydrochloric or sulfuric acid. Steels with an alloy content greater than 6% must be pickled in two steps, and other acids are used, such as phosphoric, nitric and hydrofluoric acid. Rust- and acid-resistant chromium-nickel steels are traditionally pickled in a bath of hydrofluoric and nitric acid. Most copper alloys are pickled in dilute sulfuric acid, but brass is pickled in concentrated sulfuric and nitric acid mixed with sodium chloride and soot. In jewelry making, pickling is used to remove the copper oxide layer that results from heating copper and sterling silver during soldering and annealing. A diluted sulfuric acid pickling bath is traditionally used, but may be replaced with citric acid. Sheet steel that undergoes acid pickling will oxidize (rust) when exposed to atmospheric conditions of moderately high humidity. For this reason, a thin film of oil or similar waterproof coating is applied to create a barrier to moisture in the air. This oil film must later be removed for many fabrication, plating or painting processes. Disadvantages Acid cleaning has limitations in that it is difficult to handle because of its corrosiveness, and it is not applicable to all steels. Hydrogen embrittlement becomes a problem for some alloys and high-carbon steels. The hydrogen from the acid reacts with the surface and makes it brittle, causing cracks. Because of its high reactivity with treatable steels, acid concentrations and solution temperatures must be kept under control to ensure desired pickling rates. Waste products Pickling sludge is the waste product from pickling, and includes acidic rinse waters, iron chlorides, and metallic salts and waste acid. Spent pickle liquor is considered a hazardous waste by the EPA. 
Pickle sludge from steel processes is usually neutralized with lime and disposed of in a landfill, since the EPA no longer deems it a hazardous waste after neutralization. The lime neutralization process raises the pH of the spent acid. The waste material is subject to a waste determination to ensure no characteristic or listed waste is present. Since the 1960s, hydrochloric pickling sludge has often been treated in a hydrochloric acid regeneration system, which recovers some of the hydrochloric acid and ferric oxide. The rest must still be neutralized and disposed of in landfills or managed as a hazardous waste based on the waste profile analysis. The by-products of nitric acid pickling are marketable to other industries, such as fertilizer processors. Alternatives Smooth clean surface (SCS) and eco pickled surface (EPS) are more recent alternatives. In the SCS process, surface oxidation is removed using an engineered abrasive, and the process leaves the surface resistant to subsequent oxidation without the need for an oil film or other protective coating. EPS is a more direct replacement for acid pickling. Acid pickling relies on chemical reactions, while EPS uses mechanical means. The EPS process is considered "environmentally friendly" compared with acid pickling, and it imparts to carbon steel a high degree of rust resistance, eliminating the need to apply the oil coating that serves as a barrier to oxidation for acid-pickled carbon steel. Alternative methods also include mechanical cleaning, such as abrasive blasting, grinding, wire brushing, hydrocleaning and laser cleaning. These methods generally do not provide as clean a surface as pickling does. See also Cosmoline Hot-dip galvanization Jewelling Phosphate conversion coating Quench polish quench References Metalworking Steelmaking
Pickling (metal)
[ "Chemistry" ]
1,090
[ "Metallurgical processes", "Steelmaking" ]
3,088,890
https://en.wikipedia.org/wiki/Structural%20analog
A structural analog, also known as a chemical analog or simply an analog, is a compound having a structure similar to that of another compound, but differing from it in respect to a certain component. It can differ in one or more atoms, functional groups, or substructures, which are replaced with other atoms, groups, or substructures. A structural analog can be imagined to be formed, at least theoretically, from the other compound. Structural analogs are often isoelectronic. Despite a high chemical similarity, structural analogs are not necessarily functional analogs and can have very different physical, chemical, biochemical, or pharmacological properties. In drug discovery, either a large series of structural analogs of an initial lead compound is created and tested as part of a structure–activity relationship study, or a database is screened for structural analogs of a lead compound. Chemical analogs of illegal drugs are developed and sold in order to circumvent laws. Such substances are often called designer drugs. Because of this, the United States passed the Federal Analogue Act in 1986. This bill banned the production of any chemical analog of a Schedule I or Schedule II substance that has substantially similar pharmacological effects, when intended for human consumption. Examples Neurotransmitter analog A neurotransmitter analog is a structural analog of a neurotransmitter, typically a drug. Some examples include: Catecholamine analog Serotonin analog GABA analog See also Derivative (chemistry) Federal Analogue Act, a United States bill banning chemical analogs of illegal drugs Functional analog, compounds with similar physical, chemical, biochemical, or pharmacological properties Homolog, a compound of a series differing only by repeated units Transition state analog References External links Analoging in ChEMBL, DrugBank and the Connectivity Map – a free web-service for finding structural analogs in ChEMBL, DrugBank, and the Connectivity Map Chemical nomenclature
Structural analog
[ "Chemistry" ]
406
[ "nan" ]
3,088,959
https://en.wikipedia.org/wiki/List%20of%20Andromeda%27s%20satellite%20galaxies
The Andromeda Galaxy (M31) has satellite galaxies just like the Milky Way. Orbiting M31 are at least 13 dwarf galaxies: the brightest and largest is M110, which can be seen with a basic telescope. The second-brightest and closest one to M31 is M32. The other galaxies are fainter, and were mostly discovered starting from the 1970s. On January 11, 2006, it was announced that the Andromeda Galaxy's faint companion galaxies lie on or close to a single plane running through the Andromeda Galaxy's center. This unexpected distribution is not obviously understood in the context of current models for galaxy formation. The plane of satellite galaxies points toward a nearby group of galaxies (the M81 Group), possibly tracing the large-scale distribution of dark matter. It is unknown whether the Triangulum Galaxy is a satellite of Andromeda. Table of known satellites Andromeda Galaxy's satellites are listed here by discovery (orbital distance is not known). Andromeda IV is not included in the list, as it was discovered in 2014 to be roughly 10 times farther from the Milky Way than Andromeda, and is therefore a completely unrelated galaxy. * It is uncertain whether it is a companion galaxy of the Andromeda Galaxy. ** RA/DEC values marked in italics are rough estimates. *** Martin et al. (2009) gave aliases to several satellite galaxies of the Andromeda Galaxy that are located in Pisces. However, the name Pisces II was later used for a different galaxy that is a satellite of the Milky Way, so it is not used here. See also Satellite galaxies of the Milky Way List of nearest galaxies Local Group References External links Andromeda's thin sheet of satellites – Dark matter filaments or galactic cannibalism? Strange Setup: Andromeda's Satellite Galaxies All Lined Up Local Group Andromeda (constellation) Andromeda Galaxy
List of Andromeda's satellite galaxies
[ "Astronomy" ]
400
[ "Andromeda (constellation)", "Constellations" ]
3,089,253
https://en.wikipedia.org/wiki/HP%20Pavilion
HP Pavilion is a line of consumer-oriented personal computers originally produced by Hewlett-Packard and later by its successor, HP Inc. Introduced in 1995, the name has been used by HP for both desktops and laptops for home and home office use. After acquiring Compaq in 2002, HP sold both HP- and Compaq-branded machines under the Pavilion and Presario names respectively from 2002 to 2013. History In August 1995, HP released the first computer in the Pavilion line, the HP Pavilion 5030, an IBM PC–compatible desktop computer designed for multimedia use. While it was not the first multimedia PC the company made, it was the first computer made by HP that was designed specifically for the home market. The first multimedia PCs made by the company prior to the Pavilion 5030 were the HP Multimedia PC 6100, 6140S, and 6170S. As an entry-level model, the Pavilion 5030 featured a 75 MHz Intel Pentium processor, 8 MB RAM, an 850 MB hard drive, a quad-speed CD-ROM drive, and Altec Lansing speakers, and included some software for online service access. It came shipped with Windows 95 preinstalled, coinciding with the launch of Microsoft's then-new operating system. Prior to the introduction of the Pavilion line in 1995, HP was known for its business-oriented models, such as those from the HP Vectra series, as well as the original (pre-2024) OmniBook line of business notebooks. HP also produced a low-cost, high-speed infrared transceiver that allowed wireless data exchange in a range of portable computing applications, including telephones, computers, printers, cash registers, automatic teller machines, and digital cameras. Around the same time the Pavilion was introduced, David Packard published The HP Way, a book that chronicled the rise of Hewlett-Packard and gave consumers insight into its business practices, culture, and management style. In May 2002, HP acquired Compaq, a former information technology company known for its Presario line of computers, among other products. After acquiring the company, HP took over Compaq's existing naming-rights agreement and so sold both HP- and Compaq-branded machines until 2013. In May 2024, HP announced that the Pavilion name, along with multiple others like Envy and Spectre, would be gradually retired as part of a streamlining of brands that year, with new consumer computers (except for Omen) being released under the Omni branding, with OmniBook, OmniStudio and OmniDesk brandings. This rebranding also marked the return of the OmniBook brand to HP, after it was originally discontinued in 2002 as part of the merger with Compaq that same year. The new Omni brand would consist of computers utilizing next-generation AI technologies. Desktops HP offers about 30 customizable desktops; of these, 5 are standard HP Pavilion, 4 are Slimline, 6 are High Performance Edition (HPE), 5 are "Phoenix" HPE Gaming editions*, 5 are Touchsmart, and 5 are All-In-One. Introduced in the early 2020s, the HP Pavilion Gaming brand is a line of budget gaming computers offered in both desktop and laptop form factors. It succeeds the previous "Phoenix" HPE Gaming edition brand. 
Latest desktop models (Note: List is current as of November 2012) HP Pavilion: p7m, p7z, p7t, p7xt, p7qe HP Pavilion Slimline: s5m, s5t, s5z, s5xt HP Pavilion HPE (High Performance Edition): h8m, h8t, h8z, h8xt, h8qe, h8se HP Pavilion HPE (High Performance Edition) Phoenix (Gaming): h9-1100z, h9-1120t, h9-1150t, h9-1170t, h9-1135, h9-1200ex *(not customizable) HP Pavilion Wave: 600t HP Touchsmart PC: 310z, 610z, 610t, 610xt, 610 Quad HP Omni Series (All-In-One): 100z, 100t, 200t, 200xt, 200 Quad Past desktop models (Note: this is a non-exhaustive list, but it shows some of the more recent models sold under the Pavilion brand.) HP Pavilion: a255c, a445c, a1740n, a6560t, a6560z, a6510t, a6500z, a6460t, a6450z, a6410t, a6400z, a6250z, a6250t, a6210z, a6205t, a6200t, a6600z, a6608f, a6610t, ?6617?, a6660t, a6660z, a6700z, a6750f, p6300z, p6310t, p6350z, p6370t, p6380t, a705w, a000 series - Panther / Jaguar a1000 series - Mojave / Gobi a6000 / p6000 series - Venus / Venus2 HP Pavilion Slimline: s3100n, s3200t, s3200z, s3400t, s3400z, s3500t, s3500z, s3600f, s3600t, s3600z, s3700f, s3700z, s3710t, s3750t, s5305z, s5310t, s5350z, s5370t, s5380t, s5730f, s7350n HP Pavilion Media Center: a1330n, a1410n, a1600n, m7580n(XP Only), m8300, m8100y, m8200n, t000, HP Pavilion Elite/HPE: m9350f, m9300t, m9300z, m9200t, m9200z, m9000t, m9000z, d5000z, d5000t, d5100t, m9400t, m9400z, d5200t, e9300z, HPE 110t, HPE 150t, HPE 170t, HPE 180t, HPE 190t HP Pavilion Ultimate: d4999t, d4999z HP Touchsmart: iq770t, iq772t, IQ504t, IQ506t, IQ804t, 300z, 600t, 600xt, 600 Quad HP Pavilion All in One/Omni: 23SE, MS220z, 200t Model number suffixes The suffix on the model number, if present, indicates special information such as processor or country. The following chart describes each suffix. t: Intel processor z: AMD processor qe: Quad Edition sb: Small Business Series se: Special Edition y: CTO – Configure To Order f: Unknown (possibly related to image media) Two-letter country codes such as us: United States ca: Canada br: Brazil la: Latin America ap: Asian Pacific au/ax/tu/tx: Asia/Australia ea/ec/ee/eo/(e plus a letter): Eastern & Western Europe sa/sc/se/so/(s plus a letter): Eastern & Western Europe na/nc/ne/no/(n plus a letter): Eastern & Western Europe qr: Russia jp: Japan etc. Overheating problems The HP Pavilion Slimline desktops are housed in small form factor cases. As a result, they can become very hot very quickly. Notebooks HP has also produced laptops and notebooks under the Pavilion brand name. Up until 2013, some models of the Pavilion laptops were produced with Compaq Presario branding. The HP Pavilion laptops are only customizable in the United States. A variety of different models with different setups are available in other countries. 
Previous notebook models 20.1 inch: HDX9000 18.4 inch: HDX18t / dv8t 17.3 inch: ENVY 17 3D / ENVY 17 / dv7t / G72t / g7 17.0 inch: dv7 / g70t / dv9000 / dv8000 / zd8000 / zd7000 16.0 inch: HP G60-445DX 15.6 inch: Compaq Presario (CQ60 / CQ62z), dv6t / dv6z / dv6zae (Artist Edition 2) HDX16t / G60t / G62t / G62m / g6 / m6 / 15-p077tx / 15-p001tx / 15-ck069tx 15-p005x / 15-p073tx / 15-p045tx / 15-p085tx / 15-r022tx / 15-r014tx / 15-r022tx / 15-d103tx / 15-p207tx / 15-p209tx / 15-p210tx / 15-p029tx / 15-p028tx / 15-p027tx / 15-f233wm / 15-n096sa / 15-ab165us / 15-cc5xx 15.4 inch: dv5 / dv6000 / dv5000 / dv4000 / zv6000 / zv5000 / zx5000 / ze5000 / ze4000 / zt3000 15.0 inch: ze2000 / ze1000 / zt1000 / ze5170 14.5 inch: ENVY 14 14.3 inch: dv1658 14.1 inch: dv4t / dv4z / dv4tse / dv2000 series / dv1000 series 14.0 inch: dm4t / dm4x / G4t 13.3 inch: dm3t / Voodoo Envy 133 / dv3t / dv3z / dv3500t 12.1 inch: dv2z; Tablet PC: tx series / TouchSmart tx2z / tm2t 11.6 inch: dm1z HP Pavilion x2 The HP Pavilion x2 is a long-running family of devices; there are dozens of variants, across many generations of Intel processors. 10.1 inch: HP Pavilion x2 Detachable (1280 x 800 touchscreen) HP Mini 10.1 inch: HP Mini 1000 (Mi / XP / Mobile Broadband Wireless / Vivienne Tam) / HP Mini 210 / HP Mini 110 (Mi / XP) 8.9 inch: HP Mini 1000 (Mi / XP / Mobile Broadband Wireless) Model number suffixes The two- or three-letter suffix on the model number indicates special information like country or language (dv----xx). The following chart describes each suffix. t: Intel processor z: AMD processor ae: Artist Edition ("Artist Edition" imprint) bw: Broadband Wireless series sb: Small Business series se: Special Edition ("Intensity" dv4tse, "Renewal" dv5tse; "Special Edition" imprint) qe: Quad Edition (special quad-core processor, e.g. dv7tqe-6100 CTO with Intel i7) The HP Pavilion HDX was only sold with Intel processors but does not end with the suffix "t" (it has no suffix). Likewise, the HP Pavilion TX tablet PC series was only sold with AMD processors but still ended with the suffix "z". The following suffixes correspond to the region where the notebook is sold. us: United States ca: Canada la: Latin America br: Brazil ea / ee / [e + other letter]: Europe / Middle East eo / so / no: Scandinavia ec / sc / nc: Czech Republic and Slovakia au / ax: Asia / Australia - AMD processor (AU = AMD + UMA graphics; AX = AMD + discrete graphics) tu / tx: Asia / Australia - Intel processor (TU = Intel + UMA; TX = Intel + discrete) ap: Asia Pacific Other suffixes include: nr: no rebate cl: club model, available only through discount shopping clubs such as Costco and Sam's Club wm: Walmart model dx: Best Buy model od: Office Depot model st: Staples model tg: Target model HP Imprint HP Imprint was a high-gloss finish for laptop and notebook computers developed by Nissha Printing Co. of Japan in cooperation with HP. It was first introduced in May 2006 alongside a new line of HP Pavilion laptops, using an advanced molding technique commonly used in several products such as mobile phone cases and interiors for luxury automobiles, providing a durable yet fashionable design. Each unique design for HP Imprint was inlaid directly onto the moldings. An updated version of HP Imprint known as HP Imprint 2 was introduced in June 2008 alongside another new line of HP Pavilion laptops, featuring a liquid-metallic design. 
It continues to use the same advanced molding techniques as the original HP Imprint, as well as featuring several other unique designs not found in the original HP Imprint. HP Imprint was used for the following models produced from 2006 to 2009: HP Imprint (2006-2008) Wave: dv9000 / dv6000 / dv2000 / tx1000 Digi Code: Compaq Presario v3000 Trace: Compaq Presario v6700TX Radiance: dv9700 / dv9500 / dv6700 / dv6500 / dv2700 / dv2500 Influx: dv6700tse / dv6500tse / dv2842se Dragon: HDX9000 Verve: dv2700tse Echo: tx2000z / tx2500z Thrive: dv6800tse Artist Edition: dv2800tae / dv2890nr / dv2990nr HP Imprint 2 (2008-2009) Meshy: dv7 / dv6 / dv5 / dv4 / dv3000 Unity: Compaq Presario CQ20 Glossy Black Finish: Compaq Presario (CQ70 / CQ50 / CQ40) Fluid: HDX18t / HDX16t Intensity: dv4tse Renewal: dv5tse Intersect: dv7 / dv5 / dv4 / dv3500 / dv3 Swirl: HP Mini 1000 (Mi / XP / Mobile Broadband Wireless) Peony: HP Mini 1000 (Vivienne Tam) Reaction: HP TouchSmart tx2z Moonlight: dv7 / dv4 / dv2z / dv6 Espresso: dv7 / dv4 / dv2z / dv6 Notebook artwork competition In late 2007, HP held a contest in conjunction with MTV to help design a unique case artwork for a special edition HP notebook PC. The contest ran from September 5 to October 17, with over 8,500 designs from 112 countries submitted. The winner of the competition was João Oliveira of Porto, Portugal, who created a case design called "Asian Odyssey". The winning design was later implemented on the special "Artist Edition" HP Pavilion dv2800tae series notebook. In another competition, "Engine Room", a design made by Hisako Sakihama of Japan was chosen to appear on another HP notebook. Specialized features Several models of the dv series of Pavilion laptops featured HP's Linux-based software called QuickPlay, which can be booted upon startup to play music or DVDs. It incorporates several multimedia features, such as pausing playback within Windows via the included remote control. It also has a much faster load time upon startup (about 12 seconds). A Windows-based application of the same name was also developed, which includes the same features as the standard Linux-based version. Later models that were preinstalled with Windows Vista no longer had the option of booting into QuickPlay upon startup due to some unresolved compatibility issues, but still retained the multimedia features as a separate application that can be accessed from within Windows. QuickPlay has since been discontinued, being replaced with the HP MediaSmart software that was installed on all HP desktops and notebooks from 2009 onward. Overheating issues Many laptop and notebook owners experienced hardware failure in various Pavilion models produced during the late 2000s due to overheating. Symptoms of an overheating system range from missing Wi-Fi to failure of the graphics card chipset and booting problems. HP acknowledged this as a "hardware issue with certain HP Pavilion dv2000/dv6000/dv9000" notebooks, which is eligible for free repair. Other users have recommended resoldering the Nvidia GPUs on the motherboard, because the overheating could cause the solder of the built-in GPU to liquefy. In 2009, HP had to recall over 70,000 batteries that were defective as a result of overheating. Logo history References External links HP corporate homepage Pavilion Pavilion 2-in-1 PCs Consumer electronics brands Computer-related introductions in 1995
HP Pavilion
[ "Technology" ]
3,762
[ "Crossover devices", "2-in-1 PCs" ]
3,089,437
https://en.wikipedia.org/wiki/MacUpdate
MacUpdate is a Mac software download website founded in 1996. History In the Inc. 5000 list of private American companies with the fastest revenue growth, MacUpdate was listed 319th in 2008, 114th in 2009, and 233rd in 2010. MacUpdate has offered several "bundles" offering Mac software at a discounted price. The company offered an application called MacUpdate Desktop ($20/year with a 10-day trial) which automatically downloaded and installed updates to other installed applications on a user's Mac. MacUpdate Desktop has since been discontinued. In 2020, MacUpdate was acquired by Clario Tech Ltd., a London- and Kyiv-based cybersecurity company. References External links MacUpdate website Macintosh websites Download websites
MacUpdate
[ "Technology" ]
155
[ "Macintosh websites", "Computing websites" ]
3,089,443
https://en.wikipedia.org/wiki/NGC%207510
NGC 7510 is an open cluster of stars located around 11,400 light years away in the constellation Cepheus, near the border with Cassiopeia. At this distance, the light from the cluster has undergone significant extinction from interstellar gas and dust, as measured in the UBV photometric system. Its brightest member is a giant star with a stellar classification of B1.5 III. This cluster forms part of the Perseus Spiral Arm. It has been assigned a Trumpler class rating and is around 10 million years old. References External links Cepheus (constellation) 7510 Open clusters
NGC 7510
[ "Astronomy" ]
122
[ "Constellations", "Cepheus (constellation)" ]
3,089,478
https://en.wikipedia.org/wiki/Standard%20normal%20table
In statistics, a standard normal table, also called the unit normal table or Z table, is a mathematical table for the values of Φ, the cumulative distribution function of the normal distribution. It is used to find the probability that a statistic is observed below, above, or between values on the standard normal distribution, and by extension, any normal distribution. Since probability tables cannot be printed for every normal distribution, as there are an infinite variety of normal distributions, it is common practice to convert a normal to a standard normal (known as a z-score) and then use the standard normal table to find probabilities. Normal and standard normal distribution Normal distributions are symmetrical, bell-shaped distributions that are useful in describing real-world data. The standard normal distribution, represented by Z, is the normal distribution having a mean of 0 and a standard deviation of 1. Conversion If X is a random variable from a normal distribution with mean μ and standard deviation σ, its Z-score may be calculated from X by subtracting μ and dividing by the standard deviation: Z = (X − μ)/σ. If X̄ is the mean of a sample of size n from some population in which the mean is μ and the standard deviation is σ, the standard error is σ/√n, and Z = (X̄ − μ)/(σ/√n). If T is the total of a sample of size n from some population in which the mean is μ and the standard deviation is σ, the expected total is nμ, the standard error is σ√n, and Z = (T − nμ)/(σ√n). Reading a Z table Formatting / layout Z tables are typically composed as follows: The label for rows contains the integer part and the first decimal place of Z. The label for columns contains the second decimal place of Z. The values within the table are the probabilities corresponding to the table type. These probabilities are calculations of the area under the normal curve from the starting point (0 for cumulative from mean, negative infinity for cumulative and positive infinity for complementary cumulative) to Z. Example: To find 0.69, one would look down the rows to find 0.6 and then across the columns to 0.09, which would yield a probability of 0.25490 for a cumulative from mean table or 0.75490 from a cumulative table. To find a negative value such as -0.83, one could use a cumulative table for negative z-values, which yields a probability of 0.20327. But since the normal distribution curve is symmetrical, probabilities for only positive values of Z are typically given. The user might have to use a complementary operation on the absolute value of Z, as in the example below. Types of tables Z tables use at least three different conventions: Cumulative from mean gives a probability that a statistic is between 0 (the mean) and Z. Example: Prob(0 ≤ Z ≤ 0.69) = 0.2549. Cumulative gives a probability that a statistic is less than Z. This equates to the area of the distribution below Z. Example: Prob(Z ≤ 0.69) = 0.7549. Complementary cumulative gives a probability that a statistic is greater than Z. This equates to the area of the distribution above Z. Example: Find Prob(Z ≥ 0.69). Since this is the portion of the area above Z, the proportion that is greater than Z is found by subtracting the cumulative value from 1. That is Prob(Z ≥ 0.69) = 1 − Prob(Z ≤ 0.69), or Prob(Z ≥ 0.69) = 1 − 0.7549 = 0.2451. Table examples Cumulative from minus infinity to Z This table gives a probability that a statistic is between minus infinity and Z. The values are calculated using the cumulative distribution function of a standard normal distribution with mean of zero and standard deviation of one, usually denoted with the capital Greek letter Φ (phi), which is the integral Φ(z) = (1/√(2π)) ∫_(−∞)^z e^(−t²/2) dt. Φ(z) is related to the error function by Φ(z) = (1/2)[1 + erf(z/√2)]. 
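Because Φ is available through the error function, the table entries quoted above can be regenerated with a few lines of standard-library Python; this is just a sketch of the relationship, not part of the tables themselves.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF: phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(f"{phi(0.69):.5f}")        # 0.75490 (cumulative table entry for 0.69)
print(f"{phi(0.69) - 0.5:.5f}")  # 0.25490 (cumulative-from-mean entry for 0.69)
print(f"{phi(-0.83):.5f}")       # 0.20327 (cumulative entry for -0.83)
```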
Note that for z = 1, 2, and 3, one obtains (after multiplying by 2 to account for the interval from −z to z) the results 0.6827, 0.9545, and 0.9973, characteristic of the 68–95–99.7 rule. Cumulative (less than Z) This table gives a probability that a statistic is less than Z (i.e. between negative infinity and Z). Complementary cumulative This table gives a probability that a statistic is greater than Z. A further table gives the probability that a statistic is greater than Z, for large integer values of Z. Examples of use A professor's exam scores are approximately distributed normally with mean 80 and standard deviation 5. Only a cumulative from mean table is available. What is the probability that a student scores an 82 or less? Prob(X ≤ 82) = Prob(Z ≤ (82 − 80)/5) = Prob(Z ≤ 0.40) = 0.5 + 0.1554 = 0.6554. What is the probability that a student scores a 90 or more? Prob(X ≥ 90) = Prob(Z ≥ (90 − 80)/5) = Prob(Z ≥ 2.00) = 0.5 − 0.4772 = 0.0228. What is the probability that a student scores a 74 or less? Prob(X ≤ 74) = Prob(Z ≤ (74 − 80)/5) = Prob(Z ≤ −1.20). Since this table does not include negatives, the process involves the following additional step: by symmetry, Prob(Z ≤ −1.20) = 0.5 − Prob(0 ≤ Z ≤ 1.20) = 0.5 − 0.3849 = 0.1151. What is the probability that a student scores between 74 and 82? Prob(74 ≤ X ≤ 82) = Prob(−1.20 ≤ Z ≤ 0.40) = 0.3849 + 0.1554 = 0.5403. What is the probability that an average of three scores is 82 or less? The standard error is 5/√3 ≈ 2.887, so Prob(X̄ ≤ 82) = Prob(Z ≤ (82 − 80)/2.887) = Prob(Z ≤ 0.69) = 0.5 + 0.2549 = 0.7549. See also 68–95–99.7 rule t-distribution table References Normal distribution Mathematical tables
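The worked exam examples above can be cross-checked numerically; the sketch below reuses the erf-based phi helper and computes the exact values, which differ from the table answers only where the table method rounds z to two decimal places.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mean, sd, n = 80.0, 5.0, 3

print(f"{phi((82 - mean) / sd):.4f}")              # 0.6554: score of 82 or less
print(f"{1 - phi((90 - mean) / sd):.4f}")          # 0.0228: score of 90 or more
print(f"{phi((74 - mean) / sd):.4f}")              # 0.1151: score of 74 or less
print(f"{phi(0.4) - phi(-1.2):.4f}")               # 0.5404 (table method: 0.5403)
print(f"{phi((82 - mean) / (sd / sqrt(n))):.4f}")  # 0.7558 (table method, z ~ 0.69: 0.7549)
```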
Standard normal table
[ "Mathematics" ]
915
[ "Mathematical tables" ]
3,089,791
https://en.wikipedia.org/wiki/NGC%20663
NGC 663 (also known as Caldwell 10) is a young open cluster in the constellation of Cassiopeia. It has an estimated 400 stars and spans about a quarter of a degree across the sky. It can reportedly be detected with the unaided eye, although a telescope is recommended for best viewing. The brightest members of the cluster can be viewed with binoculars. Although the listed visual magnitude is 7.1, several observers have reported brighter estimates. After adjusting for reddening due to interstellar dust, the distance modulus is estimated as 11.6 magnitudes. It is located about 2,100 parsecs distant, with an estimated age of 20–25 million years. This means that stars of spectral class B2 or higher (in the sense of higher mass) are reaching the end of their main sequence lifespan. This cluster appears to be located in front of a molecular cloud, although the two are not physically associated. This cloud, which lies at a distance of 300 parsecs, has the effect of blocking background stars from the visual image of the cluster. This cluster is of interest because of the high number of Be stars, with a total of about 24 discovered. These are spectral class B stars that show prominent emission lines of hydrogen in their spectrum. Most of the Be stars in the cluster lie between spectral class B0 and B3. A candidate member of the cluster, LS I +61° 235, is a Be star with an X-ray binary component that has a period of about three years. There are at least five blue stragglers in the cluster. These are stars that formed by the merger of two other stars. Two of the cluster's star systems are likely eclipsing binaries with periods of 0.6 and 1.03 days. NGC 663 also has two red supergiant stars, both located on its periphery. The star cluster is assumed to form part of the stellar association Cassiopeia OB8, which is located in the Perseus arm of the Milky Way, along with the open clusters M103, NGC 654, NGC 659, and some supergiant stars scattered between them, all of them having similar ages and distances. Image gallery References External links Open clusters Cassiopeia (constellation) 0663 010b
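As a quick consistency check on the figures quoted above, the distance modulus of 11.6 magnitudes can be converted back into a distance with the standard relation m − M = 5 log10(d) − 5, with d in parsecs; the short sketch below is illustrative only.

```python
from math import log10  # imported for reference; the inverse relation only needs exponentiation

def distance_pc(distance_modulus):
    """Distance in parsecs implied by a distance modulus mu, from mu = 5*log10(d) - 5."""
    return 10 ** (distance_modulus / 5.0 + 1.0)

print(round(distance_pc(11.6)))  # ~2089 pc, consistent with the ~2,100 pc quoted above
```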
NGC 663
[ "Astronomy" ]
470
[ "Cassiopeia (constellation)", "Constellations" ]
3,089,867
https://en.wikipedia.org/wiki/NGC%20869
NGC 869 (also known as h Persei) is an open cluster located 7,460 light years away in the constellation of Perseus. The cluster is about 14 million years old. It is the westernmost of the Double Cluster with NGC 884. NGC 869 and 884 are often designated h and χ (chi) Persei, respectively. Some confusion surrounds what Bayer intended by these designations. It is sometimes claimed that Bayer did not resolve the pair into two patches of nebulosity, and that χ refers to the Double Cluster and h to a nearby star. Bayer's Uranometria chart for Perseus does not show them as nebulous objects, but his chart for Cassiopeia does, and they are described as Nebulosa Duplex in Schiller's Coelum Stellatum Christianum, which was assembled with Bayer's help. The clusters are both located in the Perseus OB1 association, a few hundred light years apart from each other. The clusters were first recorded by Hipparchus and have thus been known since antiquity. The Double Cluster is often photographed and observed with small telescopes. The clusters are visible with the unaided eye between the constellations of Perseus and Cassiopeia as a brighter patch in the winter Milky Way. In small telescopes the cluster appears as an assemblage of bright stars located in a rich star field. Dominated by bright blue stars, the cluster also hosts a few orange stars. References External links NGC 869 at SEDS NGC 869 at Silicon Owl NGC 869 at Messier45 Open clusters Perseus (constellation) 0869 Persei, h 014b Astronomical objects known since antiquity
NGC 869
[ "Astronomy" ]
351
[ "Perseus (constellation)", "Constellations" ]
3,090,031
https://en.wikipedia.org/wiki/Ubuntu%20Forums
The Ubuntu Forums is the official forum for the Ubuntu operating system. As of May 2022, the Ubuntu Forums has 2.1 million registered members and more than 2.2 million threads. The Ubuntu Forums currently runs on the forum software vBulletin. On July 20, 2013, the site was compromised, with attacker(s) both defacing the site and gaining access to "all user email addresses and hashed passwords". The site was compromised once again on July 15, 2016, when "usernames, email addresses and IPs for 2 million users" were compromised, but "no active passwords" were accessed. History The Ubuntu Forums were created by Ryan Troy in October 2004. The forums became a popular resource for Ubuntu and were deemed the Official Ubuntu Forums in November 2004. The forums' hosting continued to be paid for by Ryan and the occasional donations of forum members until March 2006, when Canonical offered to host the forums on its own servers. In June 2007, the forums' domain name, license, and assets were all transferred to Canonical, which now has sole ownership. Role The primary function of the Ubuntu Forums is Ubuntu support, but they also have a popular community area where other topics may be discussed. Governance The Ubuntu Forums are governed by a moderation team made up of volunteers, often referred to as the Forum Staff. The Forum Staff have three ranks: Administrators, Super Moderators and Moderators. The Administrators serve on the Forum Council. See also List of Internet forums References Computing websites Internet properties established in 2004 Internet services supporting OpenID Knowledge markets Ubuntu
Ubuntu Forums
[ "Technology" ]
350
[ "Computing websites" ]
3,090,068
https://en.wikipedia.org/wiki/Pentacene
Pentacene (C22H14) is a polycyclic aromatic hydrocarbon consisting of five linearly-fused benzene (C6H6) rings. This highly conjugated compound is an organic semiconductor. The compound generates excitons upon absorption of ultra-violet (UV) or visible light; this makes it very sensitive to oxidation. For this reason, this compound, which is a purple powder, slowly degrades upon exposure to air and light. Structurally, pentacene is one of the linear acenes, the previous one being tetracene (four fused benzene rings) and the next one being hexacene (six fused benzene rings). In August 2009, a group of researchers from IBM published experimental results of imaging a single molecule of pentacene using an atomic force microscope. In July 2011, they used a modification of scanning tunneling microscopy to experimentally determine the shapes of the highest occupied and lowest unoccupied molecular orbitals. In 2012, pentacene-doped p-terphenyl was shown to be effective as the amplifier medium for a room-temperature maser. Synthesis The compound, originally called dinaphthanthracene after naphthalene and anthracene (modern nomenclature for polyacenes, including pentacene, was only introduced in 1939 by Erich Clar), was first synthesized in 1912 by British chemists William Hobson Mills and Mildred May Gostling. A classic method for pentacene synthesis is the Elbs reaction. Pentacenes can also be prepared by extrusion of a small volatile component (carbon monoxide) from a suitable precursor at 150 °C. The precursor itself is prepared in three steps from two molecules of α,α,α',α'-tetrabromo-o-xylene with a 7-tert-butoxybicyclo[2.2.1]hepta-2,5-diene: first heating with sodium iodide in dimethylformamide to undergo a series of elimination and Diels–Alder reactions to form the ring system, then hydrolysing the tert-butoxy group to an alcohol, followed by its oxidation to the ketone. The product is reported to have some solubility in chloroform and is therefore amenable to spin coating. Pentacene is soluble in hot chlorinated benzenes, such as 1,2,4-trichlorobenzene, from which it can be recrystallized to form platelets. Pentacene derivatives Monomeric pentacene derivatives 6,13-Substituted pentacenes are accessible through pentacenequinone by reaction with an aryl or alkynyl nucleophile (for example, Grignard or organolithium reagents) followed by reductive aromatization. Another method is based on homologization of diynes by transition metals (through zirconacyclopentadienes). Functionalization of pentacene has allowed for control of the solid-state packing of this chromophore. The choice of the substituents (both size and location of substitution on the pentacene) influences the solid-state packing and can be used to control whether the compound adopts 1-dimensional or 2-dimensional cofacial pi-stacking in the solid state, as opposed to the herringbone packing observed for pentacene itself. Although pentacene's structure resembles that of other aromatic compounds like anthracene, its aromatic properties are poorly defined; as such, pentacene and its derivatives are the subject of much research. A tautomeric chemical equilibrium exists between 6-methylene-6,13-dihydropentacene and 6-methylpentacene. This equilibrium is entirely in favor of the methylene compound. Only by heating a solution of the compound to 200 °C does a small amount of the pentacene develop, as evidenced by the emergence of a red-violet color. 
According to one study, the reaction mechanism for this equilibrium is not based on an intramolecular 1,5-hydride shift, but on a bimolecular free-radical hydrogen migration. In contrast, isotoluenes with the same central chemical motif easily aromatize. Pentacene reacts with elemental sulfur in 1,2,4-trichlorobenzene to give the compound hexathiapentacene. X-ray crystallography shows that all the carbon-to-sulfur bond lengths are roughly equal (170 pm); from this, it follows that resonance structures B and C with complete charge separation are more significant than structure A. In the crystal phase the molecules display aromatic stacking interactions, whereby the distance between some sulfur atoms on neighboring molecules (337 pm) can become less than the sum of two van der Waals radii (2 × 180 pm = 360 pm). Like the related tetrathiafulvalene, this compound is studied in the field of organic semiconductors. The acenes may appear to be planar and rigid molecules, but in fact they can be very distorted. One heavily substituted pentacene has an end-to-end twist of 144° and is sterically stabilized by its six phenyl groups. The compound can be resolved into its two enantiomers, with an unusually high reported optical rotation of 7400°, although racemization takes place with a chemical half-life of 9 hours. Oligomers and polymers of pentacene Oligomers and polymers based on pentacene have been explored both synthetically and in device application settings. Polymer light-emitting diodes (PLEDs) have been constructed using conjugated copolymers (1a–b) containing fluorene and pentacene. A few other conjugated pentacene polymers (2a–b and 3) have been realized based on Sonogashira and Suzuki coupling reactions of a dibromopentacene monomer. Non-conjugated pentacene-based polymers have been synthesized via esterification of a pentacene diol monomer with bis-acid chlorides to form polymers 4a–b. Various synthetic strategies have been employed to form conjugated oligomers of pentacene 5a–c, including a one-pot, four-bond-forming procedure which provided a solution-processable conjugated pentacene dimer (5c); this dimer exhibited photoconductive gain >10, placing its performance within the same order of magnitude as thermally evaporated films of non-functionalized pentacene, which exhibited photoconductive gain >16 using analogous measurement techniques. A modular synthetic method to conjugated pentacene di-, tri- and tetramers (6–8) has been reported which is based on homo- and cross-coupling reactions of robust dehydropentacene intermediates. Non-conjugated oligomers based on pentacene have been synthesized, including dendrimers 9–10 with up to 9 pentacene moieties per molecule and a molar absorptivity for the most intense absorption > 2,000,000 M−1•cm−1. Dendrimers 11–12 were shown to have improved performance in devices compared to analogous pentacene-based polymers 4a–b in the context of photodetectors. Materials research Pentacenes have been examined as potential dichroic dyes. One pentacenoquinone derivative is fluorescent, and when mixed with the liquid crystal E7 mixture a dichroic ratio of 8 is reached. Longer acenes align better in the nematic liquid crystal phase. Combined with buckminsterfullerene, pentacene is used in the development of organic photovoltaic prototypes. Organic photovoltaic cells are cheaper and more flexible than traditional inorganic cells, which could potentially open doors to solar cells in new markets. 
Pentacene is a popular choice for research on organic thin-film transistors and OFETs, being one of the most thoroughly investigated conjugated organic molecules, with a high application potential due to a hole mobility in OFETs of up to 5.5 cm2/(V·s), which exceeds that of amorphous silicon. Pentacene, as well as other organic conductors, is subject to rapid oxidation in air, which precludes commercialization. If the pentacene is preoxidized (the resulting pentacene-quinone can act as a gate insulator), the mobility can approach that of rubrene – the highest-mobility organic semiconductor – namely, 40 cm2/(V·s). This pentacene oxidation technique is akin to the silicon oxidation used in silicon electronics. See also Perfluoropentacene References External links facts about pentacene, retrieved Apr. 17, 2006 Organic transistor improves with age, New Scientist, 2 December 2007 Pentacene Imaged, IBM images Pentacene, the first molecule imaged in detail 29 August 2009 Organic semiconductors Acenes Polycyclic aromatic hydrocarbons Pentacyclic compounds
Pentacene
[ "Chemistry" ]
1,887
[ "Semiconductor materials", "Molecular electronics", "Organic semiconductors" ]
3,090,195
https://en.wikipedia.org/wiki/Houston%20Ship%20Channel
The Houston Ship Channel, in Houston, Texas, is part of the Port of Houston, one of the busiest seaports in the world. The channel is the conduit for ocean-going vessels between Houston-area terminals and the Gulf of Mexico, and it serves an increasing volume of inland barge traffic. Overview The channel is a widened and deepened natural watercourse created by dredging Buffalo Bayou and Galveston Bay. The channel's upstream terminus lies about four miles east of downtown Houston, at the Turning Basin, with its downstream terminus at a gateway to the Gulf of Mexico, between Galveston Island and the Bolivar Peninsula. Major products, such as petrochemicals and Midwestern grain, are transported in bulk together with general cargo. The original watercourse for the channel, Buffalo Bayou, has its headwaters to the west of the city of Houston. The navigational head of the channel, the most upstream point to which general cargo ships can travel, is at the Turning Basin in east Houston. The channel has numerous terminals and berthing locations along Buffalo Bayou and Galveston Bay. The major public terminals include Turning Basin, Barbours Cut, and Bayport. Many private docks are there as well, including the ExxonMobil Baytown Complex and the Deer Park Complex. The channel has been occasionally widened and deepened to accommodate ever-larger ships. The islands in the ship channel are part of the ongoing widening and deepening project. The islands are formed from soil pulled up by dredging, and the salt marshes and bird islands are part of the Houston Port Authority's beneficial-use and environmental-mitigation responsibilities. The channel has five vehicle crossings: the Washburn Tunnel; the Sidney Sherman Bridge; the Sam Houston Ship Channel Bridge, popularly known as the Beltway 8 Bridge ("Two Dollar Bridge" is another local nickname); the Fred Hartman Bridge, connecting La Porte and Baytown, Texas; and the Lynchburg Ferry. History John Richardson Harris platted the town of Harrisburg, Texas, on Buffalo Bayou at the mouth of Brays Bayou in 1826. He established a steam mill there, while making Harrisburg into a logistical center for Austin's Colony. He plied his schooner The Rights of Man through the waters of Galveston Bay and Buffalo Bayou, importing supplies from the United States, and exporting cotton and hides. However, fewer people settled Buffalo Bayou than the fertile Brazos Valley, so Harrisburg remained remote overland from the critical mass of farmlands: about 20 miles from Fort Bend, Texas, and about 40 miles from San Felipe de Austin, Texas. Travelling the Brazos River presented several hazards, most of all its shifting, shallow sandbars at its mouth. Despite several interventions, the river remained hostile to navigation. Nicholas Clopper acquired land downstream from Harrisburg, the eponymously named Clopper's Point. He recruited six men from Ohio to work as traders, who sailed the schooner Little Zoe from Cincinnati laden with supplies such as flour and spices, nails and other hardware, and whiskey and tobacco. Two of these hires were his sons, Edward and Joseph Clopper. They recorded their travels in a journal, reporting several hazards of Galveston Bay en route to Buffalo Bayou. They ran Little Zoe aground on Galveston Island and later observed two wrecked ships in the bay. They encountered the shallow Red Fish Bar, which they passed by dragging over it. The channel has been used to move goods to the sea since at least 1836. 
Buffalo Bayou and Galveston Bay were dredged during the late 19th and early 20th centuries to accommodate larger ships. In the wake of the 1900 Galveston hurricane, the inland Port of Houston was seen as a safer long-term option, and planning for a larger ship channel began. By the mid-1900s the Port of Houston had established itself as the leading port in Texas, eclipsing the natural harbors at Galveston and Texas City. The Turning Basin terminal in Harrisburg (now part of Houston) became the port's largest shipping point. On January 10, 1910, residents of Harris County voted 16 to 1 to fund dredging the Houston ship channel to a depth of 25 feet for the amount of $1,250,000, which was then matched by federal funds. On June 14, 1914, the first deepwater ship, the steamship Satilla, arrived at the port of Houston, establishing steamboat service between New York City and Houston. On November 10, 1914, President Woodrow Wilson opened the Houston Ship Channel, part of the Port of Houston. The onset of World War I and the first mechanized war's thirst for oil greatly increased use of the ship channel. The United States Army Corps of Engineers increased the depth of the channel from 25 to 30 feet in 1922. In 1933, the United States Department of War and the United States House Committee on Rivers and Harbors approved a plan to increase the depth of the channel from 30 to 34 feet and widen the Galveston Bay section from 250 to 400 feet. The Public Works Administration provided $2,800,000 for the project, which was completed in late 1935. The proximity to Texas oilfields led to the establishment of numerous petrochemical refineries along the waterway, such as the ExxonMobil Baytown installation on the eastern bank of the San Jacinto River. Now the channel and surrounding area support the second-largest petrochemical complex in the world. While much of the Houston Ship Channel is associated with heavy industry, an icon of Texas history is also located along its length. The San Jacinto Monument commemorates the Battle of San Jacinto (1836), in which Texas won its independence from Mexico. From 1948 to 2022, the museum ship USS Texas was also berthed along the channel's path. She saw service during both world wars, and is the oldest remaining example of a dreadnought-era battleship in existence. In 2022, the USS Texas was permanently relocated from her berth along the channel. The US Army's San Jacinto Ordnance Depot was located on the channel from 1941 to 1964. During World War II, two large shipyards operated side by side at the confluence of Greens Bayou: Todd Houston Shipbuilding built mostly Liberty ships, and Brown Shipbuilding built a substantial number of destroyer escorts, submarine chasers and amphibious landing craft. Currently, the channel is dredged to a depth of 43–45 feet. The channel was designated a National Civil Engineering Landmark by the American Society of Civil Engineers (ASCE) in 1987. The "Texas chicken" maneuver is known to mariners who regularly navigate large vessels on the Houston Ship Channel. Pollution On December 25, 2007, the Houston Ship Channel was featured on the CNN special Planet in Peril as a potential polluter of nearby neighborhoods. That year, the University of Texas released a study suggesting that children living near the Houston Ship Channel were 56% more likely to become sick with leukemia than the national average. 
On March 22, 2014, a barge carrying nearly a million gallons of marine fuel oil collided with another ship in the Houston Ship Channel, causing the contents of one of the barge's 168,000-gallon tanks to leak into Galveston Bay. See also Phillips disaster of 1989 I-610 Ship Channel Bridge References External links Time-lapse video of a barge navigating a length of the Houston Ship Channel at night See historical photographs of the Houston Ship Channel, the Houston community, and more at the University of Houston Digital Library Ship canals Canals in Texas Greater Houston Galveston Bay Area Geography of Houston Transportation in Chambers County, Texas Transportation in Galveston County, Texas Historic Civil Engineering Landmarks Buildings and structures in Chambers County, Texas Buildings and structures in Galveston County, Texas Transportation buildings and structures in Harris County, Texas Canals opened in 1914 1914 establishments in Texas
Houston Ship Channel
[ "Engineering" ]
1,558
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
3,090,255
https://en.wikipedia.org/wiki/Cun%20%28unit%29
A cun (Pinyin: cùn), often glossed as the Chinese inch, is a traditional Chinese unit of length. Its traditional measure is the width of a person's thumb at the knuckle, whereas the width of the two forefingers denotes 1.5 cun and the width of four fingers (except the thumb) side by side is 3 cun. It continues to be used to chart acupuncture points on the human body, and in various other uses in traditional Chinese medicine. The cun was part of a larger decimal system. A cun was made up of 10 fen, which, depending on the period, approximated the lengths or widths of millet grains, and represented one-tenth of a chi ("Chinese foot"). In time the lengths were standardized, although to different values in different jurisdictions. (See Chi (unit) for details.) In Hong Kong, using the traditional standard, it measures ~3.715 cm (~1.463 in) and is written "tsun". In the twentieth century in the Republic of China, the lengths were standardized to fit the metric system, and in current usage in the People's Republic of China and Taiwan it measures ~3.33 cm (~1.312 in). In Japan, the corresponding unit, the sun, was standardized at 30.3 mm (3.03 cm, ~1.193 in, or ~0.09942 ft). See also shaku References External links Cun measurements Units of length Human-based units of measurement
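Because the same unit name corresponds to different standardized lengths, any conversion has to name the standard in use. The short Python sketch below makes this explicit; the dictionary keys are labels of my own choosing, and the values are the nominal standards quoted in this article:

CUN_CM = {
    "hong_kong_tsun": 3.715,       # traditional standard, ~1.463 in
    "prc_taiwan_cun": 10.0 / 3.0,  # metric cun, 1/30 m, ~1.312 in
    "japan_sun": 3.03,             # 30.3 mm, ~1.193 in
}

def cun_to_cm(value, standard="prc_taiwan_cun"):
    """Convert a length in cun to centimetres under the chosen standard."""
    return value * CUN_CM[standard]

def cm_to_cun(value, standard="prc_taiwan_cun"):
    """Convert a length in centimetres to cun under the chosen standard."""
    return value / CUN_CM[standard]

# Example: the four-finger width of 3 cun under the metric standard.
print(cun_to_cm(3))                          # 10.0 cm
print(cm_to_cun(3.715, "hong_kong_tsun"))    # 1.0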
Cun (unit)
[ "Mathematics" ]
320
[ "Quantity", "Units of measurement", "Units of length" ]
3,090,294
https://en.wikipedia.org/wiki/Rubrene
Rubrene (5,6,11,12-tetraphenyltetracene) is the organic compound with the formula C42H28. It is a red-colored polycyclic aromatic hydrocarbon. Because of its distinctive optical and electrical properties, rubrene has been extensively studied. It has been used as a sensitiser in chemiluminescence and as a yellow light source in lightsticks. Electronic properties As an organic semiconductor, the major application of rubrene is in organic light-emitting diodes (OLEDs) and organic field-effect transistors, which are the core elements of flexible displays. Single-crystal transistors can be prepared using crystalline rubrene, which is grown in a modified zone furnace on a temperature gradient. This technique, known as physical vapor transport, was introduced in 1998. Rubrene holds the distinction of being the organic semiconductor with the highest carrier mobility, reaching 40 cm2/(V·s) for holes. This value was measured in OFETs prepared by peeling a thin layer of single-crystalline rubrene and transferring it to a Si/SiO2 substrate. Crystal structure Several polymorphs of rubrene are known. Crystals grown from vapor in vacuum can adopt monoclinic, triclinic, or orthorhombic forms. Orthorhombic crystals (space group Bbam) are obtained in a closed system in a two-zone furnace at ambient pressure. Synthesis Rubrene is prepared by treating 1,1,3-triphenyl-2-propyn-1-ol with thionyl chloride. The resulting chloroallene undergoes dimerization and dehydrochlorination to give rubrene. Redox properties Rubrene, like other polycyclic aromatic molecules, undergoes redox reactions in solution. It oxidizes and reduces reversibly at 0.95 V and −1.37 V vs SCE, respectively. When the cation and anion are co-generated in an electrochemical cell, they can combine, annihilating their charges and producing an excited rubrene molecule that emits at 540 nm. This phenomenon is called electrochemiluminescence. References Polycyclic aromatic hydrocarbons Organic semiconductors Fluorescent dyes
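The redox potentials quoted above fix the energy released when a rubrene cation and anion annihilate, and that energy predicts the emission wavelength. The following back-of-the-envelope check in Python uses only the values given in this article together with the standard value of hc; it is an illustration, not a rigorous photophysical calculation:

E_OX, E_RED = 0.95, -1.37    # reversible redox potentials, volts vs SCE
HC_EV_NM = 1239.84           # h*c expressed in eV*nm

gap_ev = E_OX - E_RED        # energy available per annihilation, ~2.32 eV
wavelength_nm = HC_EV_NM / gap_ev

print(f"redox gap: {gap_ev:.2f} eV -> photon near {wavelength_nm:.0f} nm")
# Prints ~534 nm, consistent with the observed 540 nm
# electrochemiluminescence from the rubrene excited state.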
Rubrene
[ "Chemistry" ]
474
[ "Semiconductor materials", "Molecular electronics", "Organic semiconductors" ]
3,090,298
https://en.wikipedia.org/wiki/Trovafloxacin
Trovafloxacin (sold as Trovan by Pfizer and Turvel by Laboratorios Almirall) is a broad-spectrum antibiotic that inhibits the uncoiling of supercoiled DNA in various bacteria by blocking the activity of DNA gyrase and topoisomerase IV. It was withdrawn from the market due to the risk of hepatotoxicity. It had better Gram-positive bacterial coverage but less Gram-negative coverage than the previous fluoroquinolones. Adverse reactions Trovafloxacin use is significantly restricted due to its high potential for inducing serious and sometimes fatal liver damage. Currently, the drug is not approved for use in the U.S. or the European Union due to its association with cases of acute liver failure and death. Manufacturing The key reaction in building the ring system is the 1,3-dipolar cycloaddition of ethyl diazoacetate to N-Cbz-3-pyrroline to afford the pyrazoline (3). Pyrolysis results in loss of nitrogen and formation of the cyclopropylpyrrolidine ring. The stereochemistry of the ring fusion simply reflects the thermodynamics, since cis ring fusion is by far the most stable arrangement, as is the cis configuration of the ester group. The ester is then saponified to the corresponding carboxylic acid (5). The acid undergoes a version of the Curtius rearrangement when treated with diphenylphosphoryl azide (DPPA) to afford the transient isocyanate (6). The reactive function adds t-BuOH from the reaction medium to afford the product as its tert-butyloxycarbonyl (Boc) derivative (7). Catalytic hydrogenation then removes the carbobenzyloxy protecting group to afford the secondary amine (8). In a reaction standard in quinolone chemistry, this amine is then used to displace the more reactive fluorine at the 7-position of ethyl 1-(2,4-difluorophenyl)-6,7-difluoro-4-oxo-1,4-dihydro-1,8-naphthyridine-3-carboxylate (9). Society and culture Legal status The U.S. Food and Drug Administration approved trovafloxacin for therapeutic use in December 1997 for use in patients aged 18 years and older. In June 1999, the agency advised doctors to limit the prescription of trovafloxacin due to adverse events associated with the drug (over 100 cases of acute liver injury reported to the FDA). In May 2000, the FDA withdrew marketing authorisation for trovafloxacin. Trovafloxacin received marketing authorisation in the European Union in October 1998. In June 1999, in view of reported adverse events, the Committee for Proprietary Medicinal Products recommended suspension of the marketing authorisation for a year. The suspension took effect in August 1999 and was renewed in September 2000. In October 2000, Pfizer notified the European Commission of its decision to voluntarily withdraw the marketing authorisation, which was approved by the EMA in March 2001. Economics Trovan sales during its first full year on the market contributed US$160 million of Pfizer's total revenue of US$12.6 billion. Investors expected it to eventually bring in US$1 billion per year. Nigerian clinical trial controversy In 1996, during a meningitis epidemic in Kano, Nigeria, the drug was administered to approximately 200 infected children. Eleven children died in the trial: five after taking Trovan and six after taking an older antibiotic used for comparison in the clinical trial. Others suffered blindness, deafness and brain damage, common consequences of meningitis that have not been seen in patients treated with trovafloxacin for other infection types. 
An investigation by The Washington Post, published in December 2000, concluded that Pfizer had administered the drug as part of an illegal clinical trial without authorization from the Nigerian government or consent from the children's parents, and the report sparked significant public outcry. The most serious error was the falsification and backdating of an ethics approval letter by the lead investigator of the trial, Dr. Abdulhamid Isa Dutse, who is now the chief medical officer of Aminu Kano Teaching Hospital. The result of the trial was that children treated with oral trovafloxacin had a 5% (5/100) mortality rate compared to a 6% (6/100) mortality rate with intramuscular ceftriaxone. Between 2002 and 2005 the victims of the Trovan tests in Nigeria filed a series of unsuccessful lawsuits in the United States. However, in January 2009, the United States Court of Appeals for the Second Circuit ruled that the Nigerian victims and their families were entitled to bring suit against Pfizer in the United States under the Alien Tort Statute. A US$75 million settlement with the State of Kano was reached on July 30, 2009. Two further lawsuits remain pending in New York. According to US embassy cables released by WikiLeaks, Pfizer's country manager admitted that "Pfizer had hired investigators to uncover corruption links to federal attorney general Michael Aondoakaa to expose him and put pressure on him to drop the federal cases." See also Alatrofloxacin, a prodrug of trovafloxacin for intravenous administration Quinolone References External links Abdullahi v Pfizer. US Court of Appeals 2d Cir 30 Jan 2009 Fluoroquinolone antibiotics Hepatotoxins Withdrawn drugs Naphthyridines Alpha-keto acids Aromatic ketones Nitrogen heterocycles Heterocyclic compounds with 2 rings Cyclopropanes Amines Fluorobenzene derivatives
Trovafloxacin
[ "Chemistry" ]
1,243
[ "Drug safety", "Functional groups", "Amines", "Bases (chemistry)", "Withdrawn drugs" ]
3,090,377
https://en.wikipedia.org/wiki/9%2C10-Diphenylanthracene
9,10-Diphenylanthracene is a polycyclic aromatic hydrocarbon. It has the appearance of a slightly yellow powder. 9,10-Diphenylanthracene is used as a sensitiser in chemiluminescence. In lightsticks it is used to produce blue light. It is a molecular organic semiconductor, used in blue OLEDs and OLED-based displays. See also 2-Chloro-9,10-diphenylanthracene, a chlorinated derivative References External links Polycyclic aromatic hydrocarbons, Australian National Pollutant Inventory Organic semiconductors Anthracenes
9,10-Diphenylanthracene
[ "Chemistry" ]
136
[ "Semiconductor materials", "Molecular electronics", "Organic semiconductors" ]
3,090,379
https://en.wikipedia.org/wiki/Rubble
Rubble is broken stone, of irregular size, shape and texture; undressed especially as a filling-in. Rubble naturally found in the soil is known also as 'brash' (compare cornbrash). Where present, it becomes more noticeable when the land is ploughed or worked. Building "Rubble-work" is a name applied to several types of masonry. One kind, where the stones are loosely thrown together in a wall between boards and grouted with mortar almost like concrete, is called in Italian "muraglia di getto" and in French "bocage". In Pakistan, walls made of rubble and concrete, cast in a formwork, are called 'situ', which probably derives from Sanskrit (similar to the Latin 'in situ', meaning 'made on the spot'). Work executed with more or less large stones put together without any attempt at courses is called rubble walling. Where similar work is laid in courses, it is known as coursed rubble. Dry-stone walling is somewhat similar work done without the use of mortar. It is bound together by the fit of the stones and the regular placement of stones which extend through the thickness of the wall. A rubble wall built with mortar will be stronger if assembled in this way. Rubble walls in Malta Rubble walls are found all over the island of Malta. Similar walls are also frequently found in Sicily and the Arab countries. The various shapes and sizes of the stones used to build these walls suggest that they were simply found in the area, lying on the ground or in the soil. It is most probable that the practice of building these walls around fields was inspired by the Arabs during their rule in Malta, as in Sicily, which was also ruled by the Arabs around the same period. The Maltese farmer found the technique of these walls very useful, especially in an era when resources were limited. Rubble walls serve as boundaries between the property of one farm and another. A great advantage that rubble walls offer is that when heavy rain falls, their structure allows the excess water to pass through, so that it does not ruin the crops. Soil erosion is minimised because the wall structure allows water to pass through while trapping the soil and preventing it from being carried away from the field. One can see many rubble walls on the sides of hills and in valleys where the land slopes down and, consequently, the soil is in greater danger of being carried away. Rubble in Britain In the British Isles, many mediaeval and post-mediaeval buildings are built of small natural stones, called rubble. As examples see the descriptions in two official list entries provided by Historic England: No. 1191625 – Parish Church of the Holy Trinity, Cuckfield No. 1139238 – Church of St Mary, Longnewton, 1856/57 See also Core-and-veneer Ruin Rubble trench foundation References External links Example of a coursed rubble wall in Malta Building materials Building stone Natural materials Building engineering Stone (material)
Rubble
[ "Physics", "Engineering" ]
617
[ "Natural materials", "Building engineering", "Architecture", "Construction", "Materials", "Civil engineering", "Matter", "Building materials" ]
3,090,397
https://en.wikipedia.org/wiki/Radio%20over%20IP
Radio over Internet Protocol, or RoIP, is similar to Voice over IP (VoIP), but augments two-way radio communications rather than telephone calls. From the system point of view, it is essentially VoIP with push-to-talk. To the user it can be implemented like any other radio network. With RoIP, at least one node of a network is a radio (or a radio with an IP interface device) connected via IP to other nodes in the radio network. The other nodes can be two-way radios, but could also be dispatch consoles, either traditional (hardware) or modern (software on a PC), POTS telephones, softphone applications running on a computer (such as a Skype phone), PDAs, smartphones, or some other communications device accessible over IP. RoIP can be deployed over private networks as well as the public Internet. It is useful in land mobile radio systems used by public safety departments and fleets of utilities spread over a broad geographic area. Like other centralized radio systems such as trunked radio systems, issues of delay or latency and reliance on centralized infrastructure can be impediments to adoption by public safety agencies. RoIP is not a proprietary or protocol-limited construct but a basic concept that has been implemented in a number of ways. Several systems have been implemented in the amateur radio community, such as Galaxy PTT Comms, AllStar Link, BroadNet, IRLP, and EchoLink, that have demonstrated the utility of RoIP in a partly or entirely open-source environment. Many commercial radio systems vendors, such as Persistent Systems, LLC, Motorola and Harris, have adopted RoIP as part of their system designs. The motivation to deploy RoIP technology is usually driven by one of three factors: first, the need to span large geographic areas or operate in areas without sufficient coverage from radio towers; second, the desire to provide more reliable, or at least more repairable, links in radio systems; and third, the need to support many base station users, that is, voice communications from stationary users rather than mobile or handheld radios. Large geographic areas may be served more economically and reliably when spanned by IP technology, owing to the constantly decreasing cost and increasing functionality of evolving packet-switched network equipment and software (a track followed by Moore's law). Traditionally, distant radio users have been linked via dedicated microwave equipment and/or leased telephone lines. Generally, the cost of operating a radio network is decreased by the adoption of IP technology, replacing the traditional microwave and leased telephone lines. Economical and reliable distant radio links such as those needed by state troopers, energy utilities, and medevac helicopters are well served by RoIP technology (see Air Evac Lifeteam for an example of a 14-state radio system). U.S. military units are using RoIP to protect convoys spread out across large geographic areas. The conversion to RoIP also drives the adoption of a network approach rather than the hub-and-spoke architecture that is typical of the point-to-point links inherent in the legacy microwave and leased-line technologies. Hub-and-spoke architectures are inherently fragile, while the network approach developed at the foundation of the public Internet by DARPA is generally more reliable, more adaptable, and faster to repair and restore in a wide-area disaster such as Hurricane Katrina. 
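Because RoIP is a concept rather than a single wire format, implementations differ widely. The sketch below (Python, standard library only) is a minimal hypothetical illustration of the core idea, a VoIP-style audio datagram carrying a push-to-talk flag; the frame layout, field names, and port number are invented for illustration and do not correspond to any vendor's or standard's format:

import socket
import struct

# Hypothetical RoIP frame: talker id, sequence number, push-to-talk flag,
# followed by an opaque audio payload (e.g. one codec frame).
HEADER = struct.Struct("!IIB")   # talker_id, seq, ptt_active
PORT = 50000                     # arbitrary example port

def make_frame(talker_id, seq, ptt_active, audio=b""):
    """Pack one frame; releasing PTT is signalled by ptt_active = False."""
    return HEADER.pack(talker_id, seq, 1 if ptt_active else 0) + audio

def parse_frame(datagram):
    """Unpack a received frame into its fields and payload."""
    talker_id, seq, ptt = HEADER.unpack_from(datagram)
    return talker_id, seq, bool(ptt), datagram[HEADER.size:]

# Sender side: key up, stream audio frames, then key down.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dest = ("127.0.0.1", PORT)
sock.sendto(make_frame(42, 0, True, b"\x00" * 160), dest)  # keyed, audio
sock.sendto(make_frame(42, 1, False), dest)                # unkeyed

A real system would add jitter buffering, authentication, and the confirmed signaling that RCoIP-style designs require for critical traffic.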
The use of LMR (land mobile radio) equipment in both mobile and handheld forms can be problematic for desk-bound users such as dispatchers, supervisors, and other users in large public safety agencies and energy utilities, because such radios do not coexist well with computers (e.g., interference). Also, Emergency Operations Centers (EOCs) are typically staffed with representatives from many different public safety agencies and other local government officials, each with a different radio. Such EOCs are more effectively (and quietly!) equipped when the radios for each of the different constituencies are made available in the center via RoIP at each user's computer, rather than via a handheld radio that may be out of range, difficult to hear, and out of batteries throughout the emergency. Finally, RoIP is by its nature interoperable: once any device, whether radio, telephone, computer, or PDA, is made part of the voice network enabled by IP, it is irrelevant what type of technology it utilizes. RoIP systems routinely combine VHF, UHF, POTS telephone, cellular telephone, SATCOM, air-to-ground, and other technologies into a single voice conversation. This makes RoIP especially valuable in addressing the much-documented problems with communications interoperability. In order to minimize the growth of Radio over IP technologies that are incompatible with each other, the U.S. Department of Homeland Security and the National Institute of Standards and Technology are sponsoring BSI for RoIP, a draft standard for enabling different Radio over IP technologies to interoperate. Radio Control over IP (RCoIP) provides the essential signaling and management for voice messages required for critical communications and is a step up from Radio over IP (RoIP). RCoIP is designed so that essential messages get through by using confirmed signaling. Implementations One implementation is a client–server software program designed by amateur radio enthusiasts for linking amateur radio frequency gateways and repeaters via the internet using a Voice over IP protocol. It is developed for licence-free radios such as Citizens Band, PMR446 and Family Radio Service. See also Bridging Systems Interface - a standard protocol from DHS OIC's SAFECOM program Cubic | Vocality - for Radio over IP gateway devices D-STAR EchoLink HamSphere Internet Radio Linking Project (IRLP) Midland Radio National Interop PLRI RIPRNet Wide-coverage Internet Repeater Enhancement System (WIRES) Audio Aggregator 25747 References Internet protocols Public safety communications Radio communications Interoperable communications Network appliances Radio hobbies Amateur radio software for Windows Amateur radio software for Linux
Radio over IP
[ "Engineering" ]
1,197
[ "Telecommunications engineering", "Radio communications" ]
3,090,561
https://en.wikipedia.org/wiki/Tetracene
Tetracene, also called naphthacene, is a polycyclic aromatic hydrocarbon. It has the appearance of a pale orange powder. Tetracene is the four-ringed member of the series of acenes. Tetracene is a molecular organic semiconductor, used in organic field-effect transistors (OFETs) and organic light-emitting diodes (OLEDs). Tetracene can be used as a gain medium in dye lasers and as a sensitiser in chemiluminescence. The naphthacene ring system is the core of the tetracycline class of antibiotics. History and synthesis In 1884, W. Roser attempted to synthesize a compound called "Aethindiphtalyls" (literally "ethyne diphthalyl") by heating 3 parts of phthalic anhydride, 3 parts of succinic acid and one part of sodium acetate according to Siegmund Gabriel's procedure. He found that a brick-red byproduct was produced in large amounts in the reaction; it was called "Isoäthindiphtalid" (literally "isoethyne diphthalide") and found to be an isomer of "Aethindiphtalyls". In 1898, Gabriel and Ernst Leupold studied the byproduct and confirmed it was a new class of compound containing four rings. In the same document, Gabriel and Leupold reported their synthesis of tetracene by condensing two moles of phthalic anhydride with one mole of succinic acid into a quinone, which was then reduced with zinc dust. They named it naphthacene, likely as a portmanteau of naphthalene and anthracene. Modern nomenclature for polyacenes, including tetracene, was introduced by Erich Clar in 1939. Clar also developed a new route to tetracene, a Friedel–Crafts acylation between phthalic anhydride and tetralin catalyzed by AlCl3, ZnCl2 and NaCl and involving a Clemmensen reduction, which forms 5,12-dihydrotetracene that is then dehydrogenated by chloranil to give tetracene. German physicist Jan Hendrik Schön claimed to have developed an electrically pumped laser based on tetracene during his time at Bell Labs (1997–2002). However, his results could not be reproduced, and this is considered to be a case of scientific fraud. In May 2007, Japanese researchers from Tohoku University and Osaka University reported an ambipolar light-emitting transistor made of a single tetracene crystal. Ambipolar means that the electric charge is transported by both positively charged holes and negatively charged electrons. In 2024, tetracene was used to produce lower-energy excitations in solar cells in a process known as singlet fission. An interface layer between tetracene and silicon transfers these excitations into the silicon layer, where most of their energy can be converted into electricity. See also Tetraphene, also known as benz[a]anthracene Doxycycline Notes Daniel Oberhaus, New Designs Could Boost Solar Cells Beyond Their Limits, Wired, July 11th 2019 References Polycyclic aromatic hydrocarbons Organic semiconductors Laser gain media Tetracyclic compounds Acenes
Tetracene
[ "Chemistry" ]
710
[ "Semiconductor materials", "Molecular electronics", "Organic semiconductors" ]
3,090,626
https://en.wikipedia.org/wiki/Index%20to%20Marine%20%26%20Lacustrine%20Geological%20Samples
The Index to Marine & Lacustrine Geological Samples is a collaboration between multiple institutions and agencies that operate geological sample repositories. The purpose of the database is to help researchers locate sea floor and lakebed cores, grabs, dredges, and drill samples in their collections. Sample material is available from participating institutions unless noted as unavailable. Data include basic collection and storage information. Lithology, texture, age, principal investigator, province, weathering/metamorphism, glass remarks, and descriptive comments are included for some samples. Links are provided to related data and information at the institutions and at NCEI. Data are coded by individual institutions, several of which receive funding from the US National Science Foundation. For more information see the NSF Division of Ocean Sciences Data and Sample Policy. The Index is endorsed by the Intergovernmental Oceanographic Commission, Committee on International Oceanographic Data and Information Exchange (IODE-XIV.2). The index is maintained by the National Centers for Environmental Information (NCEI), formerly the National Geophysical Data Center (NGDC), and collocated World Data Center for Geophysics, Boulder, Colorado. NCEI is part of the National Environmental Satellite, Data and Information Service of the National Oceanic & Atmospheric Administration, U. S. Department of Commerce. Searches and data downloads are available via a JSP and an ArcIMS interface. Data selections can be downloaded in tab-delimited or shapefile form, depending on the interface used. Both WMS and WFS interfaces are also available. The Index was created in 1977 in response to a meeting of Curators of Marine Geological Samples, sponsored by the U.S. National Science Foundation. The Curators' group continues to meet every 2–3 years. Dataset Digital Object Identifier DOI:10.7289/V5H41PB8 Web site The Index to Marine and Lacustrine Geological Samples Participating Institutions Antarctic Research Facility, Florida State University Geological Survey of Canada, Atlantic BPCRC Polar Rock Repository, Ohio State University BPCRC Sediment Repository, Ohio State University Lamont–Doherty Earth Observatory, Columbia University National Lacustrine Core Repository, University of Minnesota Ocean Drilling Program/Deep Sea Drilling Project Oregon State University, College of Ocean and Atmospheric Sciences Scripps Institution of Oceanography University of Rhode Island, Graduate School of Oceanography USGS West Coast Repository USGS East Coast Repository Woods Hole Oceanographic Institution Complete list of Participants References https://www.re3data.org/repository/r3d100011045 Moore, C.J. and R.E. Habermann, 2006, Core data stewardship: A long-term perspective. In, Rothwell, Guy, ed., New Techniques in Sediment Core Analysis, Geological Society of London Special Publication 267, pp. 241–251 (DOI: 10.1144/GSL.SP.2006.267.01.18). Mix, A., Conard, B., Broda, J., Carey, S., Firth, J., Janecek, T., Lotti-Bond, R., Moore, C., Norris, R., and D. Schnurrenberger. Curators of Sea Floor and Lakebed Samples Celebrate 25 Years of Service. EOS, Vol., 84, No. 20, 20 May 2003. (DOI: 10.1029/2003EO200005) Moore, C. J., Curators of Marine Geological Samples Gather at NGDC; Twenty years of cooperation results in worldwide access to global ocean floor samples, Earth System Monitor, Vol. 6, No. 4, June, 1996. Potter, C.J., 1979, Marine Geological Data and the Core Curators' File, EDIS Magazine. Geology literature Oceanography Marine geology
Index to Marine & Lacustrine Geological Samples
[ "Physics", "Environmental_science" ]
786
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
3,090,820
https://en.wikipedia.org/wiki/Pad%C3%A9%20approximant
In mathematics, a Padé approximant is the "best" approximation of a function near a specific point by a rational function of given order. Under this technique, the approximant's power series agrees with the power series of the function it is approximating. The technique was developed around 1890 by Henri Padé, but goes back to Georg Frobenius, who introduced the idea and investigated the features of rational approximations of power series. The Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. For these reasons Padé approximants are used extensively in computer calculations. They have also been used as auxiliary functions in Diophantine approximation and transcendental number theory, though for sharp results ad hoc methods, in some sense inspired by the Padé theory, typically replace them. Since a Padé approximant is a rational function, an artificial singular point may occur as an approximation, but this can be avoided by Borel–Padé analysis. The reason the Padé approximant tends to be a better approximation than a truncated Taylor series is clear from the viewpoint of the multi-point summation method. Since there are many cases in which the asymptotic expansion at infinity becomes 0 or a constant, it can be interpreted as the "incomplete two-point Padé approximation", in which the ordinary Padé approximation improves on the method of truncating a Taylor series. Definition Given a function f and two integers m ≥ 0 and n ≥ 1, the Padé approximant of order [m/n] is the rational function R(x) = (a0 + a1 x + a2 x^2 + ... + am x^m) / (1 + b1 x + b2 x^2 + ... + bn x^n), which agrees with f(x) to the highest possible order, which amounts to f(0) = R(0), f'(0) = R'(0), ..., f^(m+n)(0) = R^(m+n)(0). Equivalently, if R(x) is expanded in a Maclaurin series (Taylor series at 0), its first m + n + 1 terms would equal the first m + n + 1 terms of f(x), and thus f(x) − R(x) = O(x^(m+n+1)). When it exists, the Padé approximant is unique as a formal power series for the given m and n. The Padé approximant defined above is also denoted as [m/n]_f(x). Computation For given x, Padé approximants can be computed by Wynn's epsilon algorithm and also by other sequence transformations from the partial sums T_N(x) = c0 + c1 x + c2 x^2 + ... + cN x^N of the Taylor series of f, where ck = f^(k)(0)/k!. f can also be a formal power series, and, hence, Padé approximants can also be applied to the summation of divergent series. One way to compute a Padé approximant is via the extended Euclidean algorithm for the polynomial greatest common divisor. The relation R(x) = p(x)/q(x) = [m/n]_f(x) is equivalent to the existence of some factor K(x) such that p(x) = q(x) T_(m+n)(x) + K(x) x^(m+n+1), which can be interpreted as the Bézout identity of one step in the computation of the extended greatest common divisor of the polynomials T_(m+n)(x) and x^(m+n+1). Recall that, to compute the greatest common divisor of two polynomials p and q, one computes via long division the remainder sequence r0 = p, r1 = q, r_(k+1) = r_(k−1) mod r_k, until r_(k+1) = 0. For the Bézout identities of the extended greatest common divisor one computes simultaneously the two polynomial sequences u_k and v_k, starting from u0 = 1, v0 = 0 and u1 = 0, v1 = 1, to obtain in each step the Bézout identity r_k(x) = u_k(x) p(x) + v_k(x) q(x). For the [m/n] approximant, one thus carries out the extended Euclidean algorithm for gcd(x^(m+n+1), T_(m+n)(x)) and stops it at the last instant that the remainder has degree m or smaller. Then the polynomials p = r_k and q = v_k give the Padé approximant. If one were to compute all steps of the extended greatest common divisor computation, one would obtain an anti-diagonal of the Padé table. Riemann–Padé zeta function To study the resummation of a divergent series, say the sum of f(z) over z = 1, 2, 3, ..., it can be useful to introduce the Padé or simply rational zeta function as ζ_R(s), the sum of R(z)/z^s over z = 1, 2, 3, ..., where R(x) = [m/n]_f(x) is the Padé approximation of order (m, n) of the function f(x). The zeta regularization value at s = 0 is taken to be the sum of the divergent series. 
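As a concrete illustration of the Computation section above: equivalently to the matching condition in the Definition, the denominator coefficients of an [m/n] approximant can be found by solving a small linear system in the Taylor coefficients ck, after which the numerator is read off by truncated multiplication. The following is a minimal sketch in pure Python with exact rational arithmetic; the helper names are my own, not a library's:

from fractions import Fraction

def _solve(A, b):
    """Gaussian elimination over the rationals (assumes a unique solution)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(c, m, n):
    """[m/n] Pade approximant from Maclaurin coefficients c[0..m+n].
    Returns (p, q), coefficient lists in increasing powers, with q[0] = 1."""
    c = [Fraction(x) for x in c]
    get = lambda i: c[i] if 0 <= i < len(c) else Fraction(0)
    # Denominator: sum over j of c[m+k-j]*q[j] = 0 for k = 1..n, q[0] = 1.
    A = [[get(m + k - j) for j in range(1, n + 1)] for k in range(1, n + 1)]
    b = [-get(m + k) for k in range(1, n + 1)]
    q = [Fraction(1)] + _solve(A, b)
    # Numerator: p[i] = sum over j of c[i-j]*q[j], truncated at degree m.
    p = [sum(c[i - j] * q[j] for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return p, q

# exp(x): c = 1, 1, 1/2, 1/6, 1/24 yields the classical [2/2] approximant
# (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12).
print(pade([Fraction(1), Fraction(1), Fraction(1, 2),
            Fraction(1, 6), Fraction(1, 24)], 2, 2))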
The functional equation for this Padé zeta function relates its values to the Riemann zeta function through the numerator and denominator coefficients of the Padé approximation. When the Padé is of order [0/0], the rational zeta function reduces to the Riemann zeta function. DLog Padé method Padé approximants can be used to extract critical points and exponents of functions. In thermodynamics, if a function f(x) behaves in a non-analytic way near a point x = r, like f(x) ~ |x − r|^(−γ), one calls x = r a critical point and γ the associated critical exponent of f. If sufficient terms of the series expansion of f are known, one can approximately extract the critical points and the critical exponents from, respectively, the poles and residues of the Padé approximants of the logarithmic derivative g = f'/f. Generalizations A Padé approximant approximates a function in one variable. An approximant in two variables is called a Chisholm approximant (after J. S. R. Chisholm), in multiple variables a Canterbury approximant (after Graves-Morris at the University of Kent). Two-point Padé approximant The conventional Padé approximation is determined to reproduce the Maclaurin expansion up to a given order. Therefore, the approximation at values far from the expansion point may be poor. This is avoided by the 2-point Padé approximation, which is a type of multipoint summation method. At x = 0, consider a case in which a function f(x) is expressed by an asymptotic behavior f0(x), and at x = ∞ by an additional asymptotic behavior f∞(x). By selecting the major behaviors of f0(x) and f∞(x), approximate functions that simultaneously reproduce both asymptotic behaviors can be found by developing the Padé approximation in various cases. As a result, at the point x = ∞, where the accuracy of the approximation may be the worst for the ordinary Padé approximation, good accuracy of the 2-point Padé approximant is guaranteed. Therefore, the 2-point Padé approximant can be a method that gives a good approximation globally from x = 0 to x = ∞. In cases where the asymptotic behaviors are expressed by polynomials or series of negative powers, exponential functions, logarithmic functions or x log x, we can apply the 2-point Padé approximant to f(x). There is a method of using this to give an approximate solution of a differential equation with high accuracy. Also, for the nontrivial zeros of the Riemann zeta function, the first nontrivial zero can be estimated with some accuracy from the asymptotic behavior on the real axis. Multi-point Padé approximant A further extension of the 2-point Padé approximant is the multi-point Padé approximant. This method treats the singularity points of a function which is to be approximated. Consider the case in which the singularities of a function are expressed, with index n_j, by f(x) ~ A_j / (x − x_j)^(n_j) as x → x_j (j = 1, 2, ..., N). Besides the 2-point Padé approximant, which includes information at x = 0 and x = ∞, this method approximates f so as to reproduce the property of diverging at each x_j. As a result, since the information on the peculiarities of the function is captured, the approximation of f(x) can be performed with higher accuracy. Examples Jacobi Bessel Fresnel See also References Literature Baker, G. A., Jr.; and Graves-Morris, P. Padé Approximants. Cambridge U.P., 1996. Baker, G. A., Jr. Padé approximant, Scholarpedia, 7(6):9756. Brezinski, C.; Redivo Zaglia, M. Extrapolation Methods. Theory and Practice. North-Holland, 1991. Frobenius, G.; Ueber Relationen zwischen den Näherungsbrüchen von Potenzreihen, Journal für die reine und angewandte Mathematik (Crelle's Journal), Volume 1881, Issue 90, pp. 1–17. Gragg, W. B.; The Padé Table and Its Relation to Certain Algorithms of Numerical Analysis, SIAM Review, Vol. 14, No. 1, 1972, pp. 1–62. Padé, H.; Sur la représentation approchée d'une fonction par des fractions rationnelles, Thesis, Ann. École Nor. (3), 9, 1892, pp. 1–93, supplement. 
External links Padé Approximants, Oleksandr Pavlyk, The Wolfram Demonstrations Project. Data Analysis BriefBook: Pade Approximation, Rudolf K. Bock European Laboratory for Particle Physics, CERN. Sinewave, Scott Dattalo, last accessed 2010-11-11. MATLAB function for Padé approximation of models with time delays. Sequences and series Numerical analysis Rational functions
Padé approximant
[ "Mathematics" ]
1,665
[ "Sequences and series", "Mathematical analysis", "Mathematical structures", "Computational mathematics", "Mathematical objects", "Mathematical relations", "Numerical analysis", "Approximations" ]
3,090,886
https://en.wikipedia.org/wiki/Proofs%20of%20quadratic%20reciprocity
In number theory, the law of quadratic reciprocity, like the Pythagorean theorem, has lent itself to an unusually large number of proofs. Several hundred proofs of the law of quadratic reciprocity have been published. Proof synopsis Of the elementary combinatorial proofs, there are two which apply types of double counting. One by Gotthold Eisenstein counts lattice points. Another applies Zolotarev's lemma to the multiplicative group modulo pq, expressed by the Chinese remainder theorem as the product of the multiplicative groups modulo p and modulo q, and calculates the signature of a permutation. The shortest known proof also uses a simplified version of double counting, namely double counting modulo a fixed prime. Eisenstein's proof Eisenstein's proof of quadratic reciprocity is a simplification of Gauss's third proof. It is more geometrically intuitive and requires less technical manipulation. The point of departure is "Eisenstein's lemma", which states that for an odd prime p and a positive integer a not divisible by p, (a/p) = (−1)^(sum over u of floor(au/p)), where floor(x) denotes the floor function (the largest integer less than or equal to x), and where the sum is taken over the even integers u = 2, 4, 6, ..., p−1. For example, (7/11) = (−1)^(floor(14/11) + floor(28/11) + floor(42/11) + floor(56/11) + floor(70/11)) = (−1)^(1+2+3+5+6) = (−1)^17 = −1. This result is very similar to Gauss's lemma, and can be proved in a similar fashion (proof given below). Using this representation of (q/p), the main argument is quite elegant. The sum counts the number of lattice points with even x-coordinate in the interior of the triangle ABC, where (in place of the diagram of the original proof) one may take A = (0, 0), B = (p, 0), C = (p, q), with auxiliary points X = (p/2, 0), Y = (p/2, q/2), Z = (p/2, q) and W = (0, q/2) subdividing the figure. Because each column has an even number of points (namely q−1 points), the number of such lattice points in the region BCYX is the same modulo 2 as the number of such points in the region CZY. Then, by flipping the figure in both axes, we see that the number of points with even x-coordinate inside CZY is the same as the number of points inside AXY having odd x-coordinates. This can be justified mathematically by noting that floor(q(p−u)/p) = q − 1 − floor(qu/p). The conclusion is that (q/p) = (−1)^μ, where μ is the total number of lattice points in the interior of AXY. Switching p and q, the same argument shows that (p/q) = (−1)^ν, where ν is the number of lattice points in the interior of WYA. Since there are no lattice points on the line AY itself (because p and q are relatively prime), and since the total number of points in the rectangle WYXA is ((p−1)/2)((q−1)/2), we obtain (q/p)(p/q) = (−1)^(μ+ν) = (−1)^(((p−1)/2)((q−1)/2)). Proof of Eisenstein's lemma For an even integer u in the range 1 ≤ u ≤ p−1, denote by r(u) the least positive residue of au modulo p. (For example, for p = 11, a = 7, we allow u = 2, 4, 6, 8, 10, and the corresponding values of r(u) are 3, 6, 9, 1, 4.) The numbers (−1)^r(u) r(u), again treated as least positive residues modulo p, are all even (in our running example, they are 8, 6, 2, 10, 4). Furthermore, they are all distinct, because if (−1)^r(u) r(u) ≡ (−1)^r(t) r(t) (mod p), then we may divide out by a to obtain u ≡ ±t (mod p). This forces u ≡ t (mod p), because both u and t are even, whereas p is odd. Since there are exactly (p−1)/2 of them and they are distinct, they must be simply a rearrangement of the even integers 2, 4, ..., p−1. Multiplying them together, we obtain (−1)^(r(2) + r(4) + ... + r(p−1)) a^((p−1)/2) · 2 · 4 ··· (p−1) ≡ 2 · 4 ··· (p−1) (mod p). Dividing out successively by 2, 4, ..., p−1 on both sides (which is permissible since none of them are divisible by p) and rearranging, we have a^((p−1)/2) ≡ (−1)^(r(2) + r(4) + ... + r(p−1)) (mod p). On the other hand, by the definition of r(u) and the floor function we have au = p·floor(au/p) + r(u), and since p is odd and u is even, this implies that floor(au/p) and r(u) are congruent modulo 2. Finally this shows that a^((p−1)/2) ≡ (−1)^(sum over u of floor(au/p)) (mod p). We are finished because the left hand side is just an alternative expression for (a/p), per Euler's criterion. 
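Both Eisenstein's lemma and the reciprocity formula it yields are easy to check numerically. The small Python sketch below (the helper names are mine; standard library only) compares the lattice-point count against Euler's criterion and verifies (p/q)(q/p) = (−1)^(((p−1)/2)((q−1)/2)) for small odd primes:

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def eisenstein(q, p):
    """(q/p) from Eisenstein's lemma: parity of sum of floor(qu/p), u even."""
    total = sum((q * u) // p for u in range(2, p, 2))
    return (-1) ** (total % 2)

primes = [3, 5, 7, 11, 13, 17, 19]
for p in primes:
    for q in primes:
        if p == q:
            continue
        assert eisenstein(q, p) == legendre(q, p)        # the lemma
        sign = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
        assert legendre(p, q) * legendre(q, p) == sign   # reciprocity
print("Eisenstein's lemma and quadratic reciprocity verified")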
Addendum to the lemma This lemma essentially states that the number of least residues after doubling that are odd gives the value of (q/p). This follows easily from Gauss's lemma. Also, au = p·floor(au/p) + r(u) implies that floor(au/p) and r(u) are either congruent modulo 2 or incongruent, depending solely on the parity of u. In the running example q = 7, p = 11, the residues r(u) are 3, 6, 9, 1, 4 and the corresponding floor values are 1, 2, 3, 5, 6; since every u there is even, each pair is congruent modulo 2. Proof using quadratic Gauss sums The proof of quadratic reciprocity using Gauss sums is one of the more common and classic proofs. These proofs work by comparing computations of single values in two different ways, one using Euler's criterion and the other using the binomial theorem. As an example of how Euler's criterion is used, we can use it to give a quick proof of the first supplemental case, determining (−1/p) for an odd prime p: by Euler's criterion (−1/p) ≡ (−1)^((p−1)/2) (mod p), but since both sides of the equivalence are ±1 and p is odd, we can deduce that (−1/p) = (−1)^((p−1)/2). The second supplemental case Let ζ = e^(2πi/8), a primitive 8th root of unity, and set τ = ζ + ζ^(−1). Since ζ^2 = i and ζ^(−2) = −i, we see that τ^2 = 2. Because τ is an algebraic integer, if p is an odd prime it makes sense to talk about it modulo p. (Formally we are considering the commutative ring formed by factoring the ring of algebraic integers by the ideal generated by p. Because 1/p is not an algebraic integer, 1, 2, ..., p are distinct elements of this quotient.) Using Euler's criterion, it follows that τ^(p−1) = (τ^2)^((p−1)/2) = 2^((p−1)/2) ≡ (2/p) (mod p). We can then say that τ^p ≡ (2/p) τ (mod p). But we can also compute τ^p using the binomial theorem. Because the cross terms in the binomial expansion all contain factors of p, we find that τ^p ≡ ζ^p + ζ^(−p) (mod p). We can evaluate this more exactly by breaking it up into two cases: if p ≡ ±1 (mod 8), then ζ^p + ζ^(−p) = ζ + ζ^(−1) = τ; if p ≡ ±3 (mod 8), then ζ^p + ζ^(−p) = ζ^3 + ζ^(−3) = −τ. These are the only options for a prime modulo 8, and both of these cases can be computed using the exponential form ζ = e^(2πi/8). We can write this succinctly for all odd primes p as τ^p ≡ (−1)^((p^2−1)/8) τ (mod p). Combining these two expressions for τ^p and multiplying through by τ, we find that 2·(2/p) ≡ 2·(−1)^((p^2−1)/8) (mod p). Since both (2/p) and (−1)^((p^2−1)/8) are ±1 and 2 is invertible modulo p, we can conclude that (2/p) = (−1)^((p^2−1)/8). The general case The idea for the general proof follows the above supplemental case: find an algebraic integer that somehow encodes the Legendre symbols for p, then find a relationship between Legendre symbols by computing the qth power of this algebraic integer modulo q in two different ways, one using Euler's criterion, the other using the binomial theorem. Let g be the sum over t = 1, ..., p−1 of (t/p) ζ^t, where ζ = e^(2πi/p) is a primitive pth root of unity. This is a quadratic Gauss sum. A fundamental property of these Gauss sums is that g^2 = p*, where p* = (−1)^((p−1)/2) p. To put this in the context of the next proof, the individual elements of the Gauss sum are in the cyclotomic field L = Q(ζ), but the above formula shows that the sum itself is a generator of the unique quadratic field contained in L. Again, since the quadratic Gauss sum is an algebraic integer, we can use modular arithmetic with it. Using this fundamental formula and Euler's criterion we find that g^(q−1) = (g^2)^((q−1)/2) = (p*)^((q−1)/2) ≡ (p*/q) (mod q). Therefore g^q ≡ (p*/q) g (mod q). Using the binomial theorem, we also find that g^q ≡ the sum over t of (t/p) ζ^(qt) (mod q). If we let a be a multiplicative inverse of q modulo p, then we can rewrite this sum as the sum over s of (as/p) ζ^s, using the substitution s = qt, which doesn't affect the range of the sum. Since (as/p) = (a/p)(s/p) and (a/p) = (q/p), we can then write g^q ≡ (q/p) g (mod q). Using these two expressions for g^q, and multiplying through by g, gives (q/p) p* ≡ (p*/q) p* (mod q). Since p* is invertible modulo q, and the Legendre symbols are either ±1, we can then conclude that (q/p) = (p*/q). Proof using algebraic number theory The proof presented here is by no means the simplest known; however, it is quite a deep one, in the sense that it motivates some of the ideas of Artin reciprocity. Cyclotomic field setup Suppose that p is an odd prime. 
The action takes place inside the cyclotomic field L = Q(ζ_p), where ζ_p is a primitive pth root of unity. The basic theory of cyclotomic fields informs us that there is a canonical isomorphism between G = Gal(L/Q) and the multiplicative group modulo p, which sends the automorphism σ_a satisfying σ_a(ζ_p) = ζ_p^a to the residue class of a. In particular, this isomorphism is injective because the multiplicative group of a field is a cyclic group. Now consider the subgroup H of squares of elements of G. Since G is cyclic, H has index 2 in G, so the subfield corresponding to H under the Galois correspondence must be a quadratic extension of Q. (In fact it is the unique quadratic extension of Q contained in L.) The Gaussian period theory determines which one; it turns out to be Q(√p*), where p* = (−1)^((p−1)/2) p. At this point we start to see a hint of quadratic reciprocity emerging from our framework. On one hand, the image of H in the multiplicative group modulo p consists precisely of the (nonzero) quadratic residues modulo p. On the other hand, H is related to an attempt to take the square root of p (or possibly of −p). In other words, if now q is a prime (different from p), we have shown that (q/p) = 1 if and only if σ_q fixes the field Q(√p*). The Frobenius automorphism In the ring of integers O_L = Z[ζ_p], choose any unramified prime ideal β lying over q, and let φ be the Frobenius automorphism associated to β; the characteristic property of φ is that φ(x) ≡ x^q (mod β) for any x in O_L. (The existence of such a Frobenius element depends on quite a bit of algebraic number theory machinery.) The key fact about φ that we need is that for any subfield K of L, φ fixes K if and only if q splits completely in K. Indeed, let δ be any ideal of O_K below β (and hence above q). Then, since φ(x) ≡ x^q (mod δ) for any x in O_K, we see that the restriction of φ to K is a Frobenius for δ. A standard result concerning φ is that its order is equal to the corresponding inertial degree; that is, the order of the restriction of φ to K equals the residue degree of δ over q. The left hand side is equal to 1 if and only if φ fixes K, and the right hand side is equal to one if and only if q splits completely in K, so we are done. Now, since the pth roots of unity are distinct modulo β (i.e. the polynomial X^p − 1 is separable in characteristic q), we must have φ(ζ_p) = ζ_p^q; that is, φ coincides with the automorphism σ_q defined earlier. Taking K to be the quadratic field in which we are interested, we obtain the equivalence (q/p) = 1 if and only if q splits completely in Q(√p*). Completing the proof Finally we must show that q splits completely in Q(√p*) if and only if (p*/q) = 1. Once we have done this, the law of quadratic reciprocity falls out immediately, since (q/p) = (p*/q) and (p*/q) = (−1)^(((p−1)/2)((q−1)/2)) (p/q) for odd primes p and q. To show the last equivalence, suppose first that (p*/q) = 1. In this case, there is some integer x (not divisible by q) such that x^2 ≡ p* (mod q), say x^2 − p* = qc for some integer c. Let K = Q(√p*), and consider the ideal δ = (x − √p*, q) of K. It certainly divides the principal ideal (q). It cannot be equal to (q), since x − √p* is not divisible by q. It cannot be the unit ideal, because then x + √p* would be divisible by q (multiply the relation expressing 1 as an element of δ by x + √p* and use x^2 − p* = qc), which is again impossible. Therefore (q) must split in K. Conversely, suppose that (q) splits, and let β be a prime of K above q. Then β is strictly smaller than (q), so we may choose some element α = a + b√p* in β but not in (q), with a and b rational. Actually, the elementary theory of quadratic fields implies that the ring of integers of K is precisely Z[(1 + √p*)/2], so the denominators of a and b are at worst equal to 2. Since q ≠ 2, we may safely multiply a and b by 2, and assume that α = a + b√p*, where now a and b are in Z. In this case we have (a + b√p*)(a − b√p*) = a^2 − b^2 p*, an ordinary integer lying in β, so q divides a^2 − b^2 p*. However, q cannot divide b, since then also q divides a, which contradicts our choice of α. Therefore, we may divide by b modulo q, to obtain p* ≡ (a/b)^2 (mod q), as desired. References Every textbook on elementary number theory (and quite a few on algebraic number theory) has a proof of quadratic reciprocity. Two are especially noteworthy. The first has many proofs (some in exercises) of both quadratic and higher-power reciprocity laws and a discussion of their history; its immense bibliography includes literature citations for 196 different published proofs. 
The second also has many proofs of quadratic reciprocity (and many exercises), and covers the cubic and biquadratic cases as well. Exercise 13.26 (p. 202) says it all: "Count the number of proofs to the law of quadratic reciprocity given thus far in this book and devise another one." External links F. Lemmermeyer's chronology and bibliography of proofs of the Quadratic Reciprocity Law (332 proofs) Algebraic number theory Article proofs
Proofs of quadratic reciprocity
[ "Mathematics" ]
2,628
[ "Article proofs", "Algebraic number theory", "Number theory" ]
3,090,998
https://en.wikipedia.org/wiki/Glan%E2%80%93Taylor%20prism
A Glan–Taylor prism is a type of prism which is used as a polarizer or polarizing beam splitter. It is one of the most common types of modern polarizing prism. It was first described by Archard and Taylor in 1948. The prism is made of two right-angled prisms of calcite (or sometimes other birefringent materials) separated on their long faces by an air gap. The optical axes of the calcite crystals are aligned parallel to the plane of reflection. Total internal reflection of s-polarized light at the air gap ensures that only p-polarized light is transmitted by the device. Because the angle of incidence at the gap can be reasonably close to Brewster's angle, unwanted reflection of p-polarized light is reduced, giving the Glan–Taylor prism better transmission than the Glan–Foucault design. Note that while the transmitted beam is completely polarized, the reflected beam is not. The sides of the crystal can be polished to allow the reflected beam to exit or can be blackened to absorb it. The latter reduces unwanted Fresnel reflection of the rejected beam. A variant of the design exists called a Glan–laser prism. This is a Glan–Taylor prism with a steeper angle for the cut in the prism, which decreases reflection loss at the expense of a reduced angular field of view. These polarizers are also typically designed to tolerate very high beam intensities, such as those produced by a laser. The differences may include using calcite selected for low scattering loss, improved polish quality on the faces and especially on the sides of the crystal, and better antireflection coatings. Prisms with irradiance damage thresholds greater than 1 GW/cm2 are commercially available. See also Glan–Foucault prism Glan–Thompson prism References Polarization (waves) Prisms (optics)
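The design window implied by the description above can be made quantitative. Taking textbook principal indices for calcite near 589 nm (n_o ≈ 1.658 for the ordinary, s-polarized ray and n_e ≈ 1.486 for the extraordinary, p-polarized ray; these values are assumptions quoted for illustration, not from this article), a short Python calculation gives the relevant angles at the calcite–air gap:

import math

N_O = 1.658   # ordinary index: the s-ray, to be totally reflected
N_E = 1.486   # extraordinary index: the p-ray, to be transmitted

crit_o = math.degrees(math.asin(1 / N_O))      # ~37.1 deg
crit_e = math.degrees(math.asin(1 / N_E))      # ~42.3 deg
brewster_e = math.degrees(math.atan(1 / N_E))  # ~33.9 deg

print(f"o-ray critical angle: {crit_o:.1f} deg")
print(f"e-ray critical angle: {crit_e:.1f} deg")
print(f"p-ray Brewster angle: {brewster_e:.1f} deg")
# The cut is chosen so that incidence at the gap exceeds the o-ray
# critical angle (s-light totally reflected) while staying below the
# e-ray critical angle, and near Brewster's angle to reduce p-ray loss.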
Glan–Taylor prism
[ "Physics" ]
390
[ "Polarization (waves)", "Astrophysics" ]
3,091,094
https://en.wikipedia.org/wiki/Phantom%20time%20conspiracy%20theory
The phantom time conspiracy theory is a pseudohistorical conspiracy theory first asserted by Heribert Illig in 1991. It hypothesizes a conspiracy by the Holy Roman Emperor Otto III, Pope Sylvester II, and possibly the Byzantine Emperor Constantine VII, to fabricate the Anno Domini dating system retroactively, in order to place them at the special year of AD 1000, and to rewrite history to legitimize Otto's claim to the Holy Roman Empire. Illig believed that this was achieved through the alteration, misrepresentation and forgery of documentary and physical evidence. According to this scenario, the entire Carolingian period, including the figure of Charlemagne, is a fabrication, with a "phantom time" of 297 years (AD 614–911) added to the Early Middle Ages. The evidence contradicts the hypothesis, which has failed to gain the support of historians; the calendars of other European countries, most of Asia and parts of pre-Columbian America are likewise inconsistent with it. Heribert Illig Illig was born in 1947 in Vohenstrauß, Bavaria. He was active in an association dedicated to Immanuel Velikovsky, catastrophism and historical revisionism, the Gesellschaft zur Rekonstruktion der Menschheits- und Naturgeschichte (English: Society for the Reconstruction of Human and Natural History). From 1989 to 1994 he acted as editor of the journal Vorzeit-Frühzeit-Gegenwart (English: Prehistory-Proto-History-Present). Since 1995, he has worked as a publisher and author under his own publishing company, Mantis-Verlag, publishing his own journal, Zeitensprünge (English: Leaps in Time). Outside of his publications related to revised chronology, he has edited the works of Egon Friedell. Before focusing on the early medieval period, Illig published various proposals for revised chronologies of prehistory and of Ancient Egypt. His proposals received prominent coverage in German popular media in the 1990s. His 1996 Das erfundene Mittelalter (English: The Invented Middle Ages) also received scholarly reviews, but was universally rejected as fundamentally flawed by historians. In 1997, the journal Ethik und Sozialwissenschaften (English: Ethics and Social Sciences) offered a platform for critical discussion of Illig's proposal, with a number of historians commenting on its various aspects. After 1997, there has been little scholarly reception of Illig's ideas, although they continued to be discussed as pseudohistory in German popular media. Illig continued to publish on the "phantom time hypothesis" until at least 2013. Also in 2013, he published on the unrelated topic of art history, on German Renaissance master Anton Pilgram, but again proposing revisions to conventional chronology, and arguing for the abolition of the art historical category of Mannerism. Claims Illig's claims include: That there is a scarcity of archaeological evidence that can be reliably dated to the period AD 614–911. That the dating methods used for such recent periods, radiometry and dendrochronology, are inaccurate. That medieval historians rely too much on written sources. That the presence of Romanesque architecture in tenth-century Western Europe suggests that the Roman era was not as long ago as conventionally thought. That at the time of the introduction of the Gregorian calendar in AD 1582, there should have been a discrepancy of thirteen days between the Julian calendar and the real (or tropical) calendar, whereas the astronomers and mathematicians working for Pope Gregory XIII had found that the civil calendar needed to be adjusted by only ten days. 
From this, Illig concludes that the AD era had counted roughly three centuries which never existed. Refutation Observations in ancient astronomy, especially those of solar eclipses cited by European sources prior to AD 600 (before the period the alleged phantom time would have distorted), agree with the usual chronology and not with Illig's. Besides several others that are perhaps too vague to disprove the phantom time hypothesis, some are dated with enough precision to test it. One is reported by Pliny the Elder in AD 59, a date for which there is a confirmed eclipse. In addition, observations during the Tang dynasty in China, and of Halley's Comet, for example, are consistent with current astronomy with no "phantom time" added. Archaeological remains and dating methods such as dendrochronology (tree-ring dating) refute, rather than support, "phantom time". The Gregorian reform was never purported to bring the calendar in line with the Julian calendar as it had existed at the time of its institution in 45 BC, but rather as it had existed in AD 325, the time of the Council of Nicaea, which had established a method for determining the date of Easter Sunday by fixing the vernal equinox on March 21 in the Julian calendar. By 1582, the astronomical equinox was occurring on March 10 in the Julian calendar, but Easter was still being calculated from a nominal equinox on March 21. In 45 BC the astronomical vernal equinox took place around March 23. Illig's "three missing centuries" thus correspond to the 369 years between the institution of the Julian calendar in 45 BC and the fixing of the Easter date at the Council of Nicaea in AD 325. If Charlemagne and the Carolingian dynasty were fabricated, there would have to be a corresponding fabrication of the history of the rest of Europe during the same era, including Anglo-Saxon England, the Papacy, and the Byzantine Empire. The "phantom time" period also encompasses the life of Muhammad and the Islamic expansion into the areas of the former Western Roman Empire, including the conquest of Visigothic Iberia. This history too would have to be forged or drastically misdated. It would also have to be reconciled with the history of the Tang dynasty of China and its contact with the Islamic world, such as at the Battle of Talas. Bibliography Publications by Illig: Egon Friedell und Immanuel Velikovsky. Vom Weltbild zweier Außenseiter, Basel 1985. Die veraltete Vorzeit, Heribert Illig, Eichborn, 1988 with Gunnar Heinsohn: Wann lebten die Pharaonen?, Mantis, 1990, revised 2003 Karl der Fiktive, genannt Karl der Große, 1992 Hat Karl der Große je gelebt? Bauten, Funde und Schriften im Widerstreit, 1994 Hat Karl der Große je gelebt?, Heribert Illig, Mantis, 1996 Das erfundene Mittelalter. Die größte Zeitfälschung der Geschichte, Heribert Illig, Econ 1996, (revised ed. 1998) Das Friedell-Lesebuch, Heribert Illig, C.H. Beck 1998, Heribert Illig, with Franz Löhner: Der Bau der Cheopspyramide, Mantis 1998, Wer hat an der Uhr gedreht?, Heribert Illig, Ullstein 2003, Heribert Illig, with Gerhard Anwander: Bayern in der Phantomzeit. Archäologie widerlegt Urkunden des frühen Mittelalters., Mantis 2002, See also Cultural depictions of Otto III, Holy Roman Emperor Historical negationism The Chronology of Ancient Kingdoms Amended Glasgow Chronology New Chronology (Fomenko) New Chronology (Rohl) Revised chronology of Immanuel Velikovsky Jean Hardouin Historicity of Muhammad Simulation hypothesis References Illig, Heribert: Enthält das frühe Mittelalter erfundene Zeit? 
and subsequent discussion, in: Ethik und Sozialwissenschaften 8 (1997), pp. 481–520. Schieffer, Rudolf: Ein Mittelalter ohne Karl den Großen, oder: Die Antworten sind jetzt einfach, in: Geschichte in Wissenschaft und Unterricht 48 (1997), pp. 611–17. Matthiesen, Stephan: Erfundenes Mittelalter – fruchtlose These!, in: Skeptiker 2 (2002). External links Explanation of the "phantom time hypothesis" in English (pdf) Critique of Illig's personal interactions, not his hypothesis, in English A short explanation of the "phantom time hypothesis" Historical revisionism Pseudohistory Chronology Conspiracy theories 1991 introductions Alternative chronologies Otto III, Holy Roman Emperor Constantine VII Charlemagne
Phantom time conspiracy theory
[ "Physics" ]
1,811
[ "Spacetime", "Chronology", "Physical quantities", "Time" ]
3,091,760
https://en.wikipedia.org/wiki/Cycler
A cycler is a potential spacecraft on a closed transfer orbit that would pass close to two celestial bodies at regular intervals. Cyclers could be used for carrying heavy supplies, life support and radiation shielding. Concept A cycler encounters two or more bodies regularly by employing a free-return trajectory; such a trajectory was analysed by Arthur Schwaniger in 1963 in the form of a symmetrical orbit passing the Moon and Earth. Once the orbit is established, no propulsion is required to shuttle between the two, although some minor corrections may be necessary due to small perturbations in the orbit. The use of cyclers was considered in 1969 by Walter M. Hollister, who examined the case of an Earth–Venus cycler. Hollister did not have any particular mission in mind, but posited their use both for regular communication between two planets and for multi-planet flyby missions. Triple cycler An extension of the cycler concept is the triple cycler, such as an Earth–Venus–Mars cycler, or a moon-to-moon cycler in the Jovian system. Types of cyclers by purpose Venus cycler In his 1969 work, Walter M. Hollister examined the case of an Earth–Venus cycler. Lunar cycler A lunar cycler or Earth–Moon cycler is a cycler orbit, or spacecraft therein, which periodically passes close by the Earth and the Moon, using gravity assists and occasional propellant-powered corrections to maintain its trajectory between the two. If the fuel required to reach a particular cycler orbit from both the Earth and the Moon is modest, and the travel time between the two along the cycler is reasonable, then having a spacecraft in the cycler can provide an efficient and regular method for space transportation. Mars cycler A Mars cycler or Earth–Mars cycler is a spacecraft trajectory that encounters the Earth and Mars on a regular basis, or a spacecraft on such a trajectory. Interstellar cycler An interstellar cycler, or Schroeder cycler, is a theoretical spacecraft trajectory that encounters two or more stars on a regular basis, or a spacecraft on such a trajectory. An interstellar cycler would never slow down, instead using the Lorentz force for turning. The envisioned benefit is that the life support for an interstellar vehicle wouldn't have to be accelerated, only the payload, allowing more to be carried for a given energy budget. As an idea it was considered by P. C. Norem in a 1969 paper and popularized by Karl Schroeder in his 2002 novel Permanence. References Additional references Space Spacecraft
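The rhythm of any planet-to-planet cycler is set by the synodic period of the pair, 1/T_syn = |1/T_1 − 1/T_2|, since encounters can only recur when the two bodies return to the same relative geometry. A quick Python sketch (the orbital periods are rounded textbook values, used here as assumptions):

def synodic_period(t1, t2):
    """Synodic period of two bodies with orbital periods t1 and t2."""
    return 1.0 / abs(1.0 / t1 - 1.0 / t2)

# Rounded orbital periods in Julian years.
EARTH, MARS, VENUS = 1.000, 1.881, 0.615

print(f"Earth-Mars:  {synodic_period(EARTH, MARS):.2f} yr")   # ~2.14 yr
print(f"Earth-Venus: {synodic_period(EARTH, VENUS):.2f} yr")  # ~1.60 yr
# An Earth-Mars cycler is therefore built around a ~2.14-year rhythm,
# re-encountering each planet once per synodic period.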
Cycler
[ "Physics", "Astronomy", "Mathematics" ]
527
[ "Spacecraft stubs", "Astronomy stubs", "Space", "Geometry", "Spacetime" ]
3,091,815
https://en.wikipedia.org/wiki/Strecker%20amino%20acid%20synthesis
The Strecker amino acid synthesis, also known simply as the Strecker synthesis, is a method for the synthesis of amino acids by the reaction of an aldehyde with cyanide in the presence of ammonia. The condensation reaction yields an α-aminonitrile, which is subsequently hydrolyzed to give the desired amino acid. The method is used for the commercial production of racemic methionine from methional. Primary and secondary amines also give N-substituted amino acids. Likewise, the use of ketones, instead of aldehydes, gives α,α-disubstituted amino acids. Reaction mechanism In the first part of the reaction process, the carbonyl is converted to an iminium ion, to which a cyanide ion adds. First, the carbonyl oxygen of an aldehyde is protonated, followed by a nucleophilic attack of ammonia on the carbonyl carbon. After subsequent proton exchange, water is cleaved off to form the iminium ion intermediate. A cyanide ion then attacks the iminium carbon, yielding an aminonitrile. In the second part of the reaction process, the nitrile is hydrolyzed. First, the nitrile nitrogen of the aminonitrile is protonated, and the nitrile carbon is attacked by a water molecule. A 1,2-diamino-diol is then formed after proton exchange and a nucleophilic attack of water on the former nitrile carbon. Ammonia is subsequently eliminated after the protonation of the amino group, and finally the deprotonation of a hydroxyl group produces an amino acid. Asymmetric Strecker reactions One example of the Strecker synthesis is a multikilogram-scale synthesis of an L-valine derivative starting from methyl isopropyl ketone: the initial reaction product of 3-methyl-2-butanone with sodium cyanide and ammonia is resolved by application of L-tartaric acid. In contrast, asymmetric Strecker reactions require no resolving agent. By replacing ammonia with (S)-alpha-phenylethylamine as a chiral auxiliary, the ultimate reaction product was chiral alanine. Catalytic asymmetric Strecker reactions can be effected using thiourea-derived catalysts. In 2012, a BINOL-derived catalyst was employed to generate a chiral cyanide anion. History The German chemist Adolph Strecker discovered the series of chemical reactions that produce an amino acid from an aldehyde or ketone. Using ammonia or ammonium salts in this reaction gives unsubstituted amino acids. In the original Strecker reaction, acetaldehyde, ammonia, and hydrogen cyanide combined to form alanine after hydrolysis. Using primary and secondary amines in place of ammonia was shown to yield N-substituted amino acids. The classical Strecker synthesis gives racemic mixtures of α-amino acids as products, but several alternative procedures using asymmetric auxiliaries or asymmetric catalysts have been developed. An asymmetric Strecker reaction was reported by Harada in 1963. The first reported asymmetric synthesis via a chiral catalyst was published in 1996. However, this was retracted in 2023. Commercial syntheses of amino acids Several methods exist to synthesize amino acids aside from the Strecker synthesis. The commercial production of amino acids, however, usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Otherwise, amino acids are produced by enzymatic conversion of synthetic intermediates. 2-Aminothiazoline-4-carboxylic acid is an intermediate in one industrial synthesis of L-cysteine. Aspartic acid is produced by the addition of ammonia to fumarate using a lyase.
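The overall transformation can be written compactly in SMILES notation. A minimal sketch, assuming the open-source RDKit cheminformatics library is available (the acetaldehyde-to-alanine case from the History section; the stage names are illustrative):

```python
from rdkit import Chem

# Overall Strecker sequence for the original reaction:
# acetaldehyde + NH3 + HCN -> alpha-aminonitrile -> (hydrolysis) -> alanine
stages = {
    "acetaldehyde":       "CC=O",
    "alpha-aminonitrile": "CC(N)C#N",
    "alanine (racemic)":  "CC(N)C(=O)O",
}
for name, smiles in stages.items():
    mol = Chem.MolFromSmiles(smiles)           # parse and sanity-check
    print(f"{name}: {Chem.MolToSmiles(mol)}")  # canonical SMILES
```

Note that the plain SMILES strings carry no stereochemistry, matching the racemic product of the classical synthesis.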
References See also Bucherer–Bergs reaction Multiple component reactions Substitution reactions Name reactions Chemical synthesis of amino acids
Strecker amino acid synthesis
[ "Chemistry" ]
833
[ "Name reactions" ]
3,092,216
https://en.wikipedia.org/wiki/Ultrasonic%20testing
Ultrasonic testing (UT) is a family of non-destructive testing techniques based on the propagation of ultrasonic waves in the object or material tested. In most common UT applications, very short ultrasonic pulse waves with centre frequencies ranging from 0.1 to 15 MHz, and occasionally up to 50 MHz, are transmitted into materials to detect internal flaws or to characterize materials. A common example is ultrasonic thickness measurement, which tests the thickness of the test object, for example to monitor pipework corrosion and erosion. Ultrasonic testing is extensively used to detect flaws in welds. Ultrasonic testing is often performed on steel and other metals and alloys, though it can also be used on concrete, wood and composites, albeit with less resolution. It is used in many industries including steel and aluminium construction, metallurgy, manufacturing, aerospace, automotive and other transportation sectors. History The first efforts to use ultrasonic testing to detect flaws in solid material occurred in the 1930s. On May 27, 1940, U.S. researcher Dr. Floyd Firestone of the University of Michigan applied for a U.S. invention patent for the first practical ultrasonic testing method. The patent was granted on April 21, 1942 as U.S. Patent No. 2,280,226, titled "Flaw Detecting Device and Measuring Instrument". Extracts from the first two paragraphs of the patent for this entirely new nondestructive testing method succinctly describe the basics of such ultrasonic testing. "My invention pertains to a device for detecting the presence of inhomogeneities of density or elasticity in materials. For instance, if a casting has a hole or a crack within it, my device allows the presence of the flaw to be detected and its position located, even though the flaw lies entirely within the casting and no portion of it extends out to the surface. ... The general principle of my device consists of sending high frequency vibrations into the part to be inspected and the determination of the time intervals of the arrival of the direct and reflected vibrations at one or more stations on the surface of the part." James F. McNulty (U.S. radio engineer) of Automation Industries, Inc., then in El Segundo, California, an early improver of the many foibles and limits of this and other nondestructive testing methods, taught in further detail on ultrasonic testing in his U.S. Patent 3,260,105 (application filed December 21, 1962, granted July 12, 1966, titled "Ultrasonic Testing Apparatus and Method") that "Basically ultrasonic testing is performed by applying to a piezoelectric crystal transducer periodic electrical pulses of ultrasonic frequency. The crystal vibrates at the ultrasonic frequency and is mechanically coupled to the surface of the specimen to be tested. This coupling may be effected by immersion of both the transducer and the specimen in a body of liquid or by actual contact through a thin film of liquid such as oil. The ultrasonic vibrations pass through the specimen and are reflected by any discontinuities which may be encountered. The echo pulses that are reflected are received by the same or by a different transducer and are converted into electrical signals which indicate the presence of the defect." To characterize microstructural features in the early stages of fatigue or creep damage, more advanced nonlinear ultrasonic tests should be employed. These nonlinear methods are based on the fact that an intense ultrasonic wave becomes distorted as it encounters micro-damage in the material.
The intensity of distortion is correlated with the level of damage. This intensity can be quantified by the acoustic nonlinearity parameter (β), which is related to the first and second harmonic amplitudes. These amplitudes can be measured by harmonic decomposition of the ultrasonic signal through fast Fourier transformation or wavelet transformation. How it works In ultrasonic testing, an ultrasound transducer connected to a diagnostic machine is passed over the object being inspected. The transducer is typically separated from the test object by a couplant such as a gel, oil or water, as in immersion testing. However, when ultrasonic testing is conducted with an Electromagnetic Acoustic Transducer (EMAT), the use of couplant is not required. There are two methods of receiving the ultrasound waveform: reflection and attenuation. In reflection (or pulse-echo) mode, the transducer performs both the sending and the receiving of the pulsed waves as the "sound" is reflected back to the device. Reflected ultrasound comes from an interface, such as the back wall of the object, or from an imperfection within the object. The diagnostic machine displays these results in the form of a signal with an amplitude representing the intensity of the reflection and a distance representing the arrival time of the reflection. In attenuation (or through-transmission) mode, a transmitter sends ultrasound through one surface, and a separate receiver detects the amount that has reached it on another surface after travelling through the medium. Imperfections or other conditions in the space between the transmitter and receiver reduce the amount of sound transmitted, thus revealing their presence. Using a couplant increases the efficiency of the process by reducing the losses in ultrasonic wave energy due to separation between the surfaces. Examples One example of using ultrasound to probe material properties is the measurement of grain size. Unlike destructive methods, ultrasound offers a way to measure grain size non-destructively, with high detection efficiency. Grain size can be measured with ultrasound by evaluating ultrasonic velocities, attenuation, and backscatter features. The theoretical foundation for the scattering attenuation model was developed by Stanke, Kino, and Weaver. At constant frequency, the scattering attenuation coefficient depends mainly on the grain size; Zeng et al. found that in pure niobium, attenuation is linearly correlated with grain size through grain-boundary scattering. This concept can be used to solve inversely for the grain size in the time domain once the scattering attenuation coefficient has been measured from test data, providing a non-destructive way to predict a material property with rather simple instruments. Features Advantages High penetrating power allows the detection of flaws deep in the part. High sensitivity, permitting the detection of extremely small flaws. Greater accuracy than other non-destructive methods in determining the depth of internal flaws and the thickness of parts with parallel surfaces. Some capability of estimating the size, orientation, shape and nature of defects. Some capability of estimating the structure of alloys of components with different acoustic properties. Non-hazardous to operations or to nearby personnel and has no effect on equipment and materials in the vicinity. Capable of portable, highly automated or remote operation.
Results are immediate, allowing on-the-spot decisions to be made. Only one surface of the product being inspected needs to be accessible. Disadvantages Manual operation requires careful attention by experienced technicians. The transducers respond both to the normal structure of some materials and to tolerable anomalies of other specimens (both termed "noise"), as well as to faults severe enough to compromise specimen integrity; these signals must be distinguished by a skilled technician, possibly requiring follow-up with other nondestructive testing methods. Extensive technical knowledge is required for the development of inspection procedures. Rough surface finish, irregular geometry, small parts, thin thicknesses, or inhomogeneous material composition can make testing difficult. The surface must be prepared by cleaning and removing loose scale, paint, etc., although paint that is properly bonded to a surface may not need to be removed. Couplants are needed to effectively transfer ultrasonic wave energy between transducers and parts being inspected, unless a non-contact technique is used. Non-contact techniques include laser and electromagnetic acoustic transducers (EMAT). Equipment can be expensive. Reference standards and calibration are required. Standards International Organization for Standardization (ISO) ISO 2400: Non-destructive testing - Ultrasonic testing - Specification for calibration block No. 1 (2012) ISO 5577: Non-destructive testing - Ultrasonic inspection - Vocabulary (2000) ISO 7963: Non-destructive testing - Ultrasonic testing - Specification for calibration block No. 2 (2006) ISO 10863: Non-destructive testing of welds - Ultrasonic testing - Use of time-of-flight diffraction technique (TOFD) (2011) ISO 11666: Non-destructive testing of welds - Ultrasonic testing - Acceptance levels (2010) ISO 16809: Non-destructive testing - Ultrasonic thickness measurement (2012) ISO 16831: Non-destructive testing - Ultrasonic testing - Characterization and verification of ultrasonic thickness measuring equipment (2012) ISO 17640: Non-destructive testing of welds - Ultrasonic testing - Techniques, testing levels, and assessment (2010) ISO 22825: Non-destructive testing of welds - Ultrasonic testing - Testing of welds in austenitic steels and nickel-based alloys (2012) European Committee for Standardization (CEN) EN 583, Non-destructive testing - Ultrasonic examination EN 1330-4, Non-destructive testing - Terminology - Part 4: Terms used in ultrasonic testing EN 12668-1, Non-destructive testing - Characterization and verification of ultrasonic examination equipment - Part 1: Instruments EN 12668-2, Non-destructive testing - Characterization and verification of ultrasonic examination equipment - Part 2: Probes EN 12668-3, Non-destructive testing - Characterization and verification of ultrasonic examination equipment - Part 3: Combined equipment EN 12680, Founding - Ultrasonic examination EN 14127, Non-destructive testing - Ultrasonic thickness measurement (Note: some CEN standards are accepted in Germany as DIN EN, and in the Czech Republic as CSN EN.) See also Non-Contact Ultrasound Phased array ultrasonics Time-of-flight diffraction ultrasonics (TOFD) Time-of-flight ultrasonic determination of 3D elastic constants (TOF) Internal rotary inspection system (IRIS) ultrasonics for tubes EMAT Electromagnetic Acoustic Transducer ART (Acoustic Resonance Technology) References Further reading Albert S. Birks, Robert E. Green, Jr., technical editors; Paul McIntire, editor. Ultrasonic testing, 2nd ed.
Columbus, OH: American Society for Nondestructive Testing, 1991. Josef Krautkrämer, Herbert Krautkrämer. Ultrasonic testing of materials, 4th fully revised ed. Berlin; New York: Springer-Verlag, 1990. J.C. Drury. Ultrasonic Flaw Detection for Technicians, 3rd ed., UK: Silverwing Ltd., 2004. (See Chapter 1 online (PDF, 61 kB).) Nondestructive Testing Handbook, Third ed.: Volume 7, Ultrasonic Testing. Columbus, OH: American Society for Nondestructive Testing. L. Angrisani, L. Bechou, D. Dallet, P. Daponte, Y. Ousten. Detection and location of defects in electronic devices by means of scanning ultrasonic microscopy and the wavelet transform. Measurement, Volume 31, Issue 2, March 2002, Pages 77–91. Nondestructive testing Ultrasound Welding
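The quantities described above (echo arrival time in pulse-echo mode, and the harmonic amplitudes behind the nonlinearity parameter β) can be illustrated numerically. A minimal sketch with NumPy, using synthetic data; the sound velocity is a typical textbook value for steel, and the uncalibrated ratio A2/A1² stands in for a fully calibrated β:

```python
import numpy as np

# Pulse-echo ranging: an echo arriving t seconds after the pulse
# corresponds to a reflector at depth v * t / 2 (down and back).
v_steel = 5900.0              # longitudinal velocity in steel, m/s (typical)
echo_time = 20e-6             # measured round-trip time, s
print("reflector depth:", v_steel * echo_time / 2, "m")   # 0.059 m

# Relative nonlinearity parameter from harmonic decomposition (FFT):
fs, f0 = 100e6, 5e6                        # sample rate, drive frequency
t = np.arange(0, 20 / f0, 1 / fs)          # 20 cycles of the received tone
sig = np.sin(2*np.pi*f0*t) + 0.02*np.sin(2*np.pi*2*f0*t)  # distorted signal
spec = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), 1 / fs)
a1 = spec[np.argmin(np.abs(freqs - f0))]       # fundamental amplitude
a2 = spec[np.argmin(np.abs(freqs - 2 * f0))]   # second-harmonic amplitude
print("relative beta:", a2 / a1**2)            # grows with micro-damage
```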
Ultrasonic testing
[ "Materials_science", "Engineering" ]
2,325
[ "Nondestructive testing", "Materials testing", "Welding", "Mechanical engineering" ]
3,092,253
https://en.wikipedia.org/wiki/SGI%20Prism
The Silicon Graphics Prism is a series of visualization computer systems developed and manufactured by Silicon Graphics (SGI). Released in April 2005, the Prism's basic system architecture is based on the Altix 3000 servers, but with graphics hardware added. The Prism uses the Linux operating system and the OpenGL software library. The SGI Prism was offered in three models: Power, Team and Extreme. The Power level supports two to eight Itanium 2 processors, up to 96 GB of memory and two to four graphics pipelines. The Team level supports 8 to 16 Itanium 2 processors, up to 192 GB of memory and four to eight graphics pipelines. The Extreme level supports 16 to 256 Itanium 2 processors, up to 3 TB of memory and 4 to 16 graphics pipelines. The graphics pipelines for the Prism are ATI FireGL cards based on either the R350 or R420 GPUs. References Prism Prism Very long instruction word computing 64-bit computers
SGI Prism
[ "Technology" ]
200
[ "Computing stubs", "Computer hardware stubs" ]
3,092,512
https://en.wikipedia.org/wiki/Beach%20house
A beach house is a house on or near a beach, sometimes used as a vacation or second home for people who travel to the house on weekends or during vacation periods. Beach houses are often designed to weather the climate they are built in, and the building materials and construction methods used in beach housing vary widely around the world. Beach houses require special paint to protect them from salt water, and a property built on sand needs a foundation that meets special requirements. Beach houses are often associated with beach gardens, with special planting and a particular type of leisure use. One of the most famous twentieth-century beach gardens was constructed by Derek Jarman at Dungeness, England. It celebrated local materials, native plants and the openness of the site. Other beach gardens have tried to create an isolated microclimate. American architect Andrew Geller designed sculptural beach houses in the coastal regions of New England during the 1950s and 1960s. See also List of real estate topics Niche real estate Beach hut References External links House types Coastal construction Luxury real estate Water and the environment
Beach house
[ "Engineering" ]
218
[ "Construction", "Coastal construction" ]
3,092,530
https://en.wikipedia.org/wiki/Mother%20Box
Mother Boxes are fictional devices in Jack Kirby's Fourth World setting in the DC Universe. The Mother Boxes appeared in the feature films Justice League and Zack Snyder's Justice League of the DC Extended Universe. History Created by Apokoliptian scientist Himon using the mysterious Element X, they are generally thought to be sentient, miniaturized, portable supercomputers, although their true nature and origins are unknown. They possess various powers, including teleportation, energy manipulation, and healing. Despite their name, Mother Boxes are not always box-shaped. Additionally, the New Gods of Apokolips use equivalents of Mother Boxes called Father Boxes. Interpretation In a 2008 article, John Hodgman observed: "Mister Miracle, a warrior of Apokolips who flees to Earth to become a 'super escape artist', keeps a 'Mother Box' up his sleeve — a small, living computer that can enable its user to do almost anything, so long as it is sufficiently loved. In Kirby's world, all machines are totems: weapons and strange vehicles fuse technology and magic, and the Mother Box in particular uncannily anticipates the gadget fetishism that infects our lives today. The Bluetooth headset may well be a Kirby creation". Similarly, Mike Cecchini of Den of Geek described the Mother Box as "an alien smartphone that can do anything from heal the injured to teleport you across time and space", and Christian Holub in Entertainment Weekly called it "basically a smartphone, as designed by gods". Mother Boxes have also been interpreted as a symbol of the "ideal mother" and an example of the role of motherhood in Jack Kirby's Fourth World stories. In other media Television Mother Boxes appear in series set in the DC Animated Universe (DCAU). Mother Boxes and Father Boxes appear in Young Justice. Mother Boxes appear in Justice League Action. A Mother Box appears in DC Super Hero Girls: Super Hero High. A Mother Box appears in the Harley Quinn episode "Inner (Para) Demons". Film DC Extended Universe In Batman v Superman: Dawn of Justice, a Mother Box appears briefly in footage that Batman obtained from Lex Luthor. The Box is the final component that transforms Victor Stone into Cyborg, thus saving his life in the process. Additionally, Steppenwolf and his Mother Boxes appear in a post-credits scene in the Ultimate Edition of the film. In Justice League, Steppenwolf is in search of three Mother Boxes hidden away on Earth. Two are located in Themyscira and Atlantis, while the third is the one that had been seen in Batman v Superman and was used to activate Cyborg. Previously, Steppenwolf had used the Boxes in his original invasion of Earth, intending to use them to terraform the planet before being driven off by the combined force of the Olympian Gods, Atlanteans, Amazons, humans, and Yalan Gur of the Green Lantern Corps. After the war, the boxes were left on Earth, and the Amazons, Atlanteans, and humans each took custody of one of them. When all three boxes awaken after years of dormancy, Steppenwolf returns seeking to use them to finish what he had started. Eventually, after the Justice League defeat Steppenwolf, the first two boxes are each returned to their respective custodies, while Silas Stone begins researching the third box with his son to explore the extent of its powers. Zack Snyder's Justice League depicts the Mother Boxes generally the same as in the theatrical version. 
After a failed invasion of Earth by Darkseid thousands of years ago, the Mother Boxes are separated and hidden away as in the theatrical release. The Amazonian Mother Box "awakens" upon Superman's death at the end of Batman v Superman, and alerts Steppenwolf to its location. He escapes with it after a short battle with the Amazons and proceeds to search for the other two by capturing and interrogating Atlanteans and S.T.A.R. Labs scientists. Steppenwolf seizes the Atlantean Mother Box after a fight with Aquaman and Mera. The protagonists resurrect Superman with the third Mother Box, and Steppenwolf is able to claim it after an amnesiac Superman attacks the other superheroes. The superheroes locate Steppenwolf's fortress in Russia thanks to Silas Stone's self-sacrifice, which allows them to detect the third Mother Box's location. They launch an attack on the fortress so Cyborg can interface with the Boxes and prevent the Unity. After they fail and Earth is destroyed, the Flash travels back in time to enable Cyborg to successfully deactivate the Boxes, preventing the Unity and defeating Steppenwolf, who is subsequently killed through the combined efforts of Aquaman, Superman, and Wonder Woman. In the aftermath, DeSaad informs Darkseid that the Mother Boxes are now destroyed, forcing Darkseid to conquer Earth using "the old ways", through military conquest. In the Blu-ray release of Wonder Woman, the epilogue Etta's Mission is included, detailing events that transpired after those of the film's story. Etta Candy's titular mission involves her, Diana Prince, and Steve Trevor retrieving one of the three Mother Boxes. Animation Two Mother Boxes appear in Superman/Batman: Apocalypse. Numerous Mother Boxes appear in Justice League: War, being used to transport Parademons to Earth. When the Mother Boxes were activated, one of them was in Victor Stone's possession and badly wounded him, leading to his transformation into Cyborg. His newfound cybernetics gave him an intimate link to machinery that allowed him to communicate with Mother Boxes. Ultimately, he uses several Boom Tubes to repel the Apokoliptian invasion forces. In Reign of the Supermen, Lex Luthor uses the Mother Box to free the Justice League, who were imprisoned in another dimension, and help Steel and Superboy defeat the drones. Video games A Mother Box is central to the plot of Justice League Heroes, as it is coveted by Brainiac and used by Darkseid as a way to transform Earth into a "New Apokolips". In Injustice 2, Mother Boxes serve as the game's loot box reward system, offering differing rewards depending on rarity. Additionally, Cyborg utilizes them in gameplay to create drones that can target the opponent from multiple directions. In Lego DC Super-Villains, a Mother Box is stolen from Wayne Tech and kept by Harley Quinn, who names it "Boxy". It is also revealed that the Mother Box contains a fragment of the Anti-Life Equation, which is then absorbed by the Rookie. References Fictional computers Fictional elements introduced in 1971 Fourth World (comics)
Mother Box
[ "Technology" ]
1,407
[ "Fictional computers", "Computers" ]
3,092,841
https://en.wikipedia.org/wiki/Flattery
Flattery, also called adulation or blandishment, is the act of giving excessive compliments, generally for the purpose of ingratiating oneself with the subject. It is also used in pick-up lines when attempting to initiate sexual or romantic courtship. Historically, flattery has been used as a standard form of discourse when addressing a king or queen. In the Renaissance, it was a common practice among writers to flatter the reigning monarch, as Edmund Spenser flattered Queen Elizabeth I in The Faerie Queene, William Shakespeare flattered King James I in Macbeth, Niccolò Machiavelli flattered Lorenzo II de' Medici in The Prince, and Jean de La Fontaine flattered Louis XIV of France in his Fables. Many associations with flattery are negative. Negative descriptions of flattery range at least as far back in history as the Bible. In the Divine Comedy, Dante depicts flatterers wading in human excrement in the second bolgia of the 8th Circle of Hell, stating that their words were the equivalent of excrement. An insincere flatterer is a stock character in many literary works. Examples include Wormtongue from J. R. R. Tolkien's The Lord of the Rings, Goneril and Regan from King Lear, and Iago from Othello. Historians and philosophers have paid attention to flattery as a problem in ethics and politics. Plutarch wrote an essay on "How to Tell a Flatterer from a Friend". Julius Caesar was notorious for his flattery. In his In Praise of Folly, Erasmus commended flattery because it "raises downcast spirits, comforts the sad, rouses the apathetic, stirs up the stolid, cheers the sick, restrains the headstrong, brings lovers together and keeps them united." "To flatter" is also used to refer to artwork or clothing that makes the subject or wearer appear more attractive, as in: The king was pleased with the portrait, as it was very flattering of his girth. I think I'll wear the green dress because it flatters my legs. See also References External links Quotes about flattery Interpersonal relationships Persuasion techniques
Flattery
[ "Biology" ]
451
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
3,092,932
https://en.wikipedia.org/wiki/Aragh%20Sagi
Aragh sagi (literally "doggy distilled [beverage]", with "doggy" a metaphor for extreme potency) is a type of Iranian moonshine. This distilled alcoholic beverage usually contains around 50% alcohol. However, since it is produced without much quality control, the alcohol content varies, at times reaching 80%. A high-quality aragh sagi tastes similar to grappa. Some Western sources call it Persian or Iranian vodka. Etymology Aragh ("arak") denotes aromatic liquids that are produced by distillation from herbs and seeds, for example mint or anise. Traditional aragh sagi made in Iran is produced only from raisins. Aragh sagi literally means "doggy distilled [beverage]", from sag ("dog" in Persian) being a metaphor for extreme. In the 1960s, the Meikadeh Company produced aragh with a picture of a dog (a beagle) on the bottle label as a logo; the public soon started referring to it as aragh sagi or "doggy aragh", and the name stuck. Legality Since the Iranian revolution in 1979, alcohol has been illegal in Iran. As such, homemade aragh sagi in Iran is produced illegally. History It is usually produced in homes from fermented raisins. Its production and possession by ordinary citizens is considered illegal in Iran (as is the case for all alcoholic beverages in Iran). Prior to the 1979 revolution, this product had been produced traditionally in several cities, such as Yazd. Since it was outlawed after 1979, it became a black-market, underground business. Today, aragh sagi is widely considered a cheap alcoholic beverage that consumers choose due to a lack of other available alternatives. Cyrus Premium Arak Cyrus Premium Arak is produced by the Cyrus Company in the Netherlands. It is made in small batches using copper pot stills: fermented raisins are distilled to create a clear arak with an alcohol content of 40%. The raisins are sourced from Iran and Turkey. References Distilled drinks Iranian drinks Iranian distilled drinks
Aragh Sagi
[ "Chemistry" ]
471
[ "Distillation", "Distilled drinks" ]
3,092,994
https://en.wikipedia.org/wiki/Intermediate%20Jacobian
In mathematics, the intermediate Jacobian of a compact Kähler manifold or Hodge structure is a complex torus that is a common generalization of the Jacobian variety of a curve, the Picard variety, and the Albanese variety. It is obtained by putting a complex structure on the torus H^n(M, R)/H^n(M, Z) for n odd. There are several different natural ways to put a complex structure on this torus, giving several different sorts of intermediate Jacobians, including one due to Weil and one due to Griffiths. The ones constructed by Weil have natural polarizations if M is projective, and so are abelian varieties, while the ones constructed by Griffiths behave well under holomorphic deformations. A complex structure on a real vector space is given by an automorphism I with square I^2 = -1. The complex structures on H^n(M, R) are defined using the Hodge decomposition H^n(M, R) ⊗ C = H^{n,0} ⊕ H^{n-1,1} ⊕ ⋯ ⊕ H^{0,n}. On H^{p,q} the Weil complex structure I_W is multiplication by i^{p-q}, while the Griffiths complex structure I_G is multiplication by i if p > q and by -i if p < q. Both complex structures map H^n(M, R) into itself and so define complex structures on it. For n = 1 the intermediate Jacobian is the Picard variety, and for n = 2 dim(M) - 1 it is the Albanese variety. In these two extreme cases the constructions of Weil and Griffiths are equivalent. Clemens and Griffiths used intermediate Jacobians to show that non-singular cubic threefolds are not rational, even though they are unirational. See also Deligne cohomology References Hodge theory
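That these structures preserve the real cohomology, and genuinely square to -1, is a short standard check; it is sketched here for the Weil structure:

```latex
% For \omega \in H^{p,q} with p + q = n odd, I_W acts as i^{p-q}.
% Complex conjugation carries H^{p,q} to H^{q,p}, so
\overline{I_W\,\omega}
  = \overline{i^{\,p-q}}\;\overline{\omega}
  = i^{\,q-p}\,\overline{\omega}
  = I_W\,\overline{\omega},
% i.e. I_W commutes with conjugation and therefore preserves the real
% subspace H^n(M,\mathbf{R}). Moreover
I_W^2 = i^{\,2(p-q)} = (-1)^{p-q} = -1,
% since p - q is odd whenever p + q = n is odd.
```

The Griffiths structure interchanges the conjugate summands H^{p,q} and H^{q,p} in the same consistent way, so the same argument applies.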
Intermediate Jacobian
[ "Engineering" ]
274
[ "Tensors", "Differential forms", "Hodge theory" ]
3,093,327
https://en.wikipedia.org/wiki/Caesium-137
Caesium-137 (cesium-137 in US spelling), or radiocaesium, is a radioactive isotope of caesium that is formed as one of the more common fission products by the nuclear fission of uranium-235 and other fissionable isotopes in nuclear reactors and nuclear weapons. Trace quantities also originate from spontaneous fission of uranium-238. It is among the most problematic of the short-to-medium-lifetime fission products. Caesium has a relatively low boiling point of 671 °C and easily becomes volatile when released suddenly at high temperature, as in the case of the Chernobyl nuclear accident and with atomic explosions, and can travel very long distances in the air. After being deposited onto the soil as radioactive fallout, it moves and spreads easily in the environment because of the high water solubility of caesium's most common chemical compounds, which are salts. Caesium-137 was discovered by Glenn T. Seaborg and Margaret Melhase. Decay Caesium-137 has a half-life of about 30.05 years. About 94.6% decays by beta emission to a metastable nuclear isomer of barium: barium-137m (137mBa, Ba-137m). The remainder directly populates the ground state of 137Ba, which is stable. Barium-137m has a half-life of about 153 seconds and is responsible for all of the gamma-ray emissions in samples of 137Cs. Barium-137m decays to the ground state by emission of photons having energy 0.6617 MeV. A total of 85.1% of 137Cs decays generate gamma-ray emission in this manner. One gram of 137Cs has an activity of 3.215 terabecquerels (TBq). Uses Caesium-137 has a number of practical uses. In small amounts, it is used to calibrate radiation-detection equipment. In medicine, it is used in radiation therapy. In industry, it is used in flow meters, thickness gauges, moisture-density gauges (for density readings, with americium-241/beryllium providing the moisture reading), and in borehole logging devices. Caesium-137 is not widely used for industrial radiography because it is hard to obtain a material with a very high specific activity and a well-defined (and small) shape, as caesium from used nuclear fuel contains stable caesium-133 and also long-lived caesium-135, and isotope separation is too costly compared to cheaper alternatives. Also, the higher-specific-activity caesium sources tend to be made from very soluble caesium chloride (CsCl); as a result, if a radiography source were damaged, it would increase the spread of the contamination. It is possible to make water-insoluble caesium sources, using various ferrocyanide compounds such as ammonium ferric hexacyanoferrate (AFCF, Giese salt, ferric ammonium ferrocyanide), but their specific activity will be much lower. Other chemically inert caesium compounds include caesium aluminosilicate glasses akin to the natural mineral pollucite; the latter have been used in demonstrations of chemically stable, water-insoluble forms of nuclear waste for disposal in deep geological repositories. A large emitting volume will harm the image quality in radiography. The isotopes iridium-192 and cobalt-60 are preferred for radiography, since iridium and cobalt are chemically non-reactive metals and can be obtained with much higher specific activities by the activation of stable iridium-191 and cobalt-59 in high-flux reactors. However, while caesium-137 is a waste product produced in great quantities in nuclear fission reactors, iridium-192 and cobalt-60 must be specifically produced in commercial and research reactors, and their life cycle entails the destruction of the high-value elements involved.
Cobalt-60 decays to stable nickel, whereas iridium-192 can decay to either stable osmium or platinum. Due to the residual radioactivity and legal hurdles, the resulting material is not commonly recovered even from "spent" radioactive sources, meaning in essence that the entire mass is "lost" for non-radioactive uses. As an almost purely synthetic isotope, caesium-137 has been used to date wine and detect counterfeits, and as a relative-dating material for assessing the age of sedimentation occurring after 1945. Caesium-137 is also used as a radioactive tracer in geologic research to measure soil erosion and deposition; its affinity for fine sediments is useful in this application. Health risks Caesium-137 reacts with water, producing a water-soluble compound (caesium hydroxide). The biological behaviour of caesium is similar to that of potassium and rubidium. After entering the body, caesium becomes more or less uniformly distributed throughout the body, with the highest concentrations in soft tissue. However, unlike group 2 radionuclides such as radium and strontium-90, caesium does not bioaccumulate and is excreted relatively quickly. The biological half-life of caesium is about 70 days. A 1961 experiment showed that mice dosed with 21.5 μCi/g had a 50% fatality rate within 30 days (implying an LD50 of 245 μg/kg). A similar experiment in 1972 showed that when dogs are subjected to a whole-body burden of 3800 μCi/kg (140 MBq/kg, or approximately 44 μg/kg) of caesium-137 (and 950 to 1400 rads), they die within 33 days, while animals with half of that burden all survived for a year. Research has shown a remarkable concentration of 137Cs in the exocrine cells of the pancreas, which are those most affected by cancer. In 2003, in autopsies performed on six children who died in the polluted area near Chernobyl (for reasons not directly linked to the Chernobyl disaster, mostly sepsis), where a higher incidence of pancreatic tumors had also been reported, Bandazhevsky found a concentration of 137Cs 3.9 times higher than in their livers (1359 vs 347 Bq/kg, equivalent to 36 and 9.3 nCi/kg in these organs, with 600 Bq/kg = 16 nCi/kg in the body according to measurements), demonstrating that pancreatic tissue strongly accumulates radioactive caesium and secretes it into the intestine. Accidental ingestion of caesium-137 can be treated with Prussian blue (Fe4[Fe(CN)6]3), which binds to it chemically and reduces the biological half-life to 30 days. Environmental contamination Caesium-137, along with the radioactive isotopes caesium-134, iodine-131, xenon-133, and strontium-90, was released into the environment during nearly all nuclear weapon tests and some nuclear accidents, most notably the Chernobyl disaster and the Fukushima Daiichi disaster. Caesium-137 in the environment is substantially anthropogenic (human-made). Caesium-137 is produced from the nuclear fission of plutonium and uranium, and decays into barium-137. By observing the characteristic gamma rays emitted by this isotope, one can determine whether the contents of a given sealed container were made before or after the first atomic bomb explosion (Trinity test, 16 July 1945), which spread some of it into the atmosphere, quickly distributing trace amounts of it around the globe. This procedure has been used by researchers to check the authenticity of certain rare wines, most notably the purported "Jefferson bottles". Surface soils and sediments are also dated by measuring the activity of 137Cs.
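The decay figures quoted earlier are easy to check from first principles. A minimal sketch in Python, using the half-life and atomic mass given above and the relation A = λN:

```python
import math

HALF_LIFE_S = 30.05 * 365.25 * 86400      # half-life of Cs-137 in seconds
AVOGADRO    = 6.02214e23                  # atoms per mole
MOLAR_MASS  = 136.907                     # g/mol for Cs-137

lam = math.log(2) / HALF_LIFE_S           # decay constant, 1/s
atoms_per_gram = AVOGADRO / MOLAR_MASS
activity = lam * atoms_per_gram           # decays per second per gram
print(f"{activity / 1e12:.2f} TBq/g")     # ~3.21 TBq/g, matching the text

# Fraction of Chernobyl-deposited Cs-137 (1986) remaining in 2016:
print(0.5 ** ((2016 - 1986) / 30.05))     # ~0.50, i.e. decayed by half
```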
Nuclear bomb fallout Bombs tested in the Arctic at Novaya Zemlya and bombs detonated in or near the stratosphere released caesium-137 that was deposited in upper Lapland, Finland. Measurements of caesium-137 there in the 1960s reportedly reached 45,000 becquerels; figures from 2011 have a mid-range of about 1,100 becquerels. Cancer cases are nevertheless reportedly no more common there than elsewhere. Chernobyl disaster Today, and for the next few hundred years or so, caesium-137 and strontium-90 continue to be the principal source of radiation in the zone of alienation around the Chernobyl nuclear power plant, and pose the greatest risk to health, owing to their approximately 30-year half-lives and biological uptake. The mean contamination of caesium-137 in Germany following the Chernobyl disaster was 2000 to 4000 Bq/m2. This corresponds to a contamination of 1 mg/km2 of caesium-137, totaling about 500 grams deposited over all of Germany. In Scandinavia, some reindeer and sheep exceeded the Norwegian legal limit (3000 Bq/kg) 26 years after Chernobyl. As of 2016, the Chernobyl caesium-137 had decayed by half, though it could have been locally concentrated by much larger factors. Fukushima Daiichi disaster In April 2011, elevated levels of caesium-137 were also found in the environment after the Fukushima Daiichi nuclear disaster in Japan. In July 2011, meat from 11 cows shipped to Tokyo from Fukushima Prefecture was found to have 1,530 to 3,200 becquerels per kilogram of 137Cs, considerably exceeding the Japanese legal limit of 500 becquerels per kilogram at that time. In March 2013, a fish caught near the plant had a record 740,000 becquerels per kilogram of radioactive caesium, above the government limit of 100 becquerels per kilogram. A 2013 paper in Scientific Reports found that for a forest site 50 km from the stricken plant, 137Cs concentrations were high in leaf litter, fungi and detritivores, but low in herbivores. By the end of 2014, "Fukushima-derived radiocaesium had spread into the whole western North Pacific Ocean", transported by the North Pacific current from Japan to the Gulf of Alaska. It has been measured in the surface layer down to 200 meters, and south of the current area down to 400 meters. Caesium-137 is reported to be the major health concern in Fukushima. A number of techniques are being considered that would be able to strip out 80% to 95% of the caesium from contaminated soil and other materials efficiently and without destroying the organic material in the soil; these include hydrothermal blasting. The caesium precipitated with ferric ferrocyanide (Prussian blue) would be the only waste requiring special burial sites. The aim is to get annual exposure from the contaminated environment down to 1 mSv above background. The most contaminated area, where radiation doses are greater than 50 mSv/year, must remain off limits, but some areas that are currently less than 5 mSv/year may be decontaminated, allowing 22,000 residents to return. Incidents and accidents Caesium-137 gamma sources have been involved in several radiological accidents and incidents. 1987 Goiânia, Goiás, Brazil In the Goiânia accident of 1987, an improperly disposed-of radiation therapy system from an abandoned clinic in Goiânia, Brazil, was removed, then cracked open to be sold in junkyards. The glowing caesium salt was then sold to curious, unadvised buyers. This led to four confirmed deaths and several serious injuries from radiation contamination.
1989 Kramatorsk, Ukraine The Kramatorsk radiological accident happened in 1989, when a small capsule of caesium-137, 8 × 4 mm in size, was found inside the concrete wall of an apartment building in Kramatorsk, Ukrainian SSR. It is believed that the capsule, originally part of a measurement device, was lost in the late 1970s and ended up mixed with the gravel used to construct the building in 1980. Over 9 years, two families had lived in the apartment. By the time the capsule was discovered, 6 residents of the building had died, 4 of them from leukemia, and 17 more had received varying doses of radiation. 1994 Tammiku, Estonia The 1994 Tammiku incident involved the theft of radioactive material from a nuclear waste storage facility in Männiku, Saku Parish, Harju County, Estonia. Three brothers, unaware of the facility's nature, broke into a shed while scavenging for scrap metal. One of the brothers received a 4,000 rad whole-body dose from a caesium-137 source that had been released from a damaged container, succumbing to radiation poisoning 12 days later. 1997 Georgia In 1997, several Georgian soldiers suffered radiation poisoning and burns. These were eventually traced back to training sources left abandoned, forgotten, and unlabeled after the dissolution of the Soviet Union. One was a caesium-137 pellet in a pocket of a shared jacket that released about 130,000 times the level of background radiation at a distance of 1 meter. 1998 Los Barrios, Cádiz, Spain In the Acerinox accident of 1998, the Spanish recycling company Acerinox accidentally melted down a mass of radioactive caesium-137 that came from a gamma-ray generator. 2009 Tongchuan, Shaanxi, China In 2009, a Chinese cement company in Tongchuan, Shaanxi Province, demolishing an old, unused cement plant, did not follow standards for handling radioactive materials. This caused some caesium-137 from a measuring instrument to be included with eight truckloads of scrap metal on its way to a steel mill, where the radioactive caesium was melted down into the steel. 2015 University of Tromsø, Norway In March 2015, the Norwegian University of Tromsø lost 8 radioactive samples, including samples of caesium-137, americium-241, and strontium-90. The samples were moved out of a secure location to be used for education, and when they were supposed to be returned, the university was unable to find them. As of the most recent reports, the samples were still missing. 2016 Helsinki, Finland On 3 and 4 March 2016, unusually high levels of caesium-137 were detected in the air in Helsinki, Finland. According to STUK, the country's nuclear regulator, measurements showed 4,000 μBq/m3 – about 1,000 times the usual level. An investigation by the agency traced the source to a building from which STUK and a radioactive waste treatment company operate. 2019 Seattle, Washington, United States Thirteen people were exposed to caesium-137 in May 2019 at the Research and Training building in the Harborview Medical Center complex. A contract crew was transferring the caesium from the lab to a truck when the powder was spilled. Five people were decontaminated and released, but 8 who were more directly exposed were taken to the hospital, while the research building was evacuated. 2023 Western Australia, Australia Public health authorities in Western Australia issued an emergency alert for a stretch of road measuring about 1,400 km after a capsule containing caesium-137 was lost in transport on 25 January 2023.
The 8 mm capsule contained a small quantity of the radioactive material when it disappeared from a truck. The State Government immediately launched a search, with the WA Department of Health's chief health officer Andrew Robertson warning that an exposed person could expect to receive the equivalent of "about 10 X-rays an hour". Experts warned that, if the capsule were found, the public should stay at least 5 metres away. The capsule was found on 1 February 2023. 2023 Prachin Buri, Thailand A caesium-137 capsule went missing from a steam power plant in Prachin Buri province, Thailand, on 23 February 2023, triggering a search by officials from Thailand's Office of Atoms for Peace (OAP) and the Prachin Buri provincial administration. However, the Thai public was not notified until 14 March. On 20 March, the Secretary-General of the OAP and the governor of Prachin Buri held a press conference stating that they had found caesium-137-contaminated furnace dust at a steel melting plant in Kabin Buri district. 2024 Khabarovsk, Russia On Friday, 5 April 2024, an emergency regime was introduced in the Russian city of Khabarovsk after a local resident accidentally discovered that radiation levels had jumped sharply in one of the industrial areas of the city. According to volunteers of the dosimetric control group, a dosimeter at the site showed up to 800 microsieverts, about 1,600 times the safe value. Employees of the Ministry of Emergency Situations fenced off the area, where they found a capsule of caesium from a defectoscope. The find was placed in a protective container and taken away for disposal. This was first reported by Novaya Gazeta. See also Commonly used gamma-emitting isotopes References Bibliography External links NLM Hazardous Substances Databank – Cesium, Radioactive Cesium-137 dirty bombs by Theodore Liolios Isotopes of caesium Fission products Radioisotope fuels Radioactive contamination
Caesium-137
[ "Chemistry", "Technology" ]
3,537
[ "Isotopes of caesium", "Nuclear fission", "Radioactive contamination", "Isotopes", "Fission products", "Nuclear fallout", "Environmental impact of nuclear power" ]
3,093,466
https://en.wikipedia.org/wiki/Bit%20slicing
Bit slicing is a technique for constructing a processor from modules of processors of smaller bit width, for the purpose of increasing the word length; in theory, to make an arbitrary n-bit central processing unit (CPU). Each of these component modules processes one bit field or "slice" of an operand. The grouped processing components would then have the capability to process the chosen full word length of a given software design. Bit slicing more or less died out with the advent of the microprocessor. Recently it has been used in arithmetic logic units (ALUs) for quantum computers and as a software technique, e.g. for cryptography in x86 CPUs. Operational details Bit-slice processors (BSPs) usually include a 1-, 2-, 4-, 8- or 16-bit arithmetic logic unit (ALU) and control lines (including carry or overflow signals that are internal to the processor in non-bitsliced CPU designs). For example, two 4-bit ALU chips could be arranged side by side, with control lines between them, to form an 8-bit ALU (the result need not be a power of two, e.g. three 1-bit units can make a 3-bit ALU and thus a 3-bit CPU, although no 3-bit CPU, or any CPU with a higher odd number of bits, has been manufactured and sold in volume). Four 4-bit ALU chips could be used to build a 16-bit ALU. It would take eight chips to build a 32-bit word ALU. The designer could add as many slices as required to manipulate longer word lengths. A microsequencer or control ROM would be used to execute logic to provide data and control signals to regulate the function of the component ALUs. Known bit-slice microprocessors: 2-bit slice: Intel 3000 family (1974, now discontinued), e.g. Intel 3002 with Intel 3001, second-sourced by Signetics and Intersil Signetics 8X02 family (1977, now discontinued) 4-bit slice: National IMP family, consisting primarily of the IMP-00A/520 RALU (also known as MM5750) and various masked ROM microcode and control chips (CROMs, also known as MM5751) National GPC/P / IMP-4 (1973), second-sourced by Rockwell National IMP-8, an 8-bit processor based on the IMP chipset, using two RALU chips and one CROM chip National IMP-16, a 16-bit processor based on the IMP chipset, e.g. four RALU chips with one each IMP16A/521D and IMP16A/522D CROM chips (additional optional CROM chips could provide instruction set additions) AMD Am2900 family (1975), e.g. AM2901, AM2901A, AM2903 Monolithic Memories 5700/6700 family (1974) e.g. MMI 5701 / MMI 6701, second-sourced by ITT Semiconductors Texas Instruments SBP0400 (1975) and SBP0401, cascadable up to 16 bits Texas Instruments SN74181 (1970) Texas Instruments SN74S281 with SN74S282 Texas Instruments SN74S481 with SN74S482 (1976) Fairchild 33705 Fairchild 9400 (MACROLOGIC), 4700 Motorola M10800 family (1979), e.g. MC10800 Raytheon RP-16, a 16-bit processor consisting of seven integrated circuits, using four RALU chips and three CROM chips. 8-bit slice: Four-Phase Systems AL1 (1969, considered to be the first microprocessor used in a commercial product, now discontinued) Texas Instruments SN54AS888 / SN74AS888 Fairchild 100K ZMD (1978/1981), cascadable up to 32 bits 16-bit slice: AMD Am29100 family Synopsys 49C402 ZFT Robotron/ZFTM Dresden (1979/1982), unreleased Historical necessity Bit slicing, although not called that at the time, was also used in computers before large-scale integrated circuits (LSI, the predecessor to today's VLSI, or very-large-scale integration circuits). The first bit-sliced machine was EDSAC 2, built at the University of Cambridge Mathematical Laboratory in 1956–1958.
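The carry chain that joins slices into a wider ALU can be modelled in a few lines. A minimal sketch (the function names are illustrative, not the interface of any real part):

```python
def add4(a, b, carry_in=0):
    """One 4-bit adder slice: returns (4-bit sum, carry out)."""
    total = a + b + carry_in
    return total & 0xF, total >> 4

def add_sliced(a, b, slices=2):
    """Cascade 4-bit slices into a 4*slices-bit adder via the carry chain."""
    result, carry = 0, 0
    for i in range(slices):
        nibble, carry = add4((a >> 4*i) & 0xF, (b >> 4*i) & 0xF, carry)
        result |= nibble << 4*i
    return result, carry

assert add_sliced(0x7F, 0x01) == (0x80, 0)   # carry ripples between slices
```

Hardware slices work the same way: the carry-out pin of one chip feeds the carry-in pin of the next.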
Prior to the mid-1970s and late 1980s there was some debate over how much bus width was necessary in a given computer system to make it function. Silicon chip technology and parts were much more expensive than today. Using multiple simpler, and thus less expensive, ALUs was seen as a way to increase computing power in a cost-effective manner. While 32-bit microprocessors were being discussed at the time, few were in production. The UNIVAC 1100 series mainframes (one of the oldest series, originating in the 1950s) have a 36-bit architecture, and the 1100/60, introduced in 1979, used nine Motorola MC10800 4-bit ALU chips to implement the needed word width while using modern integrated circuits. At the time 16-bit processors were common but expensive, and 8-bit processors, such as the Z80, were widely used in the nascent home-computer market. Combining components to produce bit-slice products allowed engineers and students to create more powerful and complex computers at a more reasonable cost, using off-the-shelf components that could be custom-configured. The complexities of creating a new computer architecture were greatly reduced when the details of the ALU were already specified (and debugged). The main advantage was that bit slicing made it economically possible in smaller processors to use bipolar transistors, which switch much faster than NMOS or CMOS transistors. This allowed much higher clock rates, where speed was needed, for example for DSP functions or matrix transformations, or, as in the Xerox Alto, the combination of flexibility and speed, before discrete CPUs were able to deliver that. Modern use Software use on non-bit-slice hardware In more recent times, the term bit slicing was reused by Matthew Kwan to refer to the technique of using a general-purpose CPU to implement multiple parallel simple virtual machines using general logic instructions to perform single-instruction multiple-data (SIMD) operations. This technique is also known as SIMD within a register (SWAR). This was initially in reference to Eli Biham's 1997 article A Fast New DES Implementation in Software, which achieved significant gains in performance of DES by using this method. Bit-sliced quantum computers To simplify the circuit structure and reduce the hardware cost of quantum computers (proposed to run the MIPS32 instruction set), a 50 GHz superconducting 4-bit bit-slice arithmetic logic unit (ALU) for 32-bit rapid single-flux-quantum microprocessors was demonstrated. See also Bit-serial architecture References Further reading External links a bitslicing primer presenting a pedagogical bitsliced implementation of the Tiny Encryption Algorithm (TEA), a block cipher Digital electronics Central processing unit University of Cambridge Computer Laboratory Bit-slice chips
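In the software technique just described, one machine word holds one bit position from many independent data items, so a single bitwise instruction operates on all of them at once. A minimal sketch of the idea (illustrative only, not Biham's actual DES code):

```python
# Transpose up to 64 w-bit values into w "bit-plane" words, so that one
# 64-bit bitwise operation acts on all 64 values simultaneously (SWAR).
def bitslice(values, width=8):
    planes = [0] * width
    for lane, v in enumerate(values):
        for b in range(width):
            if (v >> b) & 1:
                planes[b] |= 1 << lane
    return planes

def unbitslice(planes, lanes=64):
    values = [0] * lanes
    for b, plane in enumerate(planes):
        for lane in range(lanes):
            if (plane >> lane) & 1:
                values[lane] |= 1 << b
    return values

# XOR a constant into 64 independent 8-bit states using 8 plane
# operations instead of 64 byte operations:
states = list(range(64))
key = 0x5A
planes = bitslice(states)
for b in range(8):
    if (key >> b) & 1:
        planes[b] ^= (1 << 64) - 1        # flip this bit in every lane
assert unbitslice(planes) == [s ^ key for s in states]
```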
Bit slicing
[ "Engineering" ]
1,528
[ "Electronic engineering", "Digital electronics" ]
3,093,634
https://en.wikipedia.org/wiki/Global%20Forecast%20System
The Global Forecast System (GFS) is a global numerical weather prediction system containing a global computer model and variational analysis run by the United States' National Weather Service (NWS). Operation The mathematical model is run four times a day and produces forecasts for up to 16 days in advance, but with decreased spatial resolution after 10 days. Forecast skill generally decreases with time (as with any numerical weather prediction model), and for longer-term forecasts only the larger scales retain significant accuracy. It is one of the predominant synoptic-scale medium-range models in general use. Principles The GFS model has a finite-volume cubed-sphere (FV3) dynamical core with an approximate horizontal resolution of 13 km for days 0–16. In the vertical, the model is divided into 127 layers and extends to the mesopause (roughly 80 km). It produces forecast output every hour for the first 120 hours, every three hours through day 10, and every 12 hours through day 16. The output from the GFS is also used to produce model output statistics. Variants In addition to the main model, the GFS is also the basis of a lower-resolution 30-member (31, counting the control and operational members) ensemble that runs concurrently with the operational GFS and is available on the same time scales. This ensemble is referred to as the Global Ensemble Forecast System (GEFS). The GFS ensemble is combined with the ensemble of Canada's Global Environmental Multiscale Model to form the North American Ensemble Forecast System (NAEFS). Usage As with most works of the U.S. government, GFS data is not copyrighted and is available for free in the public domain under provisions of U.S. law. Because of this, the model serves as the basis for the forecasts of numerous private, commercial, and foreign weather companies. Accuracy By 2015, the GFS model had fallen behind the accuracy of other global weather models. This was most notable in the GFS model incorrectly predicting that Hurricane Sandy would turn out to sea until four days before landfall, while the European Centre for Medium-Range Weather Forecasts' model correctly predicted landfall seven days in advance. Much of this was suggested to be due to limits in computational resources within the National Weather Service. In response, the NWS purchased new supercomputers, increasing processing power from 776 teraflops to 5.78 petaflops. As of the 12z run on 19 July 2017, the GFS model was upgraded. Unlike the recently upgraded ECMWF, the new GFS behaves somewhat differently in the tropics and in other regions compared to the previous version. This version accounts more accurately for variables such as the Madden–Julian oscillation and the Saharan Air Layer. In 2018, the processing power was increased again to 8.4 petaflops. The agency also tested a potential replacement model with different mechanics, the flow-following, finite-volume icosahedral model (FIM), in the early 2010s; it abandoned that model around 2016, after it did not show substantial improvement over the GFS. In 2019, the GFS received a major upgrade, converting it from the Global Spectral Model (GSM) to the new FV3 dynamical core. Horizontal and vertical resolution remained the same, but this set the foundation for what is now known as the UFS (Unified Forecast System).
On March 22, 2021, NOAA upgraded the GFS model, coupling it with the WaveWatch III global wave model. The upgrade increased the GFS's vertical resolution from 64 to 127 levels and extended the WaveWatch III forecasting window from 10 to 16 days. This left some meteorologists hopeful that the GFSv16 upgrade would be enough to close the accuracy gap with the ECMWF's model, which was considered to be the most accurate global weather model at the time. Upgraded dynamical core On June 12, 2019, after several years of testing, NOAA upgraded the GFS with a new dynamical core, the GFDL Finite-Volume Cubed-Sphere Dynamical Core (FV3), which uses the finite-volume method instead of the spectral method used by earlier versions of the GFS. The resulting model, initially developed under the name FV3GFS, inherited the GFS moniker, with the legacy GFS continuing to be run until September 2019. Initial testing of the FV3-based GFS showed promise, improving upon the large-scale prediction skill and hurricane track accuracy of the legacy GFS. Planned improvements With the initial operational implementation of FV3GFS accomplished, the global modeling focus of NOAA's Environmental Modeling Center (EMC) turned towards development of the next GFS (v16) upgrade, which included doubled vertical resolution (64 to 127 layers), more advanced physics, data assimilation system upgrades, and coupling to NCEP's Global Wave Model using the Unified Forecast System (UFS) community model. GFSv16 was implemented on March 22, 2021. On 23 September 2020, the first global UFS application at NCEP was implemented in the Global Ensemble Forecast System (GEFS v12). The components of this upgrade include: Use of the FV3 global model (same version as GFS v15) as the atmospheric component of GEFS Increase in horizontal resolution to ~25 km Forecast length increased from 10 to 16 days Increase from 21 to 31 members Coupling of the GEFS atmospheric component to the NCEP Global Wave model Running a 32nd member to 5 days (GEFS-Aero) for aerosol prediction, with inline aerosol representation based on GOCART (GSD-Chem) This implementation is the first global-scale coupled system at NCEP, and replaces the previous standalone Global Wave Ensemble and the NEMS GFS Aerosol Component (NGAC) systems. More details can be found at the EMC Model Evaluation Group's GEFS v12 web site, the EMC GEFS web page, and the EMC GEFS-Aerosol web page. See also Integrated Forecast System – The ECMWF's global weather forecasting system (the "European Model") Weather Research and Forecasting Model Numerical weather prediction Ensemble forecasting Rapid Refresh (weather prediction) North American Mesoscale Model Geostationary Operational Environmental Satellite NEXRAD – network of weather radars run by the National Weather Service References External links NCEP/EMC GFS Model Website NOAA GFS Model Information Website Weather prediction National Weather Service numerical models
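The output cadence described under Principles can be enumerated directly. A small sketch of the forecast lead times, in hours, implied by that schedule:

```python
# Hourly to 120 h, 3-hourly to day 10 (240 h), 12-hourly to day 16 (384 h).
lead_times = (list(range(0, 121))
              + list(range(123, 241, 3))
              + list(range(252, 385, 12)))
print(len(lead_times))                  # 173 output times per model cycle
print(lead_times[:3], lead_times[-3:])  # [0, 1, 2] ... [360, 372, 384]
```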
Global Forecast System
[ "Physics" ]
1,352
[ "Weather", "Weather prediction", "Physical phenomena" ]
3,093,672
https://en.wikipedia.org/wiki/Common%20beta%20emitters
Various radionuclides emit beta particles, high-speed electrons or positrons, through radioactive decay of their atomic nucleus. These can be used in a range of different industrial, scientific, and medical applications. This article lists some common beta-emitting radionuclides of technological importance, and their properties. Fission products Strontium Strontium-90 is a beta emitter commonly used in industrial sources. It decays to yttrium-90, which is itself a beta emitter. It is also used as a thermal power source in radioisotope thermoelectric generator (RTG) power packs. These use the heat produced by the radioactive decay of strontium-90, which can be converted to electricity using a thermocouple. Strontium-90 has a shorter half-life, produces less power, and requires more shielding than plutonium-238, but is cheaper, as it is a fission product that is present in high concentration in nuclear waste and can be chemically extracted relatively easily. Strontium-90-based RTGs have been used to power remote lighthouses. As strontium itself is water-soluble, the perovskite strontium titanate is usually employed, as it is not water-soluble and has a high melting point. Strontium-89 is a short-lived beta emitter which has been used as a treatment for bone tumors; it is used in palliative care in terminal cancer cases. Both strontium-89 and strontium-90 are fission products. Neutron activation products Tritium Tritium is a low-energy beta emitter commonly used as a radiotracer in research and in traser self-powered lights. The half-life of tritium is 12.3 years. The electrons from beta emission from tritium are so low in energy (average decay energy 5.7 keV) that a Geiger counter cannot be used to detect them. An advantage of the low energy of the decay is that it is easy to shield: the low-energy electrons penetrate only to shallow depths, reducing the safety issues in dealing with the isotope. Tritium can also be found in metalwork in the form of tritiated rust; this can be treated by heating the steel in a furnace to drive off the tritium-containing water. Tritium can be made by the neutron irradiation of lithium. Carbon Carbon-14 is also commonly used as a beta source in research, typically as a radiotracer in organic compounds. While the energy of its beta particles is higher than that of tritium's, it is still quite low; for instance, the walls of a glass bottle are able to absorb them. Carbon-14 is made by the (n,p) reaction of nitrogen-14 with neutrons. It is generated in the atmosphere by the action of cosmic rays on nitrogen. A large amount was also generated by the neutrons from the air bursts during nuclear weapons testing conducted in the 20th century. The specific activity of atmospheric carbon increased as a result of the nuclear testing, but due to the exchange of carbon between the air and other parts of the carbon cycle it has now returned to a very low value. For small amounts of carbon-14, one of the favoured disposal methods is to burn the waste in a medical incinerator; the idea is that by dispersing the radioactivity over a very wide area, the threat to any one human is very small. Phosphorus Phosphorus-32 is a short-lived, high-energy beta emitter which is used as a radiotracer in research. It has a half-life of 14 days. It can be used in DNA research. Phosphorus-32 can be made by the neutron irradiation ((n,p) reaction) of sulfur-32 or from phosphorus-31 by neutron capture.
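The half-lives quoted above (12.3 years for tritium, 14 days for phosphorus-32) translate directly into remaining-activity estimates through the standard exponential decay law, N(t)/N0 = 2^(−t/t½). A minimal sketch in Python, assuming only the figures already given in this article; the function and table names are illustrative.

```python
# Remaining fraction of a beta emitter after a given time, using the
# half-life form of the exponential decay law: N(t)/N0 = 2 ** (-t / t_half).
HALF_LIVES_YEARS = {                  # values as quoted in this article
    "tritium": 12.3,
    "phosphorus-32": 14.0 / 365.25,   # 14 days, expressed in years
}

def remaining_fraction(isotope: str, years: float) -> float:
    """Fraction of the original activity left after `years`."""
    t_half = HALF_LIVES_YEARS[isotope]
    return 2.0 ** (-years / t_half)

for name in HALF_LIVES_YEARS:
    # Tritium retains ~94.5% after one year; phosphorus-32 is essentially gone.
    print(f"{name}: {remaining_fraction(name, 1.0):.3%} left after 1 year")
```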
Nickel Nickel-63 is a radioisotope of nickel that can be used as an energy source in radioisotope piezoelectric generators. It has a half-life of 100.1 years. It can be created by irradiating nickel-62 with neutrons in a nuclear reactor. See also Commonly used gamma-emitting isotopes Betavoltaics References External links List of Pure Beta Emitters, (U. Wisconsin Madison) Nuclear physics Nuclear chemistry Radioactivity Isotopes Nuclear materials
Common beta emitters
[ "Physics", "Chemistry" ]
891
[ "Nuclear chemistry", "Isotopes", "Materials", "Nuclear materials", "Radioactivity", "Nuclear physics", "nan", "Matter" ]
3,093,747
https://en.wikipedia.org/wiki/Oxygen%20isotope%20ratio%20cycle
Oxygen isotope ratio cycles are cyclical variations in the ratio of the abundance of oxygen with an atomic mass of 18 to the abundance of oxygen with an atomic mass of 16 present in some substances, such as polar ice or calcite in ocean core samples, measured by way of isotope fractionation. The ratio is linked to ancient ocean temperature, which in turn reflects ancient climate. Cycles in the ratio mirror climate changes in the geological history of Earth. Isotopes of oxygen Oxygen (chemical symbol O) has three naturally occurring isotopes: ¹⁶O, ¹⁷O, and ¹⁸O, where the 16, 17 and 18 refer to the atomic mass. The most abundant is ¹⁶O, with a small percentage of ¹⁸O and an even smaller percentage of ¹⁷O. Oxygen isotope analysis considers only the ratio of ¹⁸O to ¹⁶O present in a sample. The calculated ratio of the masses of each present in the sample is then compared to a standard, which can yield information about the temperature at which the sample was formed - see Proxy (climate) for details. Connection between isotopes and temperature/weather ¹⁸O is two neutrons heavier than ¹⁶O and causes the water molecule in which it occurs to be heavier by that amount. The additional mass changes the hydrogen bonds so that more energy is required to vaporize H₂¹⁸O than H₂¹⁶O, and H₂¹⁸O liberates more energy when it condenses. In addition, H₂¹⁶O tends to diffuse more rapidly. Because H₂¹⁶O requires less energy to vaporize, and is more likely to diffuse to the liquid phase, the first water vapor formed during evaporation of liquid water is enriched in H₂¹⁶O, and the residual liquid is enriched in H₂¹⁸O. When water vapor condenses into liquid, H₂¹⁸O preferentially enters the liquid, while H₂¹⁶O is concentrated in the remaining vapor. As an air mass moves from a warm region to a cold region, water vapor condenses and is removed as precipitation. The precipitation removes H₂¹⁸O, leaving progressively more H₂¹⁶O-rich water vapor. This distillation process causes precipitation to have lower ¹⁸O/¹⁶O as the temperature decreases. Additional factors can affect the efficiency of the distillation, such as the direct precipitation of ice crystals, rather than liquid water, at low temperatures. Due to the intense precipitation that occurs in hurricanes, the H₂¹⁸O is exhausted relative to the H₂¹⁶O, resulting in relatively low ¹⁸O/¹⁶O ratios. The subsequent uptake of hurricane rainfall in trees creates a record of the passing of hurricanes that can be used to create a historical record in the absence of human records. In laboratories, the temperature, humidity, ventilation and so on affect the accuracy of oxygen isotope measurements. Solid samples (organic and inorganic) for oxygen isotope measurements are usually stored in silver cups and measured with pyrolysis and mass spectrometry. Researchers need to avoid improper or prolonged storage of the samples for accurate measurements. Connection between temperature and climate The ¹⁸O/¹⁶O ratio provides a record of ancient water temperature. Water 10 to 15 °C (18 to 27 °F) cooler than present represents glaciation. As colder temperatures spread toward the equator, water vapor rich in ¹⁸O preferentially rains out at lower latitudes. The remaining water vapor that condenses over higher latitudes is subsequently rich in ¹⁶O. Precipitation and therefore glacial ice contain water with a low ¹⁸O content. Since large amounts of ¹⁶O water are being stored as glacial ice, the ¹⁸O content of oceanic water is high.
Water up to 5 °C (9 °F) warmer than today represents an interglacial, when the ¹⁸O content of oceanic water is lower. A plot of ancient water temperature over time indicates that climate has varied cyclically, with large cycles and harmonics, or smaller cycles, superimposed on the large ones. This technique has been especially valuable for identifying glacial maxima and minima in the Pleistocene. Connection between calcite and water Limestone is deposited from the calcite shells of microorganisms. Calcite, or calcium carbonate, chemical formula CaCO₃, is formed from water, H₂O, and carbon dioxide, CO₂, dissolved in the water. The carbon dioxide provides two of the oxygen atoms in the calcite. The calcium must rob the third from the water. The isotope ratio in the calcite is therefore the same, after compensation, as the ratio in the water from which the microorganisms of a given layer extracted the material of the shell. A higher abundance of ¹⁸O in calcite is indicative of colder water temperatures, since the lighter isotopes are all stored in the glacial ice. The microorganism most frequently referenced for identifying marine isotope stages is foraminifera. Research Earth's dynamic oxygenation evolution is recorded in ancient sediments from the Republic of Gabon dating from between about 2,150 and 2,080 million years ago. These fluctuations in oxygenation were likely driven by the Lomagundi carbon isotope excursion. See also δ¹⁸O Isotope fractionation References Encyclopædia Britannica under Climate and Weather, Pleistocene Climatic Change External links NASA Earth Observatory: The Oxygen Balance Scripps O2 Global Oxygen Measurements Paleoclimatology Geochronological dating methods Oxygen Isotope excursions
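The comparison of a sample's ratio to a standard, described above, is conventionally reported in the delta notation listed under See also. A sketch of the standard definition, assuming a reference standard such as VSMOW:

```latex
\delta^{18}\mathrm{O} =
\left(
  \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}
       {\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000
```

with the result expressed in parts per thousand (per mil, ‰). Under this convention, the glacial enrichment of ocean water in ¹⁸O described above appears as more positive δ¹⁸O values in marine calcite.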
Oxygen isotope ratio cycle
[ "Chemistry" ]
1,096
[ "Isotope excursions", "Isotopes" ]
3,093,902
https://en.wikipedia.org/wiki/Dolores%20Project
The Dolores Project, located in the Dolores and San Juan River basins in southwestern Colorado, uses water from the Dolores River for irrigation, municipal and industrial use, recreation, fish and wildlife, and production of hydroelectric power. It also provides flood control and aids in economic redevelopment. The primary storage of Dolores River flows for all project purposes is provided by the McPhee Reservoir. Service is provided to the northwest Dove Creek area, central Montezuma Valley area, and south to the Towaoc area on the Ute Mountain Ute Indian Reservation. Irrigation water is available for . External links U.S. Department of the Interior, Bureau of Reclamation United States Bureau of Reclamation Colorado River Storage Project
Dolores Project
[ "Engineering" ]
140
[ "Colorado River Storage Project" ]
7,349,608
https://en.wikipedia.org/wiki/Dennis%20Sullivan
Dennis Parnell Sullivan (born February 12, 1941) is an American mathematician known for his work in algebraic topology, geometric topology, and dynamical systems. He holds the Albert Einstein Chair at the Graduate Center of the City University of New York and is a distinguished professor at Stony Brook University. Sullivan was awarded the Wolf Prize in Mathematics in 2010 and the Abel Prize in 2022. Early life and education Sullivan was born in Port Huron, Michigan, on February 12, 1941. His family moved to Houston soon afterwards. He entered Rice University to study chemical engineering but switched his major to mathematics in his second year after encountering a particularly motivating mathematical theorem. The change was prompted by a special case of the uniformization theorem, according to which, in his own words: He received his Bachelor of Arts degree from Rice University in 1963. He obtained his Doctor of Philosophy from Princeton University in 1966 with his thesis, Triangulating homotopy equivalences, under the supervision of William Browder. Career Sullivan worked at the University of Warwick on a NATO Fellowship from 1966 to 1967. He was a Miller Research Fellow at the University of California, Berkeley from 1967 to 1969 and then a Sloan Fellow at Massachusetts Institute of Technology from 1969 to 1973. He was a visiting scholar at the Institute for Advanced Study in 1967–1968, 1968–1970, and again in 1975. Sullivan was an associate professor at Paris-Sud University from 1973 to 1974, and then became a permanent professor at the Institut des Hautes Études Scientifiques (IHÉS) in 1974. In 1981, he became the Albert Einstein Chair in Science (Mathematics) at the Graduate Center of the City University of New York and reduced his duties at the IHÉS to a half-time appointment. He joined the mathematics faculty at Stony Brook University in 1996 and left the IHÉS the following year. Sullivan was involved in the founding of the Simons Center for Geometry and Physics and is a member of its board of trustees. Research Topology Geometric topology Along with Browder and his other students, Sullivan was an early adopter of surgery theory, particularly for classifying high-dimensional manifolds. His thesis work was focused on the Hauptvermutung. In an influential set of notes in 1970, Sullivan put forward the radical concept that, within homotopy theory, spaces could directly "be broken into boxes" (or localized), a procedure hitherto applied to the algebraic constructs made from them. The Sullivan conjecture, proved in its original form by Haynes Miller, states that the classifying space BG of a finite group G is sufficiently different from any finite CW complex X, that it maps to such an X only 'with difficulty'; in a more formal statement, the space of all mappings BG to X, as pointed spaces and given the compact-open topology, is weakly contractible. Sullivan's conjecture was also first presented in his 1970 notes. Sullivan and Daniel Quillen (independently) created rational homotopy theory in the late 1960s and 1970s. It examines "rationalizations" of simply connected topological spaces with homotopy groups and singular homology groups tensored with the rational numbers, ignoring torsion elements and simplifying certain calculations. Kleinian groups Sullivan and William Thurston generalized Lipman Bers' density conjecture from singly degenerate Kleinian surface groups to all finitely generated Kleinian groups in the late 1970s and early 1980s. 
The conjecture states that every finitely generated Kleinian group is an algebraic limit of geometrically finite Kleinian groups, and was independently proven by Ohshika and Namazi–Souto in 2011 and 2012 respectively. Conformal and quasiconformal mappings The Connes–Donaldson–Sullivan–Teleman index theorem is an extension of the Atiyah–Singer index theorem to quasiconformal manifolds due to a joint paper by Simon Donaldson and Sullivan in 1989 and a joint paper by Alain Connes, Sullivan, and Nicolae Teleman in 1994. In 1987, Sullivan and Burton Rodin proved Thurston's conjecture about the approximation of the Riemann map by circle packings. String topology Sullivan and Moira Chas started the field of string topology, which examines algebraic structures on the homology of free loop spaces. They developed the Chas–Sullivan product to give a partial singular homology analogue of the cup product from singular cohomology. String topology has been used in multiple proposals to construct topological quantum field theories in mathematical physics. Dynamical systems In 1975, Sullivan and Bill Parry introduced the topological Parry–Sullivan invariant for flows in one-dimensional dynamical systems. In 1985, Sullivan proved the no-wandering-domain theorem. This result was described by mathematician Anthony Philips as leading to a "revival of holomorphic dynamics after 60 years of stagnation." Awards and honors 1971 Oswald Veblen Prize in Geometry 1981 Prix Élie Cartan, French Academy of Sciences 1983 Member, National Academy of Sciences 1991 Member, American Academy of Arts and Sciences 1994 King Faisal International Prize for Science 2004 National Medal of Science 2006 Steele Prize for lifetime achievement 2010 Wolf Prize in Mathematics, for "his contributions to algebraic topology and conformal dynamics" 2012 Fellow of the American Mathematical Society 2014 Balzan Prize in Mathematics (pure or applied) 2022 Abel Prize Personal life Sullivan is married to fellow mathematician Moira Chas. See also Assembly map Double bubble conjecture Flexible polyhedron Formal manifold Loch Ness monster surface Normal invariant Ring lemma Rummler–Sullivan theorem Ruziewicz problem References External links Sullivan's homepage at the City University of New York Sullivan's homepage at Stony Brook University Dennis Sullivan International Balzan Prize Foundation 1941 births Living people 20th-century American mathematicians 21st-century American mathematicians Abel Prize laureates Dynamical systems theorists CUNY Graduate Center faculty Fellows of the American Mathematical Society Homotopy theory Mathematicians from Michigan Members of the United States National Academy of Sciences National Medal of Science laureates Princeton University alumni Recipients of the Great Cross of the National Order of Scientific Merit (Brazil) Rice University alumni Stony Brook University faculty American topologists Wolf Prize in Mathematics laureates
Dennis Sullivan
[ "Mathematics" ]
1,239
[ "Dynamical systems theorists", "Dynamical systems" ]
7,349,689
https://en.wikipedia.org/wiki/Satellite%20Catalog%20Number
The Satellite Catalog Number (SATCAT), also known as NORAD Catalog Number, NORAD ID, or USSPACECOM object number, is a sequential nine-digit number assigned by the United States Space Command (USSPACECOM), and previously the North American Aerospace Defense Command (NORAD), in the order of launch or discovery to all artificial objects in the orbits of Earth and those that left Earth's orbit. For example, catalog number 1 is the Sputnik 1 launch vehicle, with the Sputnik 1 satellite having been assigned catalog number 2. Objects that fail to orbit or orbit for a short time are not catalogued. The minimum object size in the catalog is in diameter. The catalog listed 58,010 objects, including 16,645 satellites that had been launched into orbit since 1957, of which 8,936 were still active. 25,717 of the objects were well tracked, while 2,055 were lost. In addition, USSPACECOM was also tracking 16,600 analyst objects. Analyst objects are variably tracked and in constant flux, so their catalog and element set data are not published. ESA estimated there were about 36,500 pieces of orbiting debris large enough for USSPACECOM to track. Space Command shares the catalog via space-track.org, which is maintained by the 18th Space Defense Squadron (18 SDS). History Initially, the catalog was maintained by NORAD. From 1985 onwards, USSPACECOM was tasked to detect, track, identify, and maintain a catalog of all human-made objects in Earth orbit. In 2002, USSPACECOM was disestablished and merged with the United States Strategic Command (USSTRATCOM). However, USSPACECOM was reestablished in 2019. Before 2020, the catalog number was limited to five digits due to the TLE format limitation. In 2020, Space-Track started to provide data in CCSDS OMM (Orbit Mean-Elements Message) format, which increased the maximum catalog number to 999,999,999. See also International Designator, also known as a COSPAR ID Space debris Two-line element set (TLE) United States Space Surveillance Network References External links The catalog: Space-Track.org CelesTrak Satellite Catalog (a partial copy of Space-Track.org catalog) Identifiers Satellites United States Strategic Command
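Because the legacy TLE format noted above reserves only five characters for the catalog number, extracting it is a fixed-column parse. A minimal sketch in Python; the helper function is hypothetical, not part of the Space-Track or CelesTrak interfaces, and the sample line uses the well-known ISS catalog number 25544 with otherwise illustrative field values.

```python
def catalog_number_from_tle(line1: str) -> int:
    """Return the satellite catalog number from TLE line 1.
    In the legacy TLE format the number occupies columns 3-7
    (0-indexed slice 2:7), which is what caps it at 99,999."""
    if not line1.startswith("1 "):
        raise ValueError("expected TLE line 1")
    return int(line1[2:7])

# TLE line 1 for the ISS; the epoch and drag fields here are
# format-correct but their values are only illustrative.
sample = "1 25544U 98067A   20029.54791667  .00016717  00000-0  10270-3 0  9000"
print(catalog_number_from_tle(sample))  # -> 25544
```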
Satellite Catalog Number
[ "Astronomy" ]
482
[ "Satellites", "Outer space" ]
7,350,394
https://en.wikipedia.org/wiki/Electroneutral%20cation-Cl
In molecular biology, the electroneutral cation-Cl (electroneutral potassium chloride cotransporter) family of proteins is a family of solute carrier proteins. This family includes the products of the human genes SLC12A1, SLC12A2, SLC12A3, SLC12A4, SLC12A5, SLC12A6, SLC12A7, SLC12A8 and SLC12A9. The K-Cl co-transporter (KCC) mediates the coupled movement of K+ and Cl− ions across the plasma membrane of many animal cells. This transport is involved in the regulatory volume decrease in response to cell swelling in red blood cells, and has been proposed to play a role in the vectorial movement of Cl− across kidney epithelia. The transport process involves one-for-one electroneutral movement of K+ together with Cl−, and, in all known mammalian cells, the net movement is outward. The neuronal KCC subtype KCC2 is cell-volume insensitive and plays a unique role in maintaining a low intracellular Cl− concentration, which is required in neurones for the functioning of Cl−-dependent fast synaptic inhibition, mediated by certain neurotransmitters, such as gamma-aminobutyric acid (GABA) and glycine. Three isoforms of the K-Cl co-transporter have been described, termed KCC1 (SLC12A4), KCC2 (SLC12A5), and KCC3 (SLC12A6), containing 1085, 1116 and 1150 amino acids, respectively. They are predicted to have 12 transmembrane (TM) regions in a central hydrophobic domain, together with hydrophilic N- and C-termini that are likely cytoplasmic. Comparison of their sequences with those of other ion-transporting membrane proteins reveals that they are part of a new superfamily of cation-chloride co-transporters, which includes the Na-Cl and Na-K-2Cl co-transporters. KCC1 and KCC3 are widely expressed in human tissues, while KCC2 is expressed only in brain neurons, making it likely that this is the isoform responsible for maintaining low Cl− concentration in neurons. A study in the model organism C. elegans found that the KCC3 ortholog functions in glial cells to regulate animal behavior. KCC1 is widely expressed in human tissues, and, when heterologously expressed, possesses the functional characteristics of the well-studied red blood cell K-Cl co-transporter, including stimulation by both swelling and N-ethylmaleimide. Several splice variants have also been identified. KCC3 is widely expressed in human tissues and, like KCC1, is stimulated by both swelling and N-ethylmaleimide. The induction of KCC3 is up-regulated by vascular endothelial growth factor and down-regulated by tumour necrosis factor. Defects in KCC3 are linked to agenesis of the corpus callosum with peripheral neuropathy. This disorder is characterised by severe progressive sensorimotor neuropathy, mental retardation, dysmorphic features and complete or partial agenesis of the corpus callosum. References Protein families
Electroneutral cation-Cl
[ "Biology" ]
709
[ "Protein families", "Protein classification" ]
7,350,848
https://en.wikipedia.org/wiki/Proximodistal%20trend
The proximodistal trend is the tendency for more general functions of limbs to develop before more specific or fine motor skills. It comes from the Latin roots proxim-, meaning "close", and dis-, meaning "away from", because the trend essentially describes a path from the center outward. References Motor skills Medical terminology
Proximodistal trend
[ "Biology" ]
73
[ "Behavior", "Motor skills", "Motor control" ]
7,350,907
https://en.wikipedia.org/wiki/Spare%20Rib
Spare Rib was a second-wave feminist magazine, founded in 1972 in the United Kingdom, that emerged from the counterculture of the late 1960s as a consequence of meetings involving, among others, Rosie Boycott and Marsha Rowe. Spare Rib is now recognised as having shaped debates about feminism in the UK, and as such it was digitised by the British Library in 2015. The magazine contained new writing and creative contributions that challenged stereotypes and supported collective solutions that related to feminist issues. It was published between 1972 and 1993. The title derives from the Biblical reference to Eve, the first woman, created from Adam's rib. History The first issue of Spare Rib was published in London in June 1972. It was distributed by Seymour Press to big chains including W. H. Smith & Son and Menzies, although the former refused to stock issue 13, due to the use of an expletive on the issue's back cover. Selling at first around 20,000 copies a month, it was circulated more widely through women's groups and networks. From 1976, Spare Rib was distributed by Publications Distribution Cooperative to a network of radical and alternative bookshops. The magazine's purpose, as described in its editorial, was to investigate and present alternatives to the traditional gendered roles of virgin, wife, or mother. The name Spare Rib started as a joke referring to biblical Eve being fashioned out of Adam's rib, implying that a woman had no independence from the beginning of time. The Spare Rib manifesto stated: Early articles were linked closely with left-leaning political theories of the time, especially anti-capitalism and the exploitation of women as consumers through fashion. As the women's movement evolved during the 1970s, the magazine became a forum for debate among members of the different streams that emerged within the movement, such as socialist feminism, radical feminism, lesbian feminism, liberal feminism, and black feminism. Spare Rib included contributions from well-known international feminist writers, activists, and theorists, as well as stories about ordinary women in their own words. Subjects included the "liberating orgasm", "kitchen sink racism", anorexia, and female genital mutilation. The magazine reflected debates about how best to tackle issues such as sexuality and racism. Due to falling subscriptions and low advertising revenue, Spare Rib ceased publication in 1993. Editors Spare Rib became a collective by the end of 1973. The collective editorial policy was to "collectively decide on articles that they publish, and work closely with the contributors. Accept articles from men only when there is no other resource available." Drew Howie was a consultant from 1990 to 1993. Design According to Marsha Rowe, one of the original magazine designers, the "look" of Spare Rib was born out of necessity: it had to look like a women's magazine, yet with contents that did not reflect conformist stereotyping of women. Spare Rib covers were often controversial. The design had to be both stable and flexible to allow for future change while retaining the basic identity. Integral to every decision was cost. Finding non-sexist advertising in accordance with the values of the magazine was another challenge. Legacy Scholar Laurel Foster wrote in 2022, for the 50th anniversary of Spare Rib's first issue: "The self-expression and persuasive writing of the pioneering magazine have their legacy in feminist media today. [...]
Because of its standing in feminist history, Spare Rib has become a touchstone for later feminist magazines." In their 2017 book Re-reading Spare Rib, Angela Smith and Sheila Quaid wrote that Spare Rib played a key role in the development of second-wave feminist thought and its spread into the collective consciousness. It was reported by The Guardian in April 2013 that the magazine was due to be relaunched, with journalist Charlotte Raven at the helm. It was subsequently announced that while a magazine and website were to be launched, they would have a different name. In May 2015, the British Library put its complete archive of Spare Rib online. The project was led by Polly Russell, the curator behind an oral history of the women's liberation movement. The archive was presented with new views on the subject matter and themes, curated by expert commentators. The British Library website describes the value of Spare Rib for current readers and researchers. In February 2019, the British Library announced a possible suspension of access to the archive in the event of a no-deal Brexit, due to problems relating to copyright. It was announced in December 2020 that access would be withdrawn at the end of the transition period. References Sources Spare Rib collection at the LSE Women's Library History of Spare Rib from the Bristol University History Department Interview with Marsha Rowe. 31 January 2008. Retrieved June 2008. Spare Rib by Hazel K. Bell. The National Housewives Register's Newsletter no. 19, Autumn 1975, pp. 10–11. Retrieved June 2008. Further reading External links British Library archive The Spare Rib Manifesto at the British Library archive Full, free-to-access, online archive hosted by the JISC Journal Archive The Reunion. Marsha Rowe, Rosie Boycott, Angela Phillips, Marion Fudger and Anna Raeburn with Sue MacGregor. BBC Radio 4, September 2013. Photo of Marsha Rowe and Rosie Boycott at the magazine's offices, 19 June 1972. 1972 establishments in the United Kingdom 1993 disestablishments in the United Kingdom Anti-capitalism British design Monthly magazines published in the United Kingdom Defunct political magazines published in the United Kingdom Defunct feminist magazines published in the United Kingdom Design magazines Magazines established in 1972 Magazines disestablished in 1993 Magazines published in London Second-wave feminism
Spare Rib
[ "Engineering" ]
1,141
[ "Design magazines", "Design" ]
7,351,897
https://en.wikipedia.org/wiki/Monica%20Rappaccini
Monica Rappaccini is a supervillain appearing in American comic books published by Marvel Comics. Created by Fred Van Lente and Leonard Kirk, the character first appeared in Amazing Fantasy vol. 2 #7 (2005). Monica Rappaccini is a genius-level biochemist and the Scientist Supreme of the supervillain organization A.I.M. Publication history Monica Rappaccini debuted in Amazing Fantasy vol. 2 #7 (2005), created by Fred Van Lente and Leonard Kirk. She appeared in the 2007 Super-Villain Team-Up MODOK's 11 series. She appeared in the 2017 The Unstoppable Wasp series. She appeared in the 2020 Ravencroft series. Fictional character biography While enrolled as a biochemistry student at the University of Padua, Monica Rappaccini went to New Mexico's Desert State University to study, and there shared a brief relationship with physics student Bruce Banner. She used their relationship to exploit Banner's radiation expertise for her own research. Upon attaining her doctorate, Rappaccini quickly became a world-renowned innovator of antitoxins and antidotes for various environmental poisons and nearly won a Nobel Prize. Recognizing the many environmental and political failings of Western civilization, Rappaccini decided that it was too corrupt to exist. She joined a series of terrorist organizations, such as the pan-European leftist group the Black Orchestra, and then Advanced Idea Mechanics (A.I.M.), where she had a brief relationship with fellow agent George Tarleton. Making poisons instead of curing them, Rappaccini used her expertise with toxins to rise quickly through A.I.M.'s ranks. She implanted her own daughter and several other newborn children of A.I.M. members with memetic antibodies. She released them into the world as A.I.M. Waker agents with no knowledge of their heritage. They were programmed to travel instinctively to the nearest A.I.M. biohaven when their antibodies activated at age 16. Her daughter was raised in Vermont by undercover A.I.M. agents as Carmilla Black. Monica Rappaccini went underground for nearly two decades and studied potential power sources such as the sentient Uni-Power. She orchestrated attacks on capitalism, such as the dioxin-based gas attack on Hong Kong. When the A.I.M. Scientist Supreme was slain by renegade A.I.M. creation MODOK, Rappaccini became head of a splinter faction of A.I.M. that remained independent of MODOK's control. Following his numerous defeats, Rappaccini's splinter group absorbed more cells into a sizable rival faction. She was made Scientist Supreme of this "true" version of A.I.M. She rarely did field work as A.I.M.'s leader, preferring to act through agents and proxies. When she led an A.I.M. attack on the United States Army Medical Research Institute for Infectious Diseases, it was thwarted by her estranged 19-year-old daughter, who had since joined forces with S.H.I.E.L.D. and become the costumed superheroine the Scorpion II. Monica Rappaccini eluded capture and soon attempted to harness the malfunctioning Uni-Power, but her plans were thwarted by the Scorpion II and several superheroes who bonded with the Uni-Power. Her A.I.M. faction was involved in an A.I.M. civil war against MODOK's faction that drew in several of the Marvel superheroes, prominent among them Ms. Marvel and the Hulk. Following Ms. Marvel's thwarting of a plan to turn MODOK into a bomb, Rappaccini reunited the organization under her control. She infiltrated the supervillain group MODOK's 11 with A.I.M.'s new robot, the Ultra-Adaptoid, which was impersonating the Chameleon.
She attempted to prevent MODOK from obtaining a weapon called the Hypernova and using it to erase all life on Earth. She had a stated aim of stopping A.I.M. from creating "inventions that turn around and try to destroy us." In the end, MODOK gained the Hypernova, and Monica gave him $1 billion in exchange for it; unknown to her, this had been MODOK's plan all along, as he had already worked out that the Hypernova would grow unstable and explode anyway. A.I.M.'s base was destroyed in the explosion and MODOK believed Monica was dead. During the "Dark Reign" storyline, it is revealed that Rappaccini survived and came into conflict with Mockingbird and Ronin. She also hired Deadpool to retrieve a batch of baby M.O.D.O.C.s enhanced to warp reality from H.A.M.M.E.R. headquarters. After a failed attempt to persuade Hank Pym to join A.I.M., she and her followers were stranded on Earth-Charnel when the Wasp deactivated her facility's dimensional screen. Monica and A.I.M. later sided with Norman Osborn after he escaped from prison and reformed H.A.M.M.E.R. Following Osborn's defeat, she and A.I.M. end up retreating. During the Avengers vs. X-Men storyline, Noh-Varr located a secret A.I.M. base where Rappaccini and the A.I.M. agents that escaped following Osborn's defeat were hiding out. The Avengers raided the base and arrested Monica and the other A.I.M. members that were present at the time. She then escaped from prison and fought the new Wasp. Monica Rappaccini was later seen as a member of J.A.N.U.S. During the "Stark-Roxxon War" storyline, Monica leads A.I.M. in collaborating with Roxxon to pursue a merger with Stark Industries. When Iron Man arrived at A.I.M.'s facility in Caspen, Colorado and fought the second Force, Monica sent her scheduler out to break up the fight and invite Iron Man to a luncheon. While trying to get Stark to allow the merger to happen, Monica states that she did not sanction the revived Justine Hammer's attack on him and rigging of his armors. In addition, she has also enlisted the services of Doctor Druid, who states to Tony that he is now working for them by choice. Monica orders Doctor Druid to do something "scienceless" to Iron Man, who is subjected to illusions of being called a failure by Howard Stark, Captain America, and Emma Frost. She also allowed Force to delay Iron Man when he recovered so that the merger could happen. Monica Rappaccini was present at the board meeting to vote on the merger. After the fight between Iron Man and Justine Hammer's Iron Monger form, Doctor Druid brought them, Monica, and the board members to meet Belasco in Limbo. Because of Belasco's involvement, the merger deal fell through after Iron Man persuaded the board members to change their votes in exchange for Belasco not claiming their souls. Powers and abilities Monica Rappaccini is an expert in robotics, chemistry, physics, engineering, biotoxins, and biochemistry. Her inventions include the enhanced lymphatic system of the A.I.M. Waker agents, which granted them total immunity to all biological, chemical and radiological weapons; memetic antibodies; synthetic microbes that attack the human psyche and trigger pre-coded memories and impulses; hallucinogenic drugs that deliver programmed hallucinations before being absorbed into the system; and many innovative weapons of mass destruction, from gas attacks to nanobacterial bombs. Her A.I.M. uniform belt contains a phasing device that allows her to teleport.
She keeps many different devices at hand, varying with her situation and opponent. When facing a captured Hank Pym, she boasts that she keeps 157 methods of containing him on hand. Reception Melody MacReady of Screen Rant called Monica Rappaccini one of the "most ruthless villains of the Marvel Universe," writing, "Monica is manipulative, prejudiced, murderous, and proud of what she does; this was best shown when she was one of the best villains in the game Marvel's Avengers." Other versions House of M An alternate version of Monica Rappaccini appears in the "House of M" storyline. She worked alongside the Scorpion II and the Hulk to overthrow Governor Exodus' fascist mutant government in Australia when the Scarlet Witch reality-warped Earth into a mutant-dominated society. When the Hulk became Australia's new leader, her secret plan to create a cybernetic army to overthrow Earth's mutant rulers was exposed. Denying any knowledge of the army, she promised to restore the cyborgs' humanity. When this warped reality was undone, the disoriented Rappaccini found herself stranded in Australia alongside Bruce Banner. Evading the Scorpion II's attempt to arrest her, Rappaccini returned to A.I.M. Death's Head 3.0 (Earth-6216) An alternate version of Monica Rappaccini appears in the alternate future timeline of Death's Head 3.0. She created the Uni-Alias, an artificial variant of Captain Universe's Uni-Power. Decades later, her granddaughter Varina Goddard, a Senior Scientist in the future A.I.M., used the Uni-Alias as a power source for the Death's Head robot in her attempt to assassinate the United Nations Secretary General. Ant-Man: Natural Enemy An alternate version of Monica Rappaccini appears in Ant-Man: Natural Enemy. She captures the shrunken Scott Lang, intending to make him her pet, but later attempts to kill Lang by flushing him down the toilet. It is revealed that Rappaccini murdered animals when she was a child, especially ants. In other media Television Monica Rappaccini / Scientist Supreme appears in Spider-Man, voiced by Grey DeLisle. This version oversees the organization's front at the Bilderberg Academy boarding school by posing as its headmistress. Monica Rappaccini / Scientist Supreme appears in M.O.D.O.K., voiced by Wendi McLendon-Covey. This version is an A.I.M. scientist and work rival of the titular character. Additionally, she has a teenage daughter named Carmilla, who was the result of Monica creating a male clone named "Manica" and having him inseminate her. Introduced in the episode "If Bureaucracy Be Thy Death!", it is revealed that she once greatly admired MODOK and applied to A.I.M. so that they could work together, but she developed a hatred for him after MODOK took credit for her killing a major yet unnamed member of the Avengers. Complicating this, however, she later realizes that MODOK supports her endeavors and put her in a higher position so she can continue her work. After A.I.M. goes bankrupt and is bought out by GRUMBL, the latter promotes Monica to Scientist Supreme, but limits her work. By the end of the series, MODOK convinces her to leave A.I.M., though she decides to continue working for MODOK at his new company, A-I-M-2. Video games Monica Rappaccini / Scientist Supreme appears in Marvel Powers United VR, voiced by Jennifer Hale. Monica Rappaccini / Scientist Supreme appears in Marvel Strike Force. Monica Rappaccini / Scientist Supreme appears in Marvel's Avengers, voiced by Jolene Andersen. This version serves as a senior executive of A.I.M. who assists Dr.
George Tarleton in his efforts to control the growing Inhuman population and cares for him after he is mutated due to exposure to a Terrigen crystal. Upon discovering her injections were derived from Captain America's blood and accelerated his mutation instead, Tarleton injects Rappaccini with it and leaves her for dead. In a mid-credits scene, however, Rappaccini is revealed to have survived after transplanting an Inhuman's duplication ability to herself off-screen. Following Tarleton's defeat at the hands of the Avengers, she takes over A.I.M. as Scientist Supreme and meets with the organization's board of directors, vowing to renew A.I.M. experiments and develop new technology. In the DLC expansions "Taking A.I.M.", "Future Imperfect", "Cosmic Cube", "War for Wakanda", and "No Rest for the Wicked", she leads A.I.M. in building a time gate to work with Nick Fury, Hawkeye, and her future self to avert a Kree invasion. She creates the Cosmic Cube to stop the aliens, but it freezes her and everyone around her in time while the rest of the world falls into chaos. Meanwhile, a clone of the present Rappaccini continues working on the Cosmic Cube until the Avengers and Hawkeye's future self intervene to stop her from destroying reality, with the latter sacrificing himself and killing Rappaccini to do so. Despite this, another clone of Rappaccini hires Ulysses Klaue and Crossbones to help her invade Wakanda for its Vibranium and leading scientists. However, Klaue kills most of the scientists in pursuit of his own goals, leading to Rappaccini cutting ties with Klaue and leading A.I.M. in a separate attack on Wakanda. Due to the Avengers' work in dismantling A.I.M., the desperate Rappaccini revives Tarleton to preserve the organization, but he kidnaps her instead. References External links Characters created by Fred Van Lente Comics characters introduced in 2005 Fictional biochemists Fictional mad scientists Fictional private military members Fictional toxicologists Marvel Comics female supervillains Marvel Comics scientists
Monica Rappaccini
[ "Chemistry" ]
2,906
[ "Fictional biochemists", "Biochemists" ]
7,352,737
https://en.wikipedia.org/wiki/Mucophagy
Mucophagy (literally "mucus feeding") is defined as the act of feeding on the mucus of fishes or invertebrates. It may also refer to the consumption of mucus or dried mucus in primates. There are mucophagous parasites, such as some types of sea lice that attach themselves to the gill segments of fish. Some mucophages also serve as cleaners of other animals, usually fishes. Another usage of this term is in reference to a feeding organ rich in mucous cells into which water is pumped; feeding particles become entrapped in the mucus, which then proceeds into the esophagus. See also Nose picking References Carnivory
Mucophagy
[ "Biology" ]
146
[ "Behavior", "Ethology stubs", "Eating behaviors", "Carnivory", "Ethology" ]
7,352,745
https://en.wikipedia.org/wiki/Cybertext
Cybertext, as defined by Espen Aarseth in 1997, is a type of ergodic literature where the user traverses the text by doing nontrivial work. Definition Cybertexts are pieces of literature where the medium matters. Each user obtains a different outcome based on the choices they make. According to Aarseth, "information is here understood as a string of signs, which may (but does not have to) make sense to a given observer." Cybertexts may be equated to the transition between a linear piece of literature, such as a novel, and a game. In a novel, the reader has no choice; the plot and the characters are all chosen by the author: there is no 'user', just a 'reader'. This is important because it entails that the person working their way through the novel is not an active participant. Cybertext is based on the idea that getting to the message is just as important as the message itself. In order to obtain the message, work on the part of the user is required. This may also be referred to as nontrivial work on the part of the user. This means that the reader does not merely interpret the text but performs actions such as active choice and decision-making through navigation options. There is also a feedback loop between the reader and the text. Application The concept of cybertext offers a way to expand the reach of literary studies to include phenomena that are perceived today as foreign or marginal. In Aarseth's work, cybertext denotes the general set of text machines which, operated by readers, yield different texts for reading. For example, in Raymond Queneau's book Hundred Thousand Billion Poems, each reader will encounter not just poems arranged in a different order, but different poems depending on the precise way in which they turn the sections of the pages. Cybertext can also be used as a broader alternative for hypertext, particularly as it critiques the critical responses to the latter. Aarseth, together with literary scholars such as N. Katherine Hayles, maintains that cybertext cannot be applied according to the conventional author-text-message paradigms since it is a computational engine. Background The term cybertext is derived from cyber- in the word cybernetics, which was coined by Norbert Wiener in his book Cybernetics, or Control and Communication in the Animal and the Machine (1948), which in turn comes from the Greek word kybernetes – helmsman. The prefix is then merged with the word "text", which is identified as a distinctive structure for producing and consuming verbal meaning in post-structuralist literary theory. Although Aarseth's use of the term has been the most influential, he was not the first to use it. The neologism cybertext appeared several times in the late 1980s and early 1990s. It was the name of a software company in the mid-1980s, and was used by speculative fiction poetry author Bruce Boston as the title of a book he published in 1992, which contained science-fictional poetry. Cybertext is part of what scholars have called generational shifts involving literature on digital media. The first phase was hypertext, which transitioned to hypermedia during the mid-1990s. These developments coincided with the invention of the first graphical browser, Mosaic, and the popularization of the World Wide Web. Cybertext came after hypermedia amid the move toward a focus on software code, particularly its considerable ability to control the reception process without reducing interactivity.
The fundamental idea in the development of the theory of cybernetics is the concept of feedback: a portion of the information produced by the system is taken, totally or partially, as input. Cybernetics is the science that studies control and regulation in systems in which there exists flow and feedback of information. Though first used by science fiction poet Bruce Boston, the term cybertext was brought to the literary world's attention by Espen Aarseth in 1997. Aarseth's concept of cybertext focuses on the organization of the text in order to analyze the influence of the medium as an integral part of the literary dynamic. According to Aarseth, cybertext is not a genre in itself; in order to classify traditions, literary genres and aesthetic value, we should inspect texts at a much more local level. He also maintained that traditional literary theory and interpretation are not main features in cybertext, since it focuses on the textual medium (textonomy) and the study of textual meaning (textology). Examples An example of a cybertext is Twelve Blue by Michael Joyce. It is a web-based text that includes navigation modes characterized by a fluid and multiple sense of the structures of electronic textuality, such as colored threads that play different "bars" and blue-script text that returns to images of rivers and water. Depending on what link you choose or what portion of the diagram on the side you pick, you will be transferred to a different portion of the text. So in the end, you do not really finish reading the entire story or 'novel'; you go through random pages and try piecing the story together yourself. You may never really 'finish' the story. But because it is a cybertext, the 'finishing' of the story is not as important as its impact on the reader, or on the conveyance. Another example is Stir Fry Texts, by Jim Andrews, which is a cybertext where there are many layers of text, and as you move your mouse over the words, the layers beneath them are 'dug' through. The House is another example of a cybertext; one might describe the piece as follows: It is an unruly text, the words don't listen, you are not supreme. You are guided through the piece. This is a cybertext with minimal control. You watch as something unfolds before you, "a crumbling mania"; you must be able to go with the flow, to read texts upside down, to piece together a reflection of words, to be okay with half-read texts disappearing or moving so far away, so continuously, that you cannot make out those very important words. See also Digital rhetoric Electronic literature Gamebook Hypermedia Hypertext Interactive fiction New media Video games as an art form References External links Hypertext Terms Cybernetics Digital humanities Genres of electronic literature Electronic literature
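The feedback concept described here, where part of a system's output returns as its next input, can be made concrete with a toy regulator. A minimal sketch in Python; the thermostat scenario and all values are illustrative assumptions, not anything drawn from Wiener's text.

```python
# Minimal illustration of cybernetic feedback: a portion of the system's
# output (the error between target and current reading) is fed back as
# input that regulates the next step, as in a simple thermostat.
def regulate(target: float, reading: float, gain: float = 0.5, steps: int = 10) -> float:
    for _ in range(steps):
        error = target - reading   # information produced by the system
        reading += gain * error    # fed back to adjust the system's state
    return reading

print(round(regulate(target=20.0, reading=5.0), 2))  # converges toward 20.0
```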
Cybertext
[ "Technology" ]
1,286
[ "Digital humanities", "Computing and society" ]
7,352,871
https://en.wikipedia.org/wiki/Cantlop%20Bridge
Cantlop Bridge is a single-span cast-iron road bridge over the Cound Brook, located to the north of Cantlop in the parish of Berrington, Shropshire. It was constructed in 1818 to a design possibly by Thomas Telford, and at least approved by him, and replaced an unsuccessful cast-iron coach bridge constructed in 1812. The design of the bridge was innovative for the period, using a lightweight design of cast-iron lattice ribs to support the road deck in a single span, and appears to be a scaled-down version of a Thomas Telford bridge at Meole Brace, Shropshire. The bridge is the only surviving Telford-approved cast-iron bridge in Shropshire, and is a Grade II* listed building and scheduled monument. It originally carried the turnpike road from Shrewsbury to Acton Burnell. History and description Thomas Telford worked as the county surveyor of Shropshire between 1787 and 1834, and the bridge is reported to have once held a cast-iron plate above the centre of the arch inscribed with "Thomas Telford Esqr - Engineer - 1818", which is apparently visible in historic photographs, but has not been in place since at least 1985. The bridge design incorporates dressed red and grey sandstone abutments with ashlar dressings; these are slightly curved and ramped, with chamfered ashlar quoins, string courses, and moulded cornices. The structural cast iron consists of a single segmental span with four arched lattice ribs, braced by five transverse cast-iron members. The road deck is formed from cast-iron deck plates, tarmacked over, and now finished with gravel. The original parapets have at some point been replaced with painted cast-iron railings with dograils, dogbars and shaped end balusters. Present-day The bridge today remains as a monument only, being closed to vehicular traffic. It was bypassed by a more modern adjacent concrete bridge built in the 1970s. It is in the care of English Heritage and is freely accessible to pedestrians. A layby exists for visitors to park and there is an information board. See also Grade II* listed buildings in Shropshire Council (A–G) Listed buildings in Berrington, Shropshire Notes References Blackwall, A 1985. 'Historic Bridges of Shropshire', Shrewsbury: Shropshire Libraries Burton, A 1999. 'Thomas Telford', London: Aurum Press Sutherland, R J M 1997. 'Structural Iron, 1750–1850', Aldershot: Ashgate Bridges by Thomas Telford Bridges in Shropshire Structural engineering English Heritage sites in Shropshire Grade II* listed buildings in Shropshire
Cantlop Bridge
[ "Engineering" ]
538
[ "Structural engineering", "Civil engineering", "Construction" ]
7,353,037
https://en.wikipedia.org/wiki/Reading%20stone
A reading stone is an approximately hemispherical lens that can be placed over text to magnify the letters, making it easier for people with presbyopia to read. Reading stones were among the earliest common uses of lenses. The invention of reading stones is often credited to Abbas ibn Firnas in the 9th century, although the regular use of reading stones did not begin until around 1000 AD. Early reading stones were made from rock crystal (quartz), beryl and glass, which could be shaped and polished into lenses used for magnification. The Swedish Visby lenses, dating from the 11th or 12th century, may have been early reading stones. The function of reading stones was replaced by spectacles from the late 13th century onwards, but modern versions are still in use. In their contemporary form, they can be found as rod-shaped magnifiers, flat on one side, that magnify a line of text at a time, or as large dome magnifiers which magnify a circular area of a page. Larger Fresnel lenses can be placed over an entire page. The modern versions are typically made of plastic. See also Dome magnifier References Magnifiers Corrective lenses
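How much magnification such a lens provides can be estimated with the standard simple-magnifier approximation from introductory optics. This is a generic thin-lens sketch, not a measured property of historical reading stones: with the image at infinity and the conventional 25 cm near point, the angular magnification of a lens of focal length f is

```latex
M = \frac{25\ \text{cm}}{f}
```

so a stone with an effective focal length of, say, 10 cm would magnify roughly 2.5 times under this idealization; a real hemisphere resting directly on the page deviates from thin-lens behavior.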
Reading stone
[ "Technology", "Engineering" ]
251
[ "Magnifiers", "Measuring instruments" ]
7,353,645
https://en.wikipedia.org/wiki/Mathematical%20discussion%20of%20rangekeeping
In naval gunnery, when long-range guns became available, an enemy ship would move some distance after the shells were fired. It became necessary to figure out where the enemy ship, the target, was going to be when the shells arrived. The process of keeping track of where the ship was likely to be was called rangekeeping, because the distance to the target—the range—was a very important factor in aiming the guns accurately. As time passed, train (also called bearing), the direction to the target, also became part of rangekeeping, but tradition kept the term alive. Rangekeeping is an excellent example of the application of analog computing to a real-world mathematical modeling problem. Because nations had so much money invested in their capital ships, they were willing to invest enormous amounts of money in the development of rangekeeping hardware to ensure that the guns of these ships could put their projectiles on target. This article presents an overview of the rangekeeping as a mathematical modeling problem. To make this discussion more concrete, the Ford Mk 1 Rangekeeper is used as the focus of this discussion. The Ford Mk 1 Rangekeeper was first deployed on the in 1916 during World War I. This is a relatively well documented rangekeeper that had a long service life. While an early form of mechanical rangekeeper, it does illustrate all the basic principles. The rangekeepers of other nations used similar algorithms for computing gun angles, but often differed dramatically in their operational use. In addition to long range gunnery, the launching of torpedoes also requires a rangekeeping-like function. The US Navy during World War II had the TDC, which was the only World War II-era submarine torpedo fire control system to incorporate a mechanical rangekeeper (other navies depended on manual methods). There were also rangekeeping devices for use with surface ship-launched torpedoes. For a view of rangekeeping outside that of the US Navy, there is a detailed reference that discusses the rangekeeping mathematics associated with torpedo fire control in the Imperial Japanese Navy. The following discussion is patterned after the presentations in World War II US Navy gunnery manuals. Analysis Coordinate system US Navy rangekeepers during World War II used a moving coordinate system based on the line of sight (LOS) between the ship firing its gun (known as the "own ship") and the target (known as the "target"). As is shown in Figure 1, the rangekeeper defines the "y axis" as the LOS and the "x axis" as a perpendicular to the LOS with the origin of the two axes centered on the target. An important aspect of the choice of coordinate system is understanding the signs of the various rates. The rate of bearing change is positive in the clockwise direction. The rate of range is positive for increasing target range. Target tracking General approach During World War II, tracking a target meant knowing continuously the target's range and bearing. These target parameters were sampled periodically by sailors manning gun directors and radar systems, who then fed the data into a rangekeeper. The rangekeeper performed a linear extrapolation of the target range and bearing as a function of time based on the target information samples. In addition to ship-board target observations, rangekeepers could also take input from spotting aircraft or even manned balloons tethered to the own ship. These spotting platforms could be launched and recovered from large warships, like battleships. 
In general, target observations made by shipboard instruments were preferred for targets at ranges of less than 20,000 yards, and aircraft observations were preferred for longer-range targets. After World War II, helicopters became available and the need to conduct the dangerous operations of launching and recovering spotting aircraft or balloons was eliminated (see Iowa-class battleship for a brief discussion). During World War I, target tracking information was often presented on a sheet of paper. During World War II, the tracking information could be displayed on electronic displays (see Essex-class aircraft carrier for a discussion of the common displays). Target range Early in World War II, the range to the target was measured by optical rangefinders. Though some night operations were conducted using searchlights and star shells, in general optical rangefinders were limited to daytime operation. During the latter part of World War II, radar was used to determine the range to the target. Radar proved to be more accurate than the optical rangefinders (at least under operational conditions) and was the preferred way to determine target range during both night and day. Target speed Early in World War II, target range and bearing measurements were taken over a period of time and plotted manually on a chart. The speed and course of the target could be computed using the distance the target traveled over an interval of time. During the latter part of World War II, the speed of the target could be measured using radar data. Radar provided accurate bearing rate, range, and radial speed, which was converted to target course and speed. In some cases, such as with submarines, the target speed could be estimated using sonar data. For example, the sonar operator could measure the propeller turn rate acoustically and, knowing the ship's class, compute the ship's speed (see TDC for more information). Target course The target course was the most difficult piece of target data to obtain. In many cases, instead of measuring target course directly, systems measured a related quantity called angle on the bow. Angle on the bow is the angle made by the ship's course and the line of sight (see Figure 1). The angle on the bow was usually estimated based on the observational experience of the observer. In some cases, the observers improved their estimation abilities by practicing against ship models mounted on a "lazy Susan". The Imperial Japanese Navy had a unique tool, called Sokutekiban (測的盤), that was used to assist observers with measuring angle on the bow. The observer would first use this device to measure the angular width of the target. Knowing the angular width of the target, the range to the target, and the known length of that ship class, the angle on the bow of the target can be computed using the equations shown in Figure 2. Human observers were required to determine the angle on the bow. To confuse the human observers, ships often used dazzle camouflage, which consisted of painting lines on a ship in an effort to make determining a target's angle on the bow difficult. While dazzle camouflage was useful against some types of optical rangefinders, this approach was useless against radar, and it fell out of favor during World War II. Position prediction The prediction of the target ship's position at the time of projectile impact is critical because that is the position at which the own ship's guns must be directed.
During World War II, most rangekeepers performed position prediction using a linear extrapolation of the target's course and speed. While ships are maneuverable, large ships maneuver slowly, and linear extrapolation is a reasonable approach in many cases. During World War I, rangekeepers were often referred to as "clocks" (e.g. see range and bearing clocks in the Dreyer Fire Control Table). These devices were called clocks because they regularly incremented the target range and angle estimates using fixed values. This approach was of limited use because the target bearing changes are a function of range, and using a fixed change causes the target bearing prediction to quickly become inaccurate.

Range
The target range at the time of projectile impact can be estimated using Equation 1, which is illustrated in Figure 3.

$R_P = R_0 + \dot{R}\,t_f$    (Equation 1)

where
$R_P$ is the range to the target at the time of projectile impact.
$R_0$ is the range to the target at the time of gun firing.
$t_f$ is the projectile time of flight plus system firing delays, i.e. $t_f = t_{TOF} + t_{delay}$.

The exact prediction of the target range at the time of projectile impact is difficult because it requires knowing the projectile time of flight, which is a function of the projected target position. While this calculation can be performed using a trial-and-error approach, this was not a practical approach with the analog computer hardware available during World War II. In the case of the Ford Rangekeeper Mk 1, the time of flight was approximated by assuming the time of flight was linearly proportional to range, as is shown in Equation 2.

$t_{TOF} = k\,R_0$    (Equation 2)

where $k$ is the constant of proportionality between time of flight (TOF) and target range.

The assumption of TOF being linearly proportional to range is a crude one and could be improved through the use of more sophisticated means of function evaluation.

Range prediction requires knowing the rate of range change. As is shown in Figure 3, the rate of range change can be expressed as shown in Equation 3.

$\dot{R} = v_{Ty} - v_{Oy}$    (Equation 3)

where
$v_{Oy}$ is the own ship's speed component along the LOS (positive toward the target).
$v_{Ty}$ is the target ship's speed component along the LOS (positive away from the own ship).

Equation 4 shows the complete equation for the predicted range.

$R_P = R_0 + (v_{Ty} - v_{Oy})(k\,R_0 + t_{delay})$    (Equation 4)

Azimuth
The prediction of azimuth is performed similarly to the range prediction. Equation 5 is the fundamental relationship, whose derivation is illustrated in Figure 4.

$\theta_P = \theta_0 + \dot{\theta}\,t_f$    (Equation 5)

where
$\theta_0$ is the azimuth to the target at the time of gun firing.
$\theta_P$ is the azimuth to the target at the time of projectile impact.

The rate of bearing change can be computed using Equation 6, which is illustrated in Figure 4.

$\dot{\theta} = \dfrac{v_{Tx} - v_{Ox}}{R_0}$    (Equation 6)

where
$v_{Ox}$ is the own ship's speed component along the x axis.
$v_{Tx}$ is the target's speed component along the x axis.

Substituting $t_f = k\,R_0 + t_{delay}$, Equation 7 shows the final formula for the predicted bearing.

$\theta_P = \theta_0 + \dfrac{(v_{Tx} - v_{Ox})(k\,R_0 + t_{delay})}{R_0}$    (Equation 7)

Ballistic correction
Firing artillery at targets beyond visual range historically has required computations based on firing tables. The impact point of a projectile is a function of many variables:
Air temperature
Air density
Wind
Range
Earth rotation
Projectile, fuze, and weapon characteristics
Muzzle velocity
Propellant temperature
Drift
Parallax between the guns and the rangefinders and radar systems
Elevation difference between target and artillery piece

The firing tables provide data for an artillery piece firing under standardized conditions and the corrections required to determine the point of impact under actual conditions. There were a number of ways to implement a firing table using cams. Consider Figure 5 for example.
In this case the gun angle as a function of the target's range and relative elevation is represented by the thickness of the cam at a given axial distance and angle. A gun direction officer would input the target range and relative elevation using dials. The pin height then represents the required gun angle. This pin height could be used to drive cams or gears that would make other corrections, such as for propellant temperature and projectile type.

The cams used in a rangekeeper needed to be very precisely machined in order to accurately direct the guns. Because these cams were machined to specifications composed of data tables, they became an early application of CNC machine tools.

In addition to the target and ballistic corrections, the rangekeeper must also correct for the ship's rolling and pitching motion. Warships had a gyroscope with its spin axis vertical. This gyro determined two angles that defined the tilt of the ship's deck with respect to the vertical, and those two angles were fed to the rangekeeper, which applied a corresponding correction.

While the rangekeeper designers spent an enormous amount of time working to minimize the sources of error in the rangekeeper calculations, there were errors and information uncertainties that contributed to projectiles missing their targets on the first shot. The rangekeeper had dials that allowed manual corrections to be incorporated into the firing solution. When artillery spotters called in a correction, the rangekeeper operators would manually incorporate it using these dials.

Notes

External links
USN Report on IJN Torpedo Technology: This report shows that the Imperial Japanese Navy used a similar approach to the US Navy for the rangekeeping function.
British Fire Control: British gunnery manual that discusses their approach to long-range gun direction.
Firing Tables: PowerPoint presentation on firing tables

Ballistics Artillery operation Naval artillery
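As a sketch of how Equations 1 through 7 combine, the following Python fragment computes a predicted range and bearing by linear extrapolation. The function and variable names are inventions of this example, and the sign conventions follow the component definitions given above; it illustrates the mathematics, not the Ford Mk 1's mechanical implementation.

```python
import math

def predict_fire_control_solution(range_0, bearing_0, own_speed, own_angle,
                                  tgt_speed, tgt_angle, k_tof, t_delay):
    """Linear extrapolation of target range and bearing (Equations 1-7).

    range_0   -- target range at the time of firing
    bearing_0 -- target bearing at the time of firing (radians)
    own_angle, tgt_angle -- each ship's course angle measured from the LOS (radians)
    k_tof     -- assumed constant of proportionality between time of flight and range
    t_delay   -- system firing delay (seconds)
    """
    # Speed components along the LOS (y axis) and across it (x axis).
    v_oy = own_speed * math.cos(own_angle)
    v_ox = own_speed * math.sin(own_angle)
    v_ty = tgt_speed * math.cos(tgt_angle)
    v_tx = tgt_speed * math.sin(tgt_angle)

    # Equation 3: range rate (positive for opening range).
    range_rate = v_ty - v_oy
    # Equation 2: time of flight approximated as linear in range, plus delays.
    t_flight = k_tof * range_0 + t_delay
    # Equations 1 and 4: predicted range at projectile impact.
    range_pred = range_0 + range_rate * t_flight
    # Equation 6: bearing rate is the cross-LOS relative speed divided by range.
    bearing_rate = (v_tx - v_ox) / range_0
    # Equations 5 and 7: predicted bearing at projectile impact.
    bearing_pred = bearing_0 + bearing_rate * t_flight
    return range_pred, bearing_pred
```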
Mathematical discussion of rangekeeping
[ "Physics" ]
2,395
[ "Applied and interdisciplinary physics", "Ballistics" ]
7,354,687
https://en.wikipedia.org/wiki/Winkel%20tripel%20projection
The Winkel tripel projection (Winkel III), a modified azimuthal map projection of the world, is one of three projections proposed by German cartographer Oswald Winkel (7 January 1874 – 18 July 1953) in 1921. The projection is the arithmetic mean of the equirectangular projection and the Aitoff projection. The name (German for 'triple') refers to Winkel's goal of minimizing three kinds of distortion: area, direction, and distance.

Algorithm
$x = \frac{1}{2}\left[\lambda \cos \varphi_1 + \frac{2 \cos \varphi \sin \frac{\lambda}{2}}{\operatorname{sinc} \alpha}\right]$
$y = \frac{1}{2}\left[\varphi + \frac{\sin \varphi}{\operatorname{sinc} \alpha}\right]$

where λ is the longitude relative to the central meridian of the projection, φ is the latitude, φ1 is the standard parallel for the equirectangular projection, sinc is the unnormalized cardinal sine function, and

$\alpha = \arccos\left(\cos \varphi \cos \frac{\lambda}{2}\right)$

In his proposal, Winkel set

$\varphi_1 = \arccos \frac{2}{\pi}$

A closed-form inverse mapping does not exist, and computing the inverse numerically requires the use of iterative methods.

Comparison with other projections
David M. Goldberg and J. Richard Gott III showed that the Winkel tripel fares better than several other projections they analyzed against their measures of distortion, producing minimal distance, Tissot indicatrix ellipticity, and area errors, and the least skew of any of the projections they studied. By a different metric, Capek's "Q", the Winkel tripel ranked ninth among a hundred map projections of the world, behind the common Eckert IV and Robinson projections.

In 1998, the Winkel tripel projection replaced the Robinson projection as the standard projection for world maps made by the National Geographic Society. Many educational institutions and textbooks soon followed National Geographic's example in adopting the projection, and most still use it.

See also
List of map projections

References

External links
Table of common projections

Map projections
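A minimal sketch of the forward projection on a unit sphere follows directly from the formulas above; the function name and the printed example point are inventions of this sketch.

```python
import math

def winkel_tripel(lon, lat, lat_1=math.acos(2 / math.pi)):
    """Forward Winkel tripel projection for a unit sphere.

    lon, lat -- longitude (relative to the central meridian) and latitude, radians
    lat_1    -- standard parallel; defaults to Winkel's own choice, acos(2/pi)
    Returns (x, y) map coordinates.
    """
    alpha = math.acos(math.cos(lat) * math.cos(lon / 2))
    # Unnormalized cardinal sine, with sinc(0) = 1 to avoid division by zero.
    sinc_a = math.sin(alpha) / alpha if alpha != 0 else 1.0
    # Arithmetic mean of the equirectangular and Aitoff projections.
    x = 0.5 * (lon * math.cos(lat_1) + 2 * math.cos(lat) * math.sin(lon / 2) / sinc_a)
    y = 0.5 * (lat + math.sin(lat) / sinc_a)
    return x, y

# Example: a point at 45 degrees north, 90 degrees east of the central meridian.
print(winkel_tripel(math.radians(90), math.radians(45)))
```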
Winkel tripel projection
[ "Mathematics" ]
356
[ "Map projections", "Coordinate systems" ]
7,354,718
https://en.wikipedia.org/wiki/Guy%20Terjanian
Guy Terjanian is a French mathematician who has worked on algebraic number theory. He achieved his Ph.D. under Claude Chevalley in 1970, and at that time published a counterexample to the original form of a conjecture of Emil Artin, which, suitably modified, had just been proved as the Ax–Kochen theorem. In 1977, he proved that if p is an odd prime number and the natural numbers x, y and z satisfy $x^{2p} + y^{2p} = z^{2p}$, then 2p must divide x or y.

See also
Ax–Kochen theorem

References

Further reading
math.unicaen.fr article Topic: Arithmetic & geometry

French people of Armenian descent 20th-century French mathematicians Algebraists French number theorists Living people Year of birth missing (living people)
Guy Terjanian
[ "Mathematics" ]
153
[ "Algebra", "Algebraists" ]
7,354,807
https://en.wikipedia.org/wiki/Passenger%20car%20equivalent
Passenger car equivalent (PCE) or passenger car unit (PCU) is a metric used in transportation engineering to assess traffic-flow rate on a highway. A passenger car equivalent is essentially the impact that a mode of transport has on traffic variables (such as headway, speed, density) compared to a single car. Traffic studies and/or analysis must be done to obtain the number of trips, which are then converted to PCUs based on the applicable standards. Each region has its own manual of PCU equivalence factors. Highway capacity is typically measured in PCE per hour.

A common method used in the US is the density method. However, the PCU values derived from the density method are based on underlying homogeneous traffic concepts such as strict lane discipline, car following, and a vehicle fleet that does not vary greatly in width. On the other hand, highways in India carry heterogeneous traffic, where road space is shared among many traffic modes with different physical dimensions. Loose lane discipline prevails; car following is not the norm. This complicates the computation of PCE. Using multiple heuristic techniques, transportation engineers convert a mixed traffic stream into a hypothetical passenger-car stream.

Methods
Many methods exist for determining passenger car units (PCUs). Examples: the homogenization coefficient, the semi-empirical method, Walker's method, the headway method, the multiple linear regression method, and the simulation method.

Transport for London recommend the following PCU values in an urban context (a worked conversion using these factors is sketched below):
Pedal cycle 0.2
Motorcycle 0.4
Car or light goods vehicle 1.0
Medium goods vehicle 1.5
Bus or coach 2.0
Heavy goods vehicle (HGV) 2.3

It may be appropriate to use different values for the same vehicle type according to circumstances. For example, in the UK in the 1960s and 1970s, bicycles were evaluated thus:
on rural roads 0.5
on urban roads 0.33
on roundabouts 0.5
at traffic lights 0.2

References

Transportation engineering Equivalent units
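Converting a classified hourly count into PCU per hour is a weighted sum over vehicle classes. The sketch below uses the Transport for London factors listed above; the function and class names are inventions of this example.

```python
# Convert a classified hourly traffic count into passenger car units (PCU)
# using the Transport for London urban factors quoted above.
TFL_PCU = {
    "pedal_cycle": 0.2,
    "motorcycle": 0.4,
    "car_or_lgv": 1.0,
    "medium_goods": 1.5,
    "bus_or_coach": 2.0,
    "hgv": 2.3,
}

def pcu_per_hour(hourly_counts, factors=TFL_PCU):
    """hourly_counts maps vehicle class -> vehicles observed per hour."""
    return sum(count * factors[mode] for mode, count in hourly_counts.items())

# Example: a mixed urban traffic stream.
counts = {"car_or_lgv": 820, "bus_or_coach": 25, "hgv": 40,
          "pedal_cycle": 110, "motorcycle": 30}
print(pcu_per_hour(counts))  # 820 + 50 + 92 + 22 + 12 = 996 PCU/h
```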
Passenger car equivalent
[ "Mathematics", "Engineering" ]
408
[ "Equivalent quantities", "Quantity", "Industrial engineering", "Equivalent units", "Civil engineering", "Transportation engineering", "Units of measurement" ]
7,355,118
https://en.wikipedia.org/wiki/Brontok
Brontok is a computer worm running on Microsoft Windows. It is able to spread by e-mail. Variants include:
Brontok.A
Brontok.D
Brontok.F
Brontok.G
Brontok.H
Brontok.I
Brontok.K
Brontok.Q
Brontok.U
Brontok.BH

The most affected countries were Russia, Vietnam and Brazil, followed by Spain, Mexico, Iran, Azerbaijan, India and the Philippines.

Other names
Other names for this worm include: W32/Rontokbro.gen@MM, W32.Rontokbro@mm, BackDoor.Generic.1138, W32/Korbo-B, Worm/Brontok.a, Win32.Brontok.A@mm, Worm.Mytob.GH, W32/Brontok.C.worm, Win32/Brontok.E, Win32/Brontok.X@mm, and W32.Rontokbro.D@mm.

Origin
Brontok originated in Indonesia. It was first discovered in 2005. The name refers to the elang brontok, a bird species native to South and Southeast Asia. It arrives as an e-mail attachment named kangen.exe (kangen itself means "to miss someone or something"). The virus/email itself contains a message in Indonesian (and some English). When translated, this reads: [By: HVM31 JowoBot #VM Community] -- stop the collapse in this country—1. Try the Hoodlums, the Smugglers, the Bribers, the gamblers, & drugs Port (Send to "Nusakambangan") -- 2.Stop Free Sex, Abortion, & Prostitution (Go To HELL) 3.Stop (sea and river pollution), forest burning, & wild hunting. 4.SAY NO TO DRUGS!!! - THE END IS NEAR - 5. Do you think you're smart? Inspired by: (Spizaetus Cirrhatus) that is almost extinct [By: HVM31 JowoBot #VM Community] -- It also contains a JavaScript pop-up.

The worm also carried out a ping flood attack on two websites, Israel.gov.il and playboy.com, possibly in an act of hacktivism. A number of other websites with the .com TLD were also attacked, prompting the popular Indonesian forum Kaskus to switch to the .us TLD until May 2012. Brontok inspired the creation of more persistent trojans/worms such as the Daprosy worm, which attacked internet cafés in July 2009.

Symptoms
When Brontok is first run, it copies itself to the user's application data directory. It then sets itself to start up with Windows by creating a registry entry in the HKLM\Software\Microsoft\Windows\CurrentVersion\Run registry key. It disables the Windows Registry Editor (regedit.exe) and modifies Windows Explorer settings. It removes the "Folder Options" entry in the Tools menu so that the hidden files, where it is concealed, are not easily accessible to the user. It also turns off the Windows firewall. In some variants, when a window is found containing certain strings (such as "application data") in the window title, the computer reboots. User frustration also occurs when an address typed into Windows Explorer is blanked out before completion. Using its own mailing engine, it sends itself to email addresses it finds on the computer, even faking the user's own email address as the sender. The computer also restarts when the user tries to open the Windows Command Prompt, and the worm prevents the user from downloading files. It also pops up the default Web browser and loads an HTML web page located in the "My Pictures" (or, on Windows Vista, "Pictures") folder. It creates .exe files in folders, usually named after the folder itself (..\documents\documents.exe); this also includes all mapped network drives.

Removal
Brontok can be removed by most antivirus software, and various standalone removal tools are available from antivirus providers.
References Email worms Hacking in the 2000s Cybercrime in India Windows malware Denial-of-service attacks Internet in Russia Internet in Brazil Internet in Vietnam Internet in Spain Internet in Azerbaijan Internet in Mexico Internet in Iran Cybercrime in the Philippines Attacks in Azerbaijan Attacks in Brazil Attacks in India Attacks in Iran Attacks in Mexico Attacks in the Philippines Attacks in Russia Attacks in Vietnam Internet in Israel Attacks in Israel Playboy
Brontok
[ "Technology" ]
957
[ "Denial-of-service attacks", "Computer security exploits" ]
7,355,338
https://en.wikipedia.org/wiki/Direct%20agglutination%20test
A direct agglutination test (DAT) is any test that uses whole organisms as a means of looking for serum antibodies. The abbreviation, DAT, is most frequently used for the serological test for visceral leishmaniasis. References Blood tests
Direct agglutination test
[ "Chemistry" ]
54
[ "Blood tests", "Chemical pathology" ]
7,355,393
https://en.wikipedia.org/wiki/Opisthodomos
An opisthodomos (ὀπισθόδομος, 'back room') can refer to either the rear room of an ancient Greek temple or to the inner shrine, also called the adyton ('not to be entered'). The confusion arises from the lack of agreement in ancient inscriptions. In modern scholarship, it usually refers to the rear porch of a temple. On the Athenian Acropolis especially, the opisthodomos came to be a treasury, where the revenues and precious dedications of the temple were kept. Its use in antiquity was not standardised. In part because of the ritual secrecy of such inner spaces, it is not known exactly what took place within opisthodomoi; it can safely be assumed that practice varied widely by place, date and particular temple. Architecturally, the opisthodomos (as a back room) balances the pronaos or porch of a temple, creating a plan with diaxial symmetry. The upper portion of its outer wall could be decorated with a frieze, as on the Hephaisteion and the Parthenon. Opisthodomoi are present in the layout of: Temples ER, A and O at Selinus Temple of Aphaea at Aegina Temple of Zeus at Olympia Hephaisteion in the Agora of Athens Parthenon on the Acropolis in Athens Temple of Concordia, Agrigento Temple of Poseidon on Cape Sounion Temple of Apollo Epikourios at Bassae Temple of Athena Lindia at Lindos Temple of Dionysus at Teos References Architectural elements Ancient Greek architecture Rooms
Opisthodomos
[ "Technology", "Engineering" ]
339
[ "Building engineering", "Rooms", "Architectural elements", "Components", "Architecture" ]
7,356,525
https://en.wikipedia.org/wiki/REDCON
In the U.S. military, the term REDCON is short for Readiness Condition and is used to refer to a unit's readiness to respond to and engage in combat operations. There are five REDCON levels, as described below in this excerpt from Army Field Manual 71–1.

Overview
REDCON-1: Full alert; unit ready to move and fight. WMD alarms and hot loop equipment stowed; OPs pulled in. (A hot loop is a field telephone circuit between the subunits of a company.) All personnel alert and mounted on vehicles; weapons manned. Engines started. Company team is ready to move immediately.

REDCON-1.5: WMD alarms and hot loop equipment stowed; OPs pulled in. All personnel alert and mounted on vehicles; weapons manned. Company team is ready to move immediately.

REDCON-2: Full alert; unit ready to fight. Equipment stowed (except hot loop and WMD alarms). Precombat checks complete. All personnel alert and mounted in vehicles; weapons manned and charged, round in chamber, weapon on safe. (NOTE: Depending on the tactical situation and orders from the commander, dismounted OPs may remain in place.) All (100 percent) digital and FM communications links operational. Status reports submitted in accordance with task force SOP. Company team is ready to move within 15 minutes of notification.

REDCON-3: Reduced alert. Fifty percent of the unit executes work and rest plans. Remainder of the unit executes the security plan. Based on the commander's guidance and the enemy situation, some personnel executing the security plan may execute portions of the work plan. Company team is ready to move within 30 minutes of notification.

REDCON-4: Minimum alert. OPs manned; one soldier per platoon designated to monitor the radio and man turret weapons. Digital and FM links with task force and other company teams maintained. Company team is ready to move within one hour of notification.

See also
Alert state
DEFCON
Force Protection Condition
Redcon (2016 game)

References

External links
REDCON levels from Army Field Manual 71-1 on GlobalSecurity.org

Alert measurement systems Military life Military terminology of the United States
REDCON
[ "Technology" ]
441
[ "Warning systems", "Alert measurement systems" ]
7,356,531
https://en.wikipedia.org/wiki/PAL-M
PAL-M is the analogue colour TV system used in Brazil since early 1972, making Brazil the first South American country to broadcast in colour. It is unique among analogue TV systems in that it combines the 525-line, 30 frames-per-second System M with the PAL colour encoding system (using very nearly the NTSC colour subcarrier frequency), unlike all other countries, which pair PAL with 625-line systems and NTSC with 525-line systems.

Colour broadcasts began on 19 February 1972, when a TV station in Caxias do Sul, TV Difusora, transmitted the Caxias do Sul Grape Festival in collaboration with TV Rio. The transition from black and white to colour on most programmes was not complete until 1978, and colour only became commonplace nationwide by 1980.

Origins
NTSC being the "natural" choice for countries with monochrome standard M, the choice of a different colour system posed problems of incompatibility with available hardware and the need to develop new television sets and production hardware. Walter Bruch, the inventor of PAL, attributed Brazil's choice of PAL over NTSC, against these odds, to an advertising campaign that Telefunken and Philips carried out across South America in 1972, which included colour test broadcasts of popular shows (done with TV Globo) and technical demonstrations for executives of television stations.

Technical specifications
PAL-M signals are in general identical to North American NTSC signals, except for the encoding of the colour carrier. Both systems are based on the monochrome CCIR System M standard; therefore, PAL-M will display in monochrome with sound on an NTSC set and vice versa. Nevertheless, due to the different gamma correction values (2.2 for NTSC, 2.8 for PAL-M), gray tones will be incorrect (a numerical illustration of this error appears at the end of this article). PAL-M is incompatible with 625-line based versions of PAL, because its frame rate, scan line, colour subcarrier and sound carrier specifications are different. It will therefore usually give a rolling and/or squashed monochrome picture with no sound on a native European PAL television, as do NTSC signals.

PAL-M details:
Transmission band: VHF/UHF
Fields: 60
Scan lines: 525
Active lines: 480
Channel bandwidth: 6 MHz
Video bandwidth: 4.2 MHz
Vision/sound carrier spacing: 4.5 MHz
Colour subcarrier: 3.575611 MHz
Assumed receiver gamma: 2.8
Color model: YUV

PAL-M colorimetry:
Colorimetry is similar to the original 1953 color NTSC specification:
Standard: BT.470-6
White point: C
Color primaries:
Red: x 0.67; y 0.33
Green: x 0.21; y 0.71
Blue: x 0.14; y 0.08

PAL-M systems conversion issues
PAL-M being a standard unique to one country, the need to convert it to or from other standards often arises. Conversion to or from NTSC is easy, as only the colour carrier needs to be changed; the frame rate and scan lines can remain untouched. Conversion to or from PAL (625 lines/25 frame/s) and SECAM (625/25) signals involves changing the frame rate as well as the scan lines. This is achieved using complicated circuitry involving a digital frame store, the same method used for converting between NTSC and the 625/25 standards. The fact that the colour encoding of PAL-M and PAL 625/25 is the same does not help, as the entire signal goes through an A/D-D/A conversion process anyway. However, some special VHS video recorders are available which allow viewers the flexibility of enjoying PAL-M recordings using a standard PAL (625/50 Hz) colour TV, or even through multi-system TV sets.
Video recorders like the Panasonic NV-W1E (AG-W1 for the USA), AG-W2, AG-W3, NV-J700AM, Aiwa HV-MX100, HV-MX1U, Samsung SV-4000W and SV-7000W feature digital TV system conversion circuitry. Some recorders support the other direction, being able to play back standard PAL (625/50 Hz) on 50 Hz-compatible PAL-M TV sets, such as the Panasonic NV-FJ605.

PAL 60
The PAL colour system (either baseband or with any RF system, with the normal 4.43 MHz subcarrier, unlike PAL-M) can also be applied to an NTSC-like 525-line picture to form what is often known as "PAL-60" (sometimes "PAL-60/525", "Pseudo-PAL" or "Quasi-PAL"). This non-standard signal is a method used in European domestic VCRs and DVD players for playback of NTSC material on PAL televisions. It is not identical to PAL-M and is incompatible with it, because the colour subcarrier is at a different frequency; it will therefore display in monochrome on PAL-M and NTSC television sets.

Technological obsolescence

SBTVD and ABERT/SET tests
The analog PAL-M system was scheduled to be supplanted by a digital high-definition system named Sistema Brasileiro de Televisão Digital (SBTVD), with the transition beginning by 2015 and finishing in 2018. From 1999 to 2000, the ABERT/SET group in Brazil carried out system comparison tests of ATSC, DVB-T and ISDB-T under the supervision of the CPqD foundation. Originally, Brazil, along with Argentina, Paraguay and Uruguay, planned to adopt the DVB-T standard. However, the ABERT/SET group selected ISDB-T after field-test results showed that it was the most robust system under Brazilian reception conditions. Therefore, SBTVD was replaced by the Brazilian variant of the ISDB standard, ISDB-Tb, which incorporates SBTVD's characteristics into the originally Japanese digital norm.

See also
Broadcast television systems

References

Television in Brazil Television technology Television transmission standards Video formats
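The grey-tone error mentioned under Technical specifications can be illustrated numerically. This is a simplified sketch: it assumes an ideal camera and display and a single mid-grey value, and ignores all other processing.

```python
# Grey-tone error when NTSC-encoded video (gamma 2.2) is shown on a PAL-M
# receiver that assumes gamma 2.8, as described above.
ntsc_gamma, palm_gamma = 2.2, 2.8

linear_in = 0.5                         # an example mid-grey scene luminance
signal = linear_in ** (1 / ntsc_gamma)  # camera encodes for a 2.2 display
displayed = signal ** palm_gamma        # a 2.8 receiver decodes the same signal

print(f"encoded for 2.2, displayed at 2.8: {displayed:.3f}")
# ~0.414 instead of 0.5 -- mid-greys come out noticeably darker.
```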
PAL-M
[ "Technology" ]
1,241
[ "Information and communications technology", "Television technology" ]
7,357,267
https://en.wikipedia.org/wiki/ASTM%20A500
ASTM A500 is a standard specification published by ASTM for cold-formed welded and seamless carbon steel structural tubing in round, square, and rectangular shapes. It is commonly specified in the US for hollow structural sections, but the more stringent CSA G40.21 is preferred in Canada. Another related standard is ASTM A501, which is a hot-formed version of A500. ASTM A500 defines four grades of carbon steel based primarily on material strength. This is a standard set by ASTM International, a voluntary standards development organization that sets technical standards for materials, products, systems, and services.

Density
Like other carbon steels, A500 and A501 steels have a specific gravity of approximately 7.85, and therefore a density of approximately 7850 kg/m3 (0.284 pounds per cubic inch). A weight calculation based on this figure is sketched below.

Grades
A500 cold-formed tubing comes in four grades based on chemical composition, tensile strength, and heat treatment. The yield strength requirements are higher for square and rectangular than for round tubing. The minimum copper content is optional. Grade D must be heat treated.

Mechanical properties
Shaped structural tubing

References

Steels ASTM standards Structural engineering standards
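As an illustration of the density figure above, the mass per metre of a square tube can be estimated from its cross-section. The dimensions below are arbitrary examples, and the sharp-corner simplification slightly overstates the area; this is a sketch, not a substitute for tabulated section properties.

```python
# Estimate the mass per metre of a square structural tube from the density
# given above (~7850 kg/m^3). Corner radii are ignored for simplicity,
# which slightly overstates the cross-sectional area.
STEEL_DENSITY = 7850.0  # kg/m^3

def square_hss_mass_per_metre(outer_side_m, wall_m, density=STEEL_DENSITY):
    area = outer_side_m**2 - (outer_side_m - 2 * wall_m)**2  # m^2 of steel
    return area * density  # kg per metre of tube

# Example: a 100 mm x 100 mm tube with a 6 mm wall.
print(f"{square_hss_mass_per_metre(0.100, 0.006):.1f} kg/m")  # ~17.7 kg/m
```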
ASTM A500
[ "Engineering" ]
250
[ "Steels", "Structural engineering", "Alloys", "Structural engineering standards" ]
7,357,562
https://en.wikipedia.org/wiki/Software%20engine
A software engine is a core component of a complex software system. The word "engine" is a metaphor drawn from the car engine: like a car's engine, a software engine is a complex subsystem that drives the larger system. There is no formal guideline for what should be called an engine, but the term has become widespread in the software industry.

Notable examples

Multi-engine systems
Mainstream web browsers have both a rendering engine and a JavaScript engine. Video games are often based on a game engine. Some of these also have specialized physics or graphics engines.

References

Software engineering
Software engine
[ "Technology", "Engineering" ]
112
[ "Software engineering", "Systems engineering", "Information technology", "Computer engineering" ]
7,357,563
https://en.wikipedia.org/wiki/Calponin
Calponin is a calcium-binding protein. Calponin tonically inhibits the ATPase activity of myosin in smooth muscle. Phosphorylation of calponin by a protein kinase, which is dependent upon calcium binding to calmodulin, releases calponin's inhibition of the smooth muscle ATPase.

Structure and function
Calponin is mainly made up of α-helices with hydrogen-bond turns. It is a binding protein and is made up of three domains. These domains, in order of appearance, are the calponin homology (CH) domain, the regulatory domain (RD), and Click-23, a domain that contains the calponin repeats. At the CH domain calponin binds to α-actin and filamin, and it binds to actin within the RD domain. Calmodulin, when activated by calcium, may bind weakly to the CH domain and inhibit calponin binding to α-actin. Calponin binds many actin-binding proteins and phospholipids, and regulates the actin–myosin interaction. Calponin is also thought to negatively affect bone formation, as it is expressed at high levels in osteoblasts.

References

External links

Proteins
Calponin
[ "Chemistry" ]
255
[ "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Molecular biology", "Proteins" ]
7,357,636
https://en.wikipedia.org/wiki/Caldesmon
Caldesmon is a protein that in humans is encoded by the CALD1 gene.

Caldesmon is a calmodulin-binding protein. Like calponin, caldesmon tonically inhibits the ATPase activity of myosin in smooth muscle. This gene encodes a calmodulin- and actin-binding protein that plays an essential role in the regulation of smooth muscle and nonmuscle contraction. The conserved domain of this protein possesses binding activities to Ca2+–calmodulin, actin, tropomyosin, myosin, and phospholipids. This protein is a potent inhibitor of the actin–tropomyosin-activated myosin MgATPase, and serves as a mediating factor for Ca2+-dependent inhibition of smooth muscle contraction. Alternative splicing of this gene results in multiple transcript variants encoding distinct isoforms.

Immunochemistry
In diagnostic immunochemistry, caldesmon is a marker for smooth muscle differentiation.

References

Further reading

External links

Proteins
Caldesmon
[ "Chemistry" ]
212
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
7,358,011
https://en.wikipedia.org/wiki/Bismuth%20ferrite
Bismuth ferrite (BiFeO3, also commonly referred to as BFO in materials science) is an inorganic chemical compound with a perovskite structure and one of the most promising multiferroic materials. The room-temperature phase of BiFeO3 is classed as rhombohedral, belonging to the space group R3c. It is synthesized in bulk and thin-film form, and both its antiferromagnetic (G-type ordering) Néel temperature (approximately 653 K) and its ferroelectric Curie temperature (approximately 1100 K) are well above room temperature. Ferroelectric polarization occurs along the pseudocubic ⟨111⟩ direction with a magnitude of 90–95 μC/cm2.

Sample preparation
Bismuth ferrite is not a naturally occurring mineral, and several synthesis routes to obtain the compound have been developed.

Solid state synthesis
In the solid state reaction method, bismuth oxide (Bi2O3) and iron oxide (Fe2O3) in a 1:1 mole ratio are mixed with a mortar or by ball milling and then fired at elevated temperatures (a worked reagent-mass example appears at the end of this article). Preparation of pure stoichiometric BiFeO3 is challenging due to the volatility of bismuth during firing, which leads to the formation of the stable secondary Bi25FeO39 (sillenite) and Bi2Fe4O9 (mullite) phases. Typically a firing temperature of 800 to 880 °C is used for 5 to 60 minutes with rapid subsequent cooling. Excess Bi2O3 has also been used as a measure to compensate for bismuth volatility and to avoid formation of the Bi2Fe4O9 phase.

Single crystal growth
Bismuth ferrite melts incongruently, but it can be grown from a bismuth oxide rich flux (e.g. a 4:1:1 mixture of Bi2O3, Fe2O3 and B2O3 at approximately 750–800 °C). High-quality single crystals have been important for studying the ferroelectric, antiferromagnetic and magnetoelectric properties of bismuth ferrite.

Chemical routes
Wet chemical synthesis routes based on sol-gel chemistry, modified Pechini routes, hydrothermal synthesis and precipitation have been used to prepare phase-pure BiFeO3. The advantage of the chemical routes is the compositional homogeneity of the precursors and the reduced loss of bismuth due to the much lower temperatures needed. In sol-gel routes, an amorphous precursor is calcined at 300–600 °C to remove organic residuals and to promote crystallization of the bismuth ferrite perovskite phase; the disadvantage is that the resulting powder must be sintered at high temperature to make a dense polycrystal.

Solution combustion reaction is a low-cost method used to synthesize porous BiFeO3. In this method, a reducing agent (such as glycine, citric acid, urea, etc.) and an oxidizing agent (nitrate ions, nitric acid, etc.) are used to generate the reduction-oxidation (redox) reaction. The appearance of the flame, and consequently the temperature of the mixture, depends on the oxidizing/reducing agent ratio used. Annealing up to 600 °C is sometimes needed to decompose the bismuth oxo-nitrates generated as intermediates. Since the material contains Fe cations, Mössbauer spectroscopy is a suitable technique to detect the presence of a paramagnetic component in the phase.

Thin films
The electric and magnetic properties of high-quality epitaxial thin films of bismuth ferrite, reported in 2003, revived scientific interest in bismuth ferrite. Epitaxial thin films have the great advantage that their properties can be tuned by processing or chemical doping, and that they can be integrated into electronic circuitry.
Epitaxial strain induced by single crystalline substrates with different lattice parameters than bismuth ferrite can be used to modify the crystal structure to monoclinic or tetragonal symmetry and change the ferroelectric, piezoelectric or magnetic properties. Pulsed laser deposition (PLD) is a very common route to epitaxial BiFeO3 films, and SrTiO3 substrates with SrRuO3 electrodes are typically used. Sputtering, molecular-beam epitaxy (MBE), metal organic chemical vapor deposition (MOCVD), atomic layer deposition (ALD), and chemical solution deposition are other methods to prepare epitaxial bismuth ferrite thin films. Apart from its magnetic and electric properties, bismuth ferrite also possesses photovoltaic properties, known as the ferroelectric photovoltaic (FPV) effect.

Applications
Being a room-temperature multiferroic material, and due to its ferroelectric photovoltaic (FPV) effect, bismuth ferrite has several prospective applications in the fields of magnetism, spintronics, and photovoltaics.

Photovoltaics
In the FPV effect, a photocurrent is generated in a ferroelectric material under illumination, and its direction is dependent upon the ferroelectric polarization of that material. The FPV effect has promising potential as an alternative to conventional photovoltaic devices, but the main hindrance is that only a very small photocurrent is generated in ferroelectric materials like LiNbO3, due to their large bandgap and low conductivity. In this respect, bismuth ferrite has shown great potential, since a large photocurrent effect and an above-bandgap voltage are observed in this material under illumination. Most of the work using bismuth ferrite as a photovoltaic material has been reported on its thin-film form, but in a few reports researchers have formed bilayer structures with other materials like polymers, graphene and other semiconductors. In one report, a p-i-n heterojunction was formed with bismuth ferrite nanoparticles along with two oxide-based carrier-transporting layers. In spite of such efforts, the power conversion efficiency obtained from bismuth ferrite is still very low.

References
https://doi.org/10.1016/j.jallcom.2011.05.106

Bismuth compounds Iron(III) compounds Oxides Perovskites
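As an illustration of the 1:1 molar mixing described under Solid state synthesis, the reagent masses for a chosen product mass can be computed as follows. This is a sketch using standard atomic masses; it ignores bismuth loss to volatilization, which in practice may be compensated with excess Bi2O3.

```python
# Reagent masses for the 1:1 molar Bi2O3 + Fe2O3 -> 2 BiFeO3 solid-state
# reaction described above, for a chosen product mass.
M_Bi, M_Fe, M_O = 208.98, 55.845, 15.999          # g/mol
M_Bi2O3 = 2 * M_Bi + 3 * M_O                      # ~465.96 g/mol
M_Fe2O3 = 2 * M_Fe + 3 * M_O                      # ~159.69 g/mol
M_BiFeO3 = M_Bi + M_Fe + 3 * M_O                  # ~312.82 g/mol

target_g = 10.0                                   # desired BiFeO3 mass
n_product = target_g / M_BiFeO3                   # mol of BiFeO3
# One mole of each oxide yields two moles of BiFeO3.
print(f"Bi2O3: {n_product / 2 * M_Bi2O3:.2f} g")  # ~7.45 g
print(f"Fe2O3: {n_product / 2 * M_Fe2O3:.2f} g")  # ~2.55 g
```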
Bismuth ferrite
[ "Chemistry" ]
1,352
[ "Oxides", "Salts" ]
7,358,025
https://en.wikipedia.org/wiki/819%20line
819-line was an analog monochrome TV system developed and used in France as television broadcasting resumed after World War II. Transmissions started in 1949 and remained active up to 1985, although limited to France, Belgium, Luxembourg and Monaco. It is associated with CCIR System E and System F.

History
When Europe resumed TV transmissions after World War II (i.e. in the late 1940s and early 1950s), most countries standardized on 625-line television systems. The two exceptions were the British 405-line system, which had already been introduced in 1936, and the French 819-line system. During the 1940s René Barthélemy had already reached 1,015 lines and even 1,042 lines. On November 20, 1948, François Mitterrand, the then Secretary of State for Information, decreed a broadcast standard of 819 lines developed by Henri de France; broadcasting began at the end of 1949 in this higher-definition format.

It was used in France by TF1, and in Monaco by Tele Monte Carlo. Some 819-line TV sets were available, like the Grammont 504-A-31 from 1951 and the Philips 14TX100 multi-standard 625/819-line TV from 1952. The system was also adopted (with limited bandwidth, affecting image resolution) in 1953 in Belgium by RTB and in 1955 in Luxembourg by Télé-Luxembourg. Broadcasts were discontinued in Belgium in February 1968, and in Luxembourg in September 1971. Despite some attempts to create a color SECAM version of the 819-line system, France gradually abandoned the system in favor of the Europe-wide standard of 625 lines, with the final 819-line transmissions taking place in Paris from the Eiffel Tower on 19 July 1983. Tele Monte Carlo in Monaco was the last broadcaster to transmit 819-line television, closing down its transmitter in 1985.

Technical details
This was arguably the world's first high-definition television system and, by today's standards, it could be called 736i (as it had 737 active lines, one of which was composed of two halves), with a maximum theoretical resolution of 408×368 line pairs (which in digital terms can be expressed as broadly equivalent to 816×736 pixels) at a 4:3 aspect ratio. By comparison with modern digital standards, 720p is 1,280×720 pixels, of which the 4:3 portion would be 960×720 pixels, while PAL DVDs have a resolution of 720×576 pixels. The test cards used with the system had resolution gratings that went up to 900 TV lines. However, the theoretical picture quality far exceeded the capabilities of the analogue equipment of its time, and each 819-line channel occupied a wide 14 MHz of VHF bandwidth. 819 lines were broadcast using two CCIR systems, System E and System F.

System E
The System E implementation provided very good (near-HDTV) picture quality, but at the cost of an uneconomical use of bandwidth. A 625/50 signal providing the same clarity as an 819-line image, but matted down to 4:3 with the same number of lines, would still need nearly 6 MHz for the vision carrier alone (versus the typical 5 to 6 MHz in actual use), and a 525/60 signal would need 5 MHz (versus the typical 4.2 MHz), although a 405/50 transmission could get away with only 2.5 MHz (typically 3 MHz, as System A made no allowance for the Kell factor and thus had a "narrow pixel"/"tall line" appearance). Thus even an unusually crisp "standard" definition (or slightly soft 405-line) image only needed half, or even one-quarter, the vision bandwidth of the 819-line system to give a "balanced" appearance, despite the lower overall resolution still seeming perfectly clear on the more affordable small-screen receivers often used in the pre-color era.
With the usual additions of a sound carrier and vestigial sideband, the result was a combined signal that demanded approximately two to three times the bandwidth of more moderately specified standards, even when colour was added to them (as the color subcarrier resides within the luma signal space).

System F
System F was an adapted 819-line system used in Belgium and Luxembourg as an answer to the bandwidth problem, using only half the original vision bandwidth and approximately half the sound carrier offset. It allowed French 819-line programming to squeeze into the 7 MHz VHF broadcast channels used in those neighboring countries, albeit with a substantial loss of horizontal resolution (408×737 effective); although this still offered approximately twice the actual clarity of 405-line System A (twice the lines, roughly the same horizontal definition), the contrast between vertical and horizontal resolution would have made it seem perceptually softer than a 625-line signal with the same bandwidth. Use of System F was discontinued in Belgium in February 1968, and in Luxembourg in September 1971.

Countries and territories that used the 819-line system
This is a list of nations and territories that used the 819-line system for television broadcasting:
France (TF1)
Belgium (RTB)
Luxembourg (Télé-Luxembourg)
Monaco (Tele Monte Carlo)
(prior to independence)
(before 1957)

See also
CCIR System E
CCIR System F

References

Television technology
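The horizontal-resolution figures quoted under Technical details can be sanity-checked with simple arithmetic. The following sketch is illustrative only: the 10 MHz vision bandwidth and the ~40 µs active line time are assumed round numbers used for the estimate, not quoted System E parameters.

```python
# Rough check of the horizontal-resolution figures quoted above.
# The vision bandwidth and active line time below are assumptions
# used for illustration.
lines_total, frame_rate = 819, 25
line_time = 1 / (lines_total * frame_rate)        # ~48.8 us per line
active_line_time = 40e-6                          # assumed, after blanking
vision_bandwidth = 10e6                           # Hz, assumed

# Each cycle of video bandwidth can render one light/dark pixel pair.
h_pixels = 2 * vision_bandwidth * active_line_time
print(f"total line time: {line_time * 1e6:.1f} us")
print(f"max horizontal picture elements: {h_pixels:.0f}")  # ~800
```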
819 line
[ "Technology" ]
1,084
[ "Information and communications technology", "Television technology" ]
11,985,661
https://en.wikipedia.org/wiki/Finitely%20generated%20algebra
In mathematics, a finitely generated algebra (also called an algebra of finite type) is a commutative associative algebra $A$ over a field $K$ where there exists a finite set of elements $a_1, \dots, a_n$ of $A$ such that every element of $A$ can be expressed as a polynomial in $a_1, \dots, a_n$, with coefficients in $K$.

Equivalently, there exist elements $a_1, \dots, a_n \in A$ such that the evaluation homomorphism at $\mathbf{a} = (a_1, \dots, a_n)$,

$\phi_{\mathbf{a}} \colon K[X_1, \dots, X_n] \to A,$

is surjective; thus, by applying the first isomorphism theorem, $A \cong K[X_1, \dots, X_n] / \ker(\phi_{\mathbf{a}})$.

Conversely, for any ideal $I \subseteq K[X_1, \dots, X_n]$, the algebra $A = K[X_1, \dots, X_n] / I$ is a $K$-algebra of finite type; indeed, any element of $A$ is a polynomial in the cosets $a_i = X_i + I$ with coefficients in $K$. Therefore, we obtain the following characterisation of finitely generated $K$-algebras:

$A$ is a finitely generated $K$-algebra if and only if it is isomorphic as a $K$-algebra to a quotient ring of the type $K[X_1, \dots, X_n] / I$ by an ideal $I$.

If it is necessary to emphasize the field K then the algebra is said to be finitely generated over K. Algebras that are not finitely generated are called infinitely generated.

Examples
The polynomial algebra $K[x_1, \dots, x_n]$ is finitely generated. The polynomial algebra in countably infinitely many generators is infinitely generated.
The field $K(x)$ of rational functions in one variable over an infinite field $K$ is not a finitely generated algebra over $K$. On the other hand, $K(x)$ is generated over $K$ by a single element, $x$, as a field.
If $L/K$ is a finite field extension then it follows from the definitions that $L$ is a finitely generated algebra over $K$.
Conversely, if $L/K$ is a field extension and $L$ is a finitely generated algebra over $K$ then the field extension is finite. This is called Zariski's lemma. See also integral extension.
If $G$ is a finitely generated group then the group algebra $KG$ is a finitely generated algebra over $K$.

Properties
A homomorphic image of a finitely generated algebra is itself finitely generated. However, a similar property for subalgebras does not hold in general.
Hilbert's basis theorem: if A is a finitely generated commutative algebra over a Noetherian ring then every ideal of A is finitely generated, or equivalently, A is a Noetherian ring.

Relation with affine varieties
Finitely generated reduced commutative algebras are basic objects of consideration in modern algebraic geometry, where they correspond to affine algebraic varieties; for this reason, these algebras are also referred to as (commutative) affine algebras. More precisely, given an affine algebraic set $V \subseteq \mathbb{A}^n$ we can associate a finitely generated $K$-algebra

$\Gamma(V) := K[x_1, \dots, x_n] / I(V)$

called the affine coordinate ring of $V$; moreover, if $\phi \colon V \to W$ is a regular map between the affine algebraic sets $V$ and $W$, we can define a homomorphism of $K$-algebras

$\Gamma(\phi) \equiv \phi^* \colon \Gamma(W) \to \Gamma(V), \quad \phi^*(f) = f \circ \phi;$

then, $\Gamma$ is a contravariant functor from the category of affine algebraic sets with regular maps to the category of reduced finitely generated $K$-algebras: this functor turns out to be an equivalence of categories and, restricting to affine varieties (i.e. irreducible affine algebraic sets), an equivalence between affine varieties and finitely generated integral $K$-algebras.

Finite algebras vs algebras of finite type
We recall that a commutative $R$-algebra $A$ is a ring homomorphism $\phi \colon R \to A$; the $R$-module structure of $A$ is defined by

$\lambda \cdot a := \phi(\lambda)\, a, \quad \lambda \in R,\ a \in A.$

An $R$-algebra $A$ is called finite if it is finitely generated as an $R$-module, i.e. there is a surjective homomorphism of $R$-modules

$R^{\oplus n} \twoheadrightarrow A.$

Again, there is a characterisation of finite algebras in terms of quotients:

An $R$-algebra $A$ is finite if and only if it is isomorphic to a quotient $R^{\oplus n} / M$ by an $R$-submodule $M \subseteq R^{\oplus n}$.

By definition, a finite $R$-algebra is of finite type, but the converse is false: the polynomial ring $R[X]$ is of finite type but not finite. However, if an $R$-algebra is of finite type and integral, then it is finite.
More precisely, $B$ is a finitely generated $A$-module if and only if $B$ is generated as an $A$-algebra by a finite number of elements integral over $A$.

Finite algebras and algebras of finite type are related to the notions of finite morphisms and morphisms of finite type.

References

See also
Finitely generated module
Finitely generated field extension
Artin–Tate lemma
Finite algebra
Morphism of finite type

Algebras Commutative algebra
Finitely generated algebra
[ "Mathematics" ]
829
[ "Mathematical structures", "Algebras", "Fields of abstract algebra", "Algebraic structures", "Commutative algebra" ]
11,988,606
https://en.wikipedia.org/wiki/Shale%20gouge%20ratio
Shale Gouge Ratio (typically abbreviated to SGR) is a mathematical algorithm that aims to predict the fault rock types for simple fault zones developed in sedimentary sequences dominated by sandstone and shale. The parameter is widely used in the oil and gas exploration and production industries to enable quantitative predictions to be made regarding the hydrodynamic behavior of faults.

Definition
At any point on a fault surface, the shale gouge ratio is equal to the net shale/clay content of the rocks that have slipped past that point. The SGR algorithm assumes complete mixing of the wall-rock components in any particular 'throw interval'. The parameter is a measure of the 'upscaled' composition of the fault zone. A sketch of the calculation is given below.

Application to hydrocarbon exploration
Hydrocarbon exploration involves identifying and defining accumulations of hydrocarbons that are trapped in subsurface structures. These structures are often segmented by faults. For a thorough trap evaluation, it is necessary to predict whether the fault is sealing or leaking to hydrocarbons and also to provide an estimate of how 'strong' the fault seal might be. The 'strength' of a fault seal can be quantified in terms of the subsurface pressure, arising from the buoyancy forces within the hydrocarbon column, that the fault can support before it starts to leak. When acting on a fault zone, this subsurface pressure is termed the capillary threshold pressure.

For faults developed in sandstone and shale sequences, the first-order control on capillary threshold pressure is likely to be the composition, in particular the shale or clay content, of the fault-zone material. SGR is used to estimate the shale content of the fault zone. In general, fault zones with higher clay content, equivalent to higher SGR values, can support higher capillary threshold pressures. On a broader scale, other factors also exert a control on the threshold pressure, such as the depth of the rock sequence at the time of faulting and the maximum burial depth. As maximum burial depth exceeds 3 km, the effective strength of the fault seal increases for all fault zone compositions.

References
Yielding, Needham & Freeman, 1997. American Association of Petroleum Geologists Bulletin, vol. 81, pp. 897–917.

See also
Fault gouge
Petroleum geology
Structural geology

Petroleum geology Geophysics Economic geology Seismology
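A minimal sketch of the calculation, assuming the commonly published form of the algorithm (SGR = Σ(Vshale × Δz) / throw × 100%, summed over the interval of section that has slipped past the point); the function and variable names are inventions of this example.

```python
# Minimal sketch of the shale gouge ratio at a point on a fault, following
# the usual published form SGR = sum(Vshale_i * dz_i) / throw * 100%,
# summed over the interval that has slipped past the point.
def shale_gouge_ratio(beds, throw):
    """beds  -- list of (thickness_m, v_shale) tuples for the slipped interval;
                thicknesses are assumed to add up to the fault throw.
    throw -- fault throw in metres."""
    clay_sum = sum(thickness * v_shale for thickness, v_shale in beds)
    return 100.0 * clay_sum / throw

# Example: 50 m of throw that has moved 20 m of clean sand (Vsh 0.1),
# 10 m of shale (Vsh 0.9) and 20 m of silty sand (Vsh 0.4) past the point.
print(f"SGR = {shale_gouge_ratio([(20, 0.1), (10, 0.9), (20, 0.4)], 50):.0f}%")
# SGR = 38% -- a moderately clay-rich fault gouge is predicted.
```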
Shale gouge ratio
[ "Physics", "Chemistry" ]
469
[ "Petroleum", "Petroleum geology", "Applied and interdisciplinary physics", "Geophysics" ]
11,988,917
https://en.wikipedia.org/wiki/Multimedia%20Web%20Ontology%20Language
Machine interpretation of documents and services in the Semantic Web environment is primarily enabled by (a) the capability to mark documents, document segments and services with semantic tags and (b) the ability to establish contextual relations between the tags with a domain model, which is formally represented as an ontology. Human beings use natural languages to communicate an abstract view of the world. Natural language constructs are symbolic representations of human experience and are close to the conceptual model that Semantic Web technologies deal with. Thus, natural language constructs have been naturally used to represent the ontology elements. This makes it convenient to apply Semantic Web technologies in the domain of textual information. In contrast, multimedia documents are perceptual recordings of human experience. An attempt to use a conceptual model to interpret the perceptual records gets severely impaired by the semantic gap that exists between the perceptual media features and the conceptual world. Notably, the concepts have their roots in the perceptual experience of human beings, and the apparent disconnect between the conceptual and the perceptual world is rather artificial. The key to semantic processing of multimedia data lies in harmonizing the seemingly isolated conceptual and perceptual worlds. Representation of domain knowledge needs to be extended to enable perceptual modeling, over and above the conceptual modeling that is already supported. The perceptual model of a domain primarily comprises observable media properties of the concepts. Such perceptual models are useful for semantic interpretation of media documents, just as the conceptual models help in the semantic interpretation of textual documents.

The Multimedia Ontology Language (MOWL) is an ontology representation language that enables such perceptual modeling. It assumes a causal model of the world, where observable media features are caused by underlying concepts. In MOWL, it is possible to associate different types of media features, in different media formats and at different levels of abstraction, with the concepts in a closed domain. The associations are probabilistic in nature, to account for inherent uncertainties in the observation of media patterns. The spatial and temporal relations between the media properties characterizing a concept (or event) can also be expressed using MOWL. Often the concepts in a domain inherit the media properties of some related concepts, such as a historic monument inheriting the color and texture properties of its building material. It is possible to reason with the media properties of the concepts in a domain to derive an observation model for a concept. Finally, MOWL supports an abductive reasoning framework using Bayesian networks, which is robust against imperfect observations of media data.
MOWL has been proposed as an ontology language that enables such perceptual modeling. While MOWL is a syntactic extension of OWL, it uses a completely different semantics, based on a probabilistic causal model of the world.

History
The W3C forum has undertaken the initiative of standardizing ontology representation for web-based applications. The Web Ontology Language (OWL), standardized in 2004 after maturing through XML(S), RDF(S) and DAML+OIL, is a result of that effort. Ontologies in OWL (and some of its predecessor languages) have been successfully used in establishing the semantics of text in specific application contexts. The concepts and properties in these traditional ontology languages are expressed as text, making an ontology readily usable for semantic analysis of textual documents. Semantic processing of media data calls for perceptual modeling of domain concepts with their media properties.

Key features
Syntactically, MOWL is an extension of OWL. These extensions enable:
Definition of media properties following the MPEG-7 media description model.
Probabilistic association of media properties with the domain concepts.
Formal semantics for the media properties to enable reasoning.
Formal semantics for spatio-temporal relations across media objects and events.

MOWL is accompanied by reasoning tools that support:
Construction of a model of observation for a concept in multimedia documents with expected media properties.
Probabilistic (Bayesian) reasoning for concept recognition with the model of observation.

See also
Large Scale Concept Ontology for Multimedia
Ontology for Media Resources

Bibliography
H Ghosh, S Chaudhury and A Mallik. Ontology for multimedia applications. IEEE Intelligent Informatics Bulletin. 14(1). December 2013.
A Mallik, H Ghosh, G Harit and S Chaudhury. MOWL: An Ontology Representation Language for Web based Multimedia Applications. ACM Transactions of Multimedia Computing, Communications and Applications (TOMCCAP). 10(1). December 2013.
S Ajmani, H Ghosh, A Mallik and S Chaudhury. An ontology based personalized garment recommendation system. Workshop on Personalization, Recommender Systems and Social Media. Web Intelligence. USA, Nov 17–20, 2013.
A Mallik, S Chaudhury and H Ghosh. Nrityakosha: Preserving the Intangible Heritage of Indian Classical Dance. In ACM Journal of Computing and Cultural Heritage. 4(3), December 2011.
A Malik, S Chaudhury, H Ghosh, Preservation of Intangible Heritage: A case-study of Indian Classical Dance. In eHeritage 2010: 2nd ACM Workshop on eHeritage and Digital Art Preservation [ACM Multimedia Conference], October 2010.
S Chaudhury and H Ghosh. Ontology based access to heritage artefacts on the web. In Multimedia Information Extraction and Digital Heritage Preservation. Ed. B.B. Chaudhuri and U. Munshi. World Scientific Pub Co. Inc., Mar. 2011.
H. Ghosh, G. Harit and S. Chaudhury. Using ontology for building distributed digital libraries with multimedia contents. World Digital Library, 1(2), Dec 2008, pp. 83–100.
S. Wattamwar and H. Ghosh. Spatio-Temporal Query for Multimedia Database. Workshop on Multimedia Semantics. ACM Multimedia Conference 2008, Vancouver (Canada), October 2008.
H. Ghosh, P. Poornachandra, A. Mallik and S. Chaudhury. Learning Ontology for Personalized Video Retrieval. International Workshop on Many Faces of Multimedia Semantics (WMS07), ACM Multimedia Conference, Augsberg (Germany), September 2007.
H. Ghosh, S. Chaudhury, K. Kashyap and B. Maiti. Ontology Specification and Integration for Multimedia Applications. In Ontologies in the Context of Information Systems, Ed. R. Sharman, R. Kishore and R. Ramesh. Springer, 2007, pp. 265–296.
H. Ghosh, G. Harit and S. Chaudhury. Ontology based interaction with multimedia collections. International Conference on Digital Libraries, New Delhi, 2006.
G. Harit, S. Chaudhury and H. Ghosh. Using Multimedia Ontology for generating conceptual annotations and hyperlinks in video collections. International Conference on Web Intelligence, Hong Kong, 2006.
T. Karthik, S. Chaudhury and H. Ghosh. Specifying Spatio-Temporal Relations in Multimedia Ontologies. International Conference of Pattern Recognition and Machine Intelligence, Kolkata, 2005.
H. Ghosh and S. Chaudhury.
Distributed and Reactive Query Planning in R-MAGIC: An Agent-based Multimedia Retrieval System. IEEE Trans KDE, 16(9), Sep 2004. H. Ghosh, N. Rajarathnam and S. Chaudhury. Knowledge Representation for Web-based Services in a Multi-cultural Environment. IEEE International Workshop on Website Evolution (WSE-2001), Florence, Nov 2001. Multimedia Semantic Web Ontology languages
Multimedia Web Ontology Language
[ "Technology" ]
1,577
[ "Multimedia" ]
11,989,095
https://en.wikipedia.org/wiki/Latent%20semantic%20mapping
Latent semantic mapping (LSM) is a data-driven framework for modeling globally meaningful relationships implicit in large volumes of (often textual) data. It is a generalization of latent semantic analysis (LSA). In information retrieval, LSA enables retrieval on the basis of conceptual content, instead of merely matching words between queries and documents.

LSM was derived from earlier work on latent semantic analysis, which has three main characteristics:
Discrete entities, usually in the form of words and documents, are mapped onto continuous vectors,
the mapping involves a form of global correlation pattern, and
dimensionality reduction is an important aspect of the analysis process.

These constitute generic properties, and have been identified as potentially useful in a variety of different contexts. This usefulness has encouraged great interest in LSM. The intended product of latent semantic mapping is a data-driven framework for modeling relationships in large volumes of data. A minimal illustration of these three characteristics is sketched below.

Mac OS X v10.5 and later includes a framework implementing latent semantic mapping.

See also
Latent semantic analysis

Notes

References

Information retrieval techniques Natural language processing
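The three characteristics listed above can be illustrated with a truncated singular value decomposition, the core operation of latent semantic analysis. The term-document matrix below is invented for the example; this is a sketch of the general idea, not of Apple's framework API.

```python
import numpy as np

# Toy illustration: words and documents are mapped to continuous vectors
# via a global co-occurrence pattern (the SVD of the whole matrix) with
# dimensionality reduction. The 4-term x 3-document count matrix is invented.
A = np.array([[2, 0, 1],
              [1, 0, 0],
              [0, 3, 1],
              [0, 1, 2]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                  # keep only the top-k latent dimensions
term_vectors = U[:, :k] * s[:k]        # one continuous vector per word
doc_vectors = Vt[:k, :].T * s[:k]      # one continuous vector per document

print(term_vectors.round(2))
print(doc_vectors.round(2))
```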
Latent semantic mapping
[ "Technology" ]
220
[ "Natural language processing", "Computing stubs", "Natural language and computing" ]
11,989,315
https://en.wikipedia.org/wiki/Medium-density%20housing
Medium-density housing is a term used within urban planning and academic literature to refer to a category of residential development that falls between detached suburban housing and large multi-story buildings. There is no single definition of medium-density housing, as its precise definition tends to vary between jurisdictions. Scholars, however, have found that medium-density housing ranges from about 25 to 80 dwellings per hectare, although it most commonly sits between 30 and 40 dwellings per hectare. Typical examples of medium-density housing include duplexes, triplexes, townhouses, row homes, detached homes with garden suites, and walk-up apartment buildings. In Australia the density of standard suburban residential areas has traditionally been between 8 and 15 dwellings per hectare. In New Zealand medium-density development is defined as four or more units with an average density of less than 350 m2 per unit. Such developments typically consist of semi-attached and multi-unit housing (also known as grouped housing) and low-rise apartments. In the United States, medium-density housing is usually referred to as middle-sized or cluster development that fits between neighborhoods of single-family homes and high-rise apartments. This kind of development is usually intended to bridge the gap between low- and high-density neighborhoods. Because this kind of housing refers to density specifically, the type of building or number of units can vary. Medium-density housing in America has historically been perceived as undesirable due to the affordable nature of the housing, which attracts low-income residents, and its perceived breach of the established suburban lifestyle. The various styles of medium-density housing are now being considered as more sustainable development options to help solve the housing crisis in America.

Characteristics
Medium-density housing is commonly identified by how it contrasts with both suburban development and high-density development. Suburbs are characterized by large lot sizes, generous setbacks from the street, low density, and single uses. High-density development, such as high-rise apartment towers, has very high density with minimal setbacks and is located near a variety of other land uses and transit connections. In contrast, medium-density development sits between these two extremes. Buildings are usually no taller than 4 stories: shorter than high-rises, but with smaller setbacks and individual lots than suburban areas. Most often, medium-density housing provides multiple housing units within a shared structure. These buildings tend to share common infrastructure such as party walls, water mains, parking areas, and green space. Due to the sharing of infrastructure and the co-location of multiple units in a single building, medium-density housing tends to have lower per-unit construction costs than single-family homes. Lower construction costs result in lower housing prices, meaning that medium-density housing is often more affordable than a detached home. Many have suggested that increasing the supply of medium-density housing, known as the Missing Middle, is crucial to improving housing affordability in North America. Medium-density housing allows for more compact development, meaning distances between destinations are shortened. As a result, areas of medium density are more likely to be mixed-use, with easy access to shopping and services.
Common features of medium-density housing include:

Close proximity to community services and amenities
Efficient use of land, resources, and infrastructure
Small to medium footprints
Smaller, well-designed units
Simple construction
Reduced parking
A sense of community
Greater affordability

History

United States

In the U.S., most medium-density or middle-sized housing was built between the 1870s and 1940s in response to the need for denser housing near jobs. Examples include the streetcar suburbs of Boston, which included more two-family and triple-decker homes than single-family homes, and areas such as Brooklyn, Baltimore, Washington, D.C., and Philadelphia, which feature an abundance of row houses. This type of housing, once an affordable option for rental or homeownership, has turned into luxury development due to the rising land and construction costs of nearby developments.

Before World War I, the garden city movement had become an increasingly popular method for planning neighborhoods. As the U.S. began experiencing a housing crisis for war workers, it undertook mass production of housing that followed the garden city form, which greatly influenced development patterns across the United States until the Great Depression. During the Great Depression, the U.S. government passed the National Housing Act of 1934, creating the Federal Housing Administration with the goal of providing more federally backed loans so Americans could purchase homes. This contributed to the white flight of the 1940s, as many White Americans were able to move from urban centers to homes on the outskirts of the city and to purchase cars. The resulting suburbanization of America increased home sizes, land consumption, and automobile use, and contributed to suburban sprawl. Neighborhoods were no longer built to a human scale but rather to accommodate larger developments: in the suburbs this meant larger single-family homes and wider roads for cars, and in the city, high-rise apartments.

In the 1960s, architects identified a stark difference between neighborhoods created by high-rise development and those created by suburban sprawl, and recognized a need for more medium-density or middle-sized housing to bridge the gap between cities and suburbs. Architects and developers started building cluster housing to address this gap, but these developments were not marketed toward low-income residents in need of housing. Due to the recession of the 1970s, President Nixon issued a moratorium on government funding for low-income housing, and medium-density or cluster development was framed by television programs and newspapers as an undesirable but necessary response to the housing crisis. The established suburbs of the postwar era had created distinctions between home and work life and distanced residents from their neighbors. The introduction of medium-density housing into established suburbs was blocked by exclusionary single-family zoning and viewed as a breach of the family fundamentals associated with suburban living. Medium-density, cluster, or middle-sized housing was dismissed as an inadequate, makeshift substitute for those who could not afford suburban living. This perception is thought to have been fueled by irrational fears of density and a desire to keep low-income residents out of suburban neighborhoods, and it led to the decline of medium or middle-sized housing in America, now referred to as Missing Middle Housing.
Australia

Many traditional types of housing developed before car-based cities were built at comparable densities, such as the terraced (row) or courtyard housing found in many parts of the world. The inner suburbs of many Australian cities, and the activity centres developed during the late Victorian suburban boom, contain examples of medium-density housing. Since the 1960s, many Australian states have encouraged urban consolidation policies that have facilitated the construction of medium-density housing.

The debate around medium-density housing arose during the garden suburbs movement. The first studies of medium-density housing were conducted during the post-war housing boom of the 1960s and focused on housing consumption rather than sustainability and affordability. In the 1970s, further studies investigated barriers to producing medium-density housing and attributed them to planning. Studies in the 1980s and 1990s focused more on perceptions of medium-density housing and how it is designed: despite positive perceptions from those who actually lived in it, people living in less dense housing perceived it negatively.

New Zealand

Housing in New Zealand has historically favored semi-rural or suburban densities, and the country has experienced extensive suburban sprawl. Several reports have highlighted the need for medium-density housing in New Zealand as a means of providing affordable, sustainable housing.

Criticism

The design of medium-density housing requires careful consideration of urban design principles. In some cases, urban consolidation policies have allowed the demolition of existing low-density housing across established residential suburbs and its replacement with various forms of medium-density dwellings. As a result, many medium-density developments have been controversial over the last 20–30 years because of their perceived negative impacts on the neighborhood character of established residential areas. In Australia, state and local governments have placed increasing policy emphasis on regulating the design of new medium-density developments, for example through the Victorian government's ResCode, released in 2001, and the metropolitan strategy Melbourne 2030, which seeks to confine such housing to activity centres.

In America, restrictive zoning and "no-growth" ordinances stop cities and towns from densifying their neighborhoods with medium-density or middle-sized housing. Rezoning a city or town can be time-consuming and costly, and it remains susceptible to community pushback by NIMBYs. Critics of "goldilocks density", a term coined by Lloyd Alter, argue that medium-density housing is not a blanket solution to the housing crises different cities face, because each city will need to take a different approach.

See also

Urban density
Missing middle housing
Green building
Affordable housing
Save Our Suburbs
Transit-oriented development
New Urbanism
Subsidized housing in the United States
Urban sprawl
Not In My Backyard movement
Medium-density housing
[ "Engineering" ]
1,827
[ "Urban planning", "Architecture" ]