id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
40,247 | https://en.wikipedia.org/wiki/Pulsed%20inductive%20thruster | A pulsed inductive thruster (PIT) is a form of ion thruster, used in spacecraft propulsion. It is a plasma propulsion engine using perpendicular electric and magnetic fields to accelerate a propellant with no electrode.
Operation
A nozzle releases a puff of gas which spreads across a flat spiral induction coil of wire about 1 meter across. A bank of capacitors then discharges a high-voltage pulse of tens of kilovolts, lasting 10 microseconds, into the coil, generating a radial magnetic field. This induces a circular electric field in the gas, ionizing it and causing the charged particles (free electrons and ions) to revolve in the direction opposite to the original pulse of current. Because the motion of this induced current flow is perpendicular to the magnetic field, the plasma is accelerated out into space by the Lorentz force at a high exhaust velocity (10 to 100 km/s).
Advantages
Unlike an electrostatic ion thruster which uses an electric field to accelerate only one species (positive ions), a PIT uses the Lorentz body force acting upon all charged particles within a quasi-neutral plasma. Unlike most other ion and plasma thrusters, it also requires no electrodes (which are susceptible to erosion) and its power can be scaled up simply by increasing the number of pulses per second. A 1-megawatt system would pulse 200 times per second.
Pulsed inductive thrusters can maintain constant specific impulse and thrust efficiency over a wide range of input power levels by adjusting the pulse rate to maintain a constant discharge energy per pulse. PITs have demonstrated efficiencies greater than 50%.
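Because throttling is achieved by changing the pulse rate while keeping the energy per pulse fixed, the scaling is a simple product of the two quantities. The sketch below is a minimal illustration of that relationship, assuming the figures quoted above (1 MW at 200 pulses per second, i.e. about 5 kJ per pulse); the function and variable names are illustrative, not from the source.

```python
# Minimal sketch (assumed example values): average input power of a pulsed inductive
# thruster scales linearly with pulse rate when discharge energy per pulse is held constant.

def average_power(energy_per_pulse_j: float, pulses_per_second: float) -> float:
    """Average electrical input power in watts."""
    return energy_per_pulse_j * pulses_per_second

energy_per_pulse = 5_000.0          # J per pulse (assumed: 1 MW / 200 pulses per second)
for rate in (50, 100, 200):         # pulses per second
    p = average_power(energy_per_pulse, rate)
    print(f"{rate:3d} pulses/s -> {p / 1e6:.2f} MW average power")
# Since every pulse is identical, specific impulse and efficiency stay roughly constant
# while average power is throttled by the pulse rate alone.
```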
Pulsed inductive thrusters can use a wide range of gases as a propellant, such as water, hydrazine, ammonia, argon, or xenon, among many others. This flexibility has led to suggestions that PITs be used for Mars missions: while orbiting the planet, an orbiter could refuel by scooping CO2 from the atmosphere of Mars, compressing the gas and liquefying it into storage tanks for the return journey or for another interplanetary mission.
Developments
Early development began with fundamental proof-of-concept studies performed in the mid-1960s. NASA has conducted experiments on the device since the early 1980s.
PIT Mk V, VI and VII
NGST (Northrop Grumman Space Technology), as a contractor for NASA, built several experimental PITs.
Research efforts during the first period (1965–1973) were aimed at understanding the structure of an inductive current sheet and evaluating different concepts for propellant injection and preionization.
In the second period (1979–1988), the focus shifted more towards developing a true propulsion system and increasing the performance of the base design through incremental design changes, with the construction of the Mk I and Mk IV prototypes.
The third period (1991–present) began with the introduction of a new PIT thruster design known as the Mk V. It evolved into the Mk VI, developed to reproduce Mk V single-shot tests, which completely characterize thruster performance. It uses an improved coil of hollow copper tube construction and an improved propellant valve, but is electrically identical to the Mk V, using the same capacitors and switches. The Mk VII (early 2000s) has the same geometry as the Mk VI, but is designed for high pulse frequency and long-duration firing, with a liquid-cooled coil, longer-life capacitors, and fast, high-power solid-state switches. The goal for the Mk VII is to demonstrate up to 50 pulses per second at the rated efficiency and impulse bit at 200 kW of input power in a single thruster. The Mk VII design is the basis for the most recent NuPIT (nuclear-electric PIT).
The PIT has achieved relatively high performance in the laboratory environment, but it still requires additional advances in switching technology and energy storage before becoming practical for high-power in-space applications, which would also call for a nuclear-based onboard power source.
FARAD
FARAD, which stands for Faraday accelerator with radio-frequency assisted discharge, is a lower-power alternative to the PIT that has the potential for space operation using current technologies.
In the PIT, both propellant ionization and acceleration are performed by the high-voltage pulse of current in the induction coil, whereas FARAD uses a separate inductive RF discharge to preionize the propellant before it is accelerated by the current pulse. This preionization allows FARAD to operate at much lower discharge energies than the PIT (100 joules per pulse vs. 4 kilojoules per pulse) and allows for a reduction in the thruster's size.
References
Ion engines
Magnetic propulsion devices | Pulsed inductive thruster | Physics,Chemistry | 951 |
36,069,663 | https://en.wikipedia.org/wiki/Australasian%20Institute%20of%20Mining%20and%20Metallurgy | The Australasian Institute of Mining and Metallurgy (AusIMM) provides services to professionals engaged in all facets of the global minerals sector and is based in Carlton, Victoria, Australia.
History
The Institute had its genesis in 1893 with the formation in Adelaide of the Australasian Institute of Mining Engineers, drawing its inspiration from the success of the American Institute of Mining Engineers and some impetus from the Mine Managers Association of Broken Hill. Office-holders were drawn equally from South Australia and "The Hill", where the Institute established its headquarters.
This approach to the foundation of a federal organization was welcomed in the mining districts of other Australian colonies, and branches were formed in Broken Hill, the Thames Goldfield (New Zealand), Ballarat, and elsewhere. Succeeding annual conferences were held at Ballarat, Hobart, Broken Hill and other mining centres. The 1926 conference was held in Otago, New Zealand.
In 1896 its headquarters were moved from Broken Hill to Melbourne, and in June 1919 it adopted its present name.
In 1954 the institute applied for a royal charter, which was granted in 1955.
The AusIMM represents more than 15 500 members drawn from all sections of the industry and supported by a network of branches and societies in Australasia and internationally.
Member grades and post-nominals
Some notable members
AIME
Sir Henry Ayers foundation president, 1893
Uriah Dudley foundation general secretary 1893–1897
David Lauder Stirling (c. 1871 – 30 August 1949); president 1894, secretary 1906–1941 or later; also secretary, Victorian Chamber of Mines 1898–1945
H. W. Ferd Kayser (mine manager Mount Bischoff Tin Mining Company), vice-president 1894, president 1898, 1899
Alexander Montgomery (government geologist in New Zealand, Tasmania, and Western Australia), president 1895
Ernest Lidgey geological surveyor in Victoria; conducted Australia's first geophysical surveys; president 1901
Samuel Henry McGowan (c. 1845 – 13 May 1921), accountant specializing in gold mining companies, mayor of Bendigo 1899–1900; president 1902
F. Danvers Power, lecturer at Sydney University, president 1897, 1904.
Robert C. Sticht general manager, Mount Lyell Mining & Railway Company, president 1905, 1915, vice-president 1909
G. D. Delprat (manager of the Broken Hill mine), president 1906
Dr. Alfred William Howitt, C.M.G., F.G.S., the eminent naturalist, was president 1907
Frank A. Moss, (general manager of Kalgurli Gold Mines), president 1907
C. F. Courtney (general manager of the Sulphide Corporation), president 1908
Richard Hamilton, (general manager of the Great Boulder Proprietary mine), president 1909, vice-president 1910
G. A. Richard (of Mount Morgan, Queensland), president 1910
Herman Carl Bellinger from US; mine manager, Cobar 1909–1914, president 1912
James Hebbard (manager of the Central Mine, Broken Hill), president 1913
John Warren (mining) (manager of Block 10, Broken Hill), vice-president 1894, president 1902
Hyman Herman (director of the Victorian geological survey), joined 1897, president 1914, remained councillor to 1959.
Robert Silvers Black, (general manager of Kalgurli Gold Mines), president 1917
J. W. Sutherland metallurgist at Lake View Consols and Golden Horse Shoe gold mines; president 1918
Professor D. B. Waters of Otago, New Zealand, vice-president 1917, 1918 (absent for most of this period; he was with the New Zealand Tunnelling Company in France).
AIMM
R. W. Chapman, vice-president 1906, president 1920
Colin Fraser (later Sir Colin), president 1923
H. W. Gepp, later Sir Herbert William Gepp, president 1924
Ernest W. Skeats (professor of geology, University of Melbourne), vice-president 1924, president 1925
David Lauder Stirling, general secretary 1922–45
R. M. Murray (general manager, Mount Lyell Mining & Railway Company), president 1927
Alfred Stephen Kenyon, treasurer 1897, secretary 1906, president 1928
E. C. Andrews (New South Wales Government Geologist), president 1929
William Edward Wainwright (general manager of Broken Hill South), president 1919, 1930, vice-president 1916–18, 1933, 1934
William Harley Wainwright, son of W. E. Wainwright (chief metallurgist, BHP), life member
Essington Lewis (managing director of BHP) vice-president 1932, president 1935
Andrew Fairweather, president 1932 (succeeded W. E. Wainwright at Broken Hill South mine and as General Manager)
Professor J. Neill Greenwood (dean of Melbourne University Faculty of Applied Science), president 1936, 1937
Donald Yates, superintendent of Broken Hill Associated Smelters Pty., president 1937
Julius Kruttschnitt (general manager, Mount Isa Mines) president 1939
Oliver H. Woodward (general manager, North Mine, Broken Hill) active in tunnelling operations WWI, president 1940
Arthur H. P. Moline (1877–1965) (succeeded R. M. Murray as general manager, Mount Lyell, in 1944), president 1945
Asdruebal James Keast (general manager, Zinc Corporation; Australian Aluminium Production Commission 1951–55), president 1946, vice-president 1947
Frank R. Hockey / Francis Richard Hockey (general superintendent, BHP), president 1947, vice-president 1949, 1950
F. F. Espie / Frank Fancett Espie (general superintendent, Western Mining Corporation), president 1948
Godfrey Bernard O'Malley, vice-president 1943–46
Maurice Alan Edgar Mawby (director of exploration, Zinc Corporation, Limited), vice-president 1950, 1951, president 1953, 1954
Ian Munro McLennan (General Manager, BHP), president 1951
Beryl Elaine Jacka MBE, typist 1936; assistant general secretary 1945–52, secretary 1952–1976
Gordon Colvin Lindesay Clark CMG
See also
British
North of England Institute of Mining and Mechanical Engineers (known as the Mining Institute) founded 1852
Institution of Mining Engineers founded 1889, incorporating the Mining Institute above
Institution of Mining and Metallurgy founded 1892
Institute of Materials, Minerals and Mining merger of IMM and Institute of Materials in 2002.
US
American Institute of Mining, Metallurgical, and Petroleum Engineers (originally American Institute of Mining Engineers founded 1871)
References
1893 establishments in Australia
Engineering societies based in Australia
Organizations established in 1893
Metallurgical organizations
Mining organisations in Australia
Organisations based in Victoria (state)
Metallurgical industry of Australia | Australasian Institute of Mining and Metallurgy | Chemistry,Materials_science,Engineering | 1,345 |
37,011,244 | https://en.wikipedia.org/wiki/Conductor%20clashing | In an overhead power line, conductor clashing occurs when energized wires accidentally come into contact with each other. Overhead transmission systems typically use un-insulated bare conductors for reasons of weight and economy. When bare conductors touch, the resulting momentary short circuit or electric arc can cause disturbances to the electric power system, damage to the conductors, or fire. Conductor clashing may be caused by wind, ice, excess sag due to creep or thermal expansion due to sustained heavy loading, or by contact with animals or objects. Conductor clash is prevented by proper design and installation to anticipate the likely conditions of weather and load. The effects of clashing conductors can be mitigated by fuses or protective relays and circuit breakers to de-energize the shorted conductors. For some types of transmission line, it may be possible to automatically reclose a circuit breaker in expectation that the clash was a momentary problem, thus minimizing interruption of service to grid customers.
Causes
Heavy winds or gusts can often result in the unintended contact of conductors, particularly where power lines exhibit excessive sag or other structural conditions that permit conductors to come into close proximity.
Trees near power lines may break and drop branches onto the wires, increasing the potential for conductors to clash by bringing them together.
Vehicles may hit transmission towers or poles, and aircraft may become entangled in wires. Such collisions, often the result of accidents, can bring conductors together and have a cascading effect on the power system, leading to conductor clashing.
Seismic activity may displace transmission line support structures, disturbing the planned spacing of conductors and possibly producing a clash.
Vandalism targeted at power lines is another cause of conductor clashing: deliberately hurling objects at power lines can induce drooping and the subsequent collision of wires.
Process
When conductors clash, heat is produced, along with vaporization of conductor material, and the expulsion of metal particles. These ejected particles, often in the form of sparks, are then carried away by the wind.
The combustion aspect is driven by the release of energy in the form of an electrical arc (electrical breakdown of gas resulting in electrical discharge). Simultaneously, the conductor material erodes and vaporizes due to the intense heat generated by this arc. The process is significantly influenced by key parameters, including arc voltage, short-circuit current, and the duration of the arc. A higher arc voltage intensifies the energy of the electrical arc, while an increased short-circuit current leads to more substantial heat generation and vaporization of the conductor material. The duration of the arc plays a critical role, impacting the extent of material vaporization and potentially leading to molten or burning particles.
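The paragraph above names arc voltage, short-circuit current, and arc duration as the controlling parameters; a first-order estimate of the energy released treats them as constant and multiplies them together. The sketch below is only an illustration of that relationship, and every numeric value in it is an assumed example, not a measurement from the text.

```python
# Rough, illustrative estimate of clashing-arc energy, assuming constant arc voltage and
# fault current over the arc duration: E ~ V_arc * I_sc * t_arc. Values are assumed examples.

def arc_energy_joules(v_arc_volts: float, i_sc_amps: float, t_arc_seconds: float) -> float:
    """Approximate energy dissipated in the arc (joules)."""
    return v_arc_volts * i_sc_amps * t_arc_seconds

energy = arc_energy_joules(v_arc_volts=300.0, i_sc_amps=2_000.0, t_arc_seconds=0.1)
print(f"Approximate arc energy: {energy / 1_000:.0f} kJ")  # 300 V * 2 kA * 0.1 s = 60 kJ
```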
Contact between conductors may produce an electric arc with a bright flash, the emission of sparks, and a puff of white smoke. The intense heat of the arc causes the underlying metal to reach its boiling point and vaporize. When these vaporized metal particles come into contact with the air, they ignite and burn rapidly, forming Al2O3 (aluminum oxide) as small aerosol particles. These aerosol particles can reach temperatures anywhere from 930 K (Kelvin) to 2730 K and create the characteristic puff of smoke. When the oxide is in a molten state, the oxidation process proceeds rapidly, with the heat generated by oxidation offsetting heat losses through convection and radiation. These droplets will continue to burn until all the metal is consumed or until they reach the ground.
Effects
Fire ignition resulting from conductor clashing has been a recurring issue worldwide, with numerous instances occurring in various countries. Such incidents can lead to significant environmental damage, such as forest fires, as well as substantial financial losses and, in some cases, pose potential threats to human lives.
An example of a conductor clashing catastrophe occurred in Western Australia on December 2, 2004. A 19.1 kV (kilovolt) conductor became dislodged from a pole-mounted insulator at the first pole and subsequently clashed with the underslung running earth conductor approximately 200 meters away. This collision led to a flashover (an unintended electrical discharge), releasing hot metal particles (sparks) that ignited dry harvested stubble, which initiated the wildfire. Amid the fire, both conductors snapped, the first conductor ultimately succumbing to structural wear and the influence of northerly winds; when both conductors fell and made contact with the dividing fence, a further ignition occurred. It is worth noting that the property owner had previously reported a low-hanging power line conductor adjacent to the first pole. According to the property owner's estimate, roughly 468 hectares of land were burned.
References
Electric power distribution
Electrical phenomena | Conductor clashing | Physics | 964 |
47,634,858 | https://en.wikipedia.org/wiki/Meltwater%20pulse%201B | Meltwater pulse 1B (MWP1b) is the name used by Quaternary geologists, paleoclimatologists, and oceanographers for a period of either rapid or just accelerated post-glacial sea level rise that some hypothesize to have occurred between 11,500 and 11,200 years ago at the beginning of the Holocene and after the end of the Younger Dryas. Meltwater pulse 1B is also known as catastrophic rise event 2 (CRE2) in the Caribbean Sea.
Other named, postglacial meltwater pulses are known most commonly as meltwater pulse 1A0 (meltwater pulse 19ka), meltwater pulse 1A, meltwater pulse 1C, meltwater pulse 1D, and meltwater pulse 2. It and these other periods of proposed rapid sea level rise are known as meltwater pulses because the inferred cause of them was the rapid release of meltwater into the oceans from the collapse of continental ice sheets.
Sea level
There is considerable unresolved disagreement over the significance, timing, magnitude, and even existence of meltwater pulse 1B. It was first recognized by Richard G. Fairbanks in his coral reef studies in Barbados. From the analysis of data from cores of coral reefs surrounding Barbados, he concluded that during meltwater pulse 1B sea level rose rapidly over a period of about 500 years, about 11,300 years ago.
However, in 1996 and 2010, Bard and others published detailed analysis of data from cores from coral reefs surrounding Tahiti. They concluded that meltwater pulse 1B was, at best, just an acceleration of sea level rise at about 11,300 years ago and it was, at worst, not statistically different from a constant rate sea level rise between 11,500 and 10,200 years ago. They argued that meltwater pulse 1B was certainly not an abrupt jump in sea level, which they would consider to be a meltwater pulse. They argue that the rise in sea level estimated by Fairbanks from cores is an artifact created by differential tectonic uplift between different sides of a tectonic structure lying between the two Barbados cores used to identify meltwater pulse 1B and calculate its magnitude.
Other differing estimates of the magnitude of meltwater pulse 1B have been published. In 2010, Stanford and others found it to be "robustly expressed" as a multi-millennial interval of enhanced rates of sea-level rise between 11,500 and 8,800 years ago, with peak rates of rise of up to 25 mm/yr. In 2004, Liu and Milliman reexamined the original data from Barbados and Tahiti and reconsidered the mechanics and sedimentology of reef drowning by sea level rise. They concluded that meltwater pulse 1B occurred between 11,500 and 11,200 years ago, a 300-year interval during which sea level rose at a mean rate of around 40 mm/yr. Other studies have revised the estimated magnitude of meltwater pulse 1B downward.
Source(s) of meltwater pulse 1B
Given the disagreement over its timing, magnitude, and even existence, it has been very difficult to constrain the source of meltwater pulse 1B. In his modeling of global glacial isostatic adjustment, Peltier assumed that the predominant source for MWP-1B was the Antarctic Ice Sheet. However, no justification for this assumption is provided in his papers. In addition, Leventer and others argue that the timing of deglaciation in eastern Antarctica roughly coincides with the onset of meltwater pulse 1B and the Antarctic Ice Sheet is a likely source. Finally, McKay and others suggested that recession of the West Antarctic Ice Sheet may have supplied the meltwater needed to start meltwater pulse 1B.
However, later studies involving the surface exposure dating of glacial erratics, nunataks, and other formerly glaciated exposures using cosmogenic dating contradicted the above arguments and assumptions. These studies tentatively concluded that the actual amount of thinning of the East Antarctic Ice Sheet is too small, and likely too gradual and too late, to have contributed any significant amount of meltwater to meltwater pulse 1B. They also concluded that the ice sheet retreat and thinning accelerated for the West Antarctic Ice Sheet only after 7,000 years ago. Although other researchers have concluded that the abrupt decay of the Laurentide Ice Sheet might have been sufficient to have been responsible for meltwater pulse 1B, its sources remain an unresolved mystery. For example, recent research in West Antarctica found that sufficient deglaciation contemporaneous with meltwater pulse 1B occurred to readily explain this rapid period of global sea level rise.
Mississippi River superflood events MWF-5
A variety of paleoclimate and paleohydrologic proxies, which can be used to reconstruct the prehistoric discharge of the Mississippi River, can be found in the sediments of the Louisiana continental shelf and slope, including the Orca and Pygmy basins, within the Gulf of Mexico. These proxies have been used by Quaternary geologists, paleoclimatologists, and oceanographers to reconstruct both the duration and the discharge at the mouth of the prehistoric Mississippi River for the Late Glacial and postglacial periods, including the time of meltwater pulse 1B. The chronology of flooding events found by the study of cores on the Louisiana continental shelf and slope is in agreement with the timing of meltwater pulses. For example, meltwater pulse 1A in the Barbados coral record matches quite well with a group of two separate Mississippi River meltwater flood events, MWF-3 (12,600) and MWF-4 (11,900). In addition, meltwater pulse 1B in the Barbados coral record matches a cluster of four Mississippi River superflood events, MWF-5, that occurred between 9,900 and 9,100. In 2003, Aharon reported that flood event MWF-5 consists of four separate and distinct superfloods at 9,970–9,870; 9,740–9,660; 9,450–9,290; and 9,160–8,900. The discharge at the mouth of the Mississippi River during three of the four superfloods of MWF-5 is estimated to have varied between 0.07 and 0.08 sverdrups (a sverdrup is one million cubic meters per second). The superflood at 9,450–9,290 is estimated to have had a discharge of 0.10 sverdrups. This research also shows that the Mississippi superfloods of MWF-5 occurred during the Preboreal. The same research found an absence of either meltwater floods or superfloods discharging into the Gulf of Mexico from the Mississippi River during the preceding thousand years, which is known as the cessation event and corresponds with the Younger Dryas stadial.
The Pleistocene deposits blanketing the Louisiana Continental shelf and slope between the mouth of the Mississippi River and Orca and Pygmy basins largely consist of sediments transported down the Mississippi River mixed with variable additions of local biologically generated carbonate. Because of this, the provenance of the meltwater and superfloods can be readily inferred from the sediment's composition. The composition of the sediments brought into the Gulf of Mexico and deposited on the Louisiana continental shelf and slope during the superfloods of MWF-5 reflect an abrupt change in mineralogy, fossil content, organic matter, and amount after 12,900 years ago at the start of the Younger Dryas interval.
First, after 12,900 years ago, smectite-rich sediments from the Missouri River drainage are progressively and quickly replaced by sediments associated with the Great Lakes region and further south along the Mississippi River, as indicated by their clay mineralogy. Second, after 12,900 years ago, the overall quantity of sediment being transported down the Mississippi River abruptly decreases with a corresponding and significantly increased proportion of locally produced biologically generated carbonate and organic matter. Third, after 12,900 years ago, various analyses, e.g. C/N ratio and Rock–Eval Pyrolysis, indicate that the type of organic matter present changes from organic matter that was reworked from old formations by glacials to well-preserved Holocene organic matter that is mainly of marine origin. Finally, after 12,900 years ago, the presence of reworked nannofossils disappear from sediments accumulating on the Louisiana continental shelf and slope.
The above noted changes in the nature of accumulating sediments indicate that after the start of the Younger Dryas, the southern route for Laurentide Ice Sheet meltwater was largely blocked. On the rare occasions it could flow southward, glacial meltwater flowed through Lake Agassiz and sometimes the Great Lakes to the Mississippi River. As the water moved through Lake Agassiz or other proglacial lakes, these lakes completely trapped and removed any glacial outwash and the older, reworked organic material and reworked nannofossils that the outwash contained. As a result, the sediment carried by the Mississippi River after the start of the Younger Dryas consisted of illite- and chlorite-enriched sediments from the Great Lakes region that lacked any reworked nannofossils. These changes argue that the superfloods of MWF-5 which fed meltwater pulse 1B are related to either rare periods of southerly discharge of meltwater through Lake Agassiz, nonglacial periods of climate-enhanced discharge within the Mississippi River Basin, or a combination of both.
Antarctic iceberg discharge events
In case of the Antarctic Ice Sheet, an equivalent well-dated, high-resolution record of the discharge of icebergs from various parts of the Antarctic Ice Sheet for the past 20,000 years is also available. Research by Weber and others constructed a record from variations in the amount of iceberg-rafted debris versus time and other environmental proxies in two cores taken from the ocean bottom within Iceberg Alley of the Weddell Sea. The cores of ocean bottom sediments within Iceberg Alley provide a spatially integrated signal of the variability of the discharge of icebergs into the marine waters by the Antarctic Ice Sheet because it is a confluence zone in which icebergs calved from the entire Antarctic Ice Sheet drift along currents, converge, and exit the Weddell Sea to the north into the Scotia Sea.
Between 20,000 and 9,000 years ago, Weber and others documented eight well-defined periods of increased iceberg calving and discharge from various parts of the Antarctic Ice Sheet. Five of these periods, AID5 through AID2 (Antarctic Iceberg Discharge events), are comparable in duration and have a repeat time of about 800–900 years. The largest of the Antarctic Iceberg Discharge events is AID2. Its peak intensity at about 11,300 years ago, which is synchronous with meltwater pulse 1B in the Barbados sea-level record, is consistent with a significant Antarctic contribution to meltwater pulse 1B. The lack of a sea level response in the Tahiti coral record might indicate a regionally specific sea-level response to a deglaciation event only from the Pacific sector of the Antarctic Ice Sheet.
See also
Deglaciation
Holocene glacial retreat
Younger Dryas
References
External links
Gornitz, V. (2007) Sea Level Rise, After the Ice Melted and Today. Science Briefs, NASA's Goddard Space Flight Center. (January 2007)
Gornitz, V. (2012) The Great Ice Meltdown and Rising Seas: Lessons for Tomorrow. Science Briefs, NASA's Goddard Space Flight Center. (June 2012)
Liu, J.P. (2004) Western Pacific Postglacial Sea-level History., River, Delta, Sea Level Change, and Ocean Margin Research Center, Marine, Earth and Atmospheric Sciences, North Carolina State University, Raleigh, NC.
Glaciology
Oceanography
Paleoclimatology
Sea level
10th millennium BC | Meltwater pulse 1B | Physics,Environmental_science | 2,452 |
20,838,282 | https://en.wikipedia.org/wiki/UMA%20Acceleration%20Architecture | In computing, UMA Acceleration Architecture (UXA) is the reimplementation of the EXA graphics acceleration architecture of the X.Org Server developed by Intel. Its major difference with EXA is the use of GEM, replacing Translation Table Maps. In February 2009 it became clear that UXA would not be merged back into EXA.
Intel is transitioning from UXA to SNA.
Implementations
In May 2009 it was announced that Ubuntu would migrate their graphics acceleration for the Ubuntu 9.10 release to UXA.
See also
Direct Rendering Infrastructure
Mesa 3D
EGL
References
X-based libraries | UMA Acceleration Architecture | Technology | 127 |
480,178 | https://en.wikipedia.org/wiki/Infinitesimal%20rotation%20matrix | An infinitesimal rotation matrix or differential rotation matrix is a matrix representing an infinitely small rotation.
While a rotation matrix is an orthogonal matrix representing an element of SO(n) (the special orthogonal group), the differential of a rotation is a skew-symmetric matrix in the tangent space so(n) (the special orthogonal Lie algebra), which is not itself a rotation matrix.
An infinitesimal rotation matrix has the form I + dθ A,
where I is the identity matrix, dθ is vanishingly small, and A is a skew-symmetric matrix belonging to so(n).
For example, taking A to be one of the basis elements of so(3) gives an infinitesimal three-dimensional rotation about the corresponding coordinate axis.
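Written out explicitly (the original displayed equations are not preserved in this copy; the symbol names and the choice of the z-axis below are made purely for illustration):

```latex
% Standard form of an infinitesimal rotation matrix (reconstruction; symbol names assumed).
A_{\mathrm{inf}} = I + d\theta\, A, \qquad A \in \mathfrak{so}(n)\ \text{(skew-symmetric)}.
% Example: infinitesimal rotation about the z-axis, using the basis element L_z of so(3):
I + d\theta\, L_z =
\begin{pmatrix}
 1       & -d\theta & 0 \\
 d\theta &  1       & 0 \\
 0       &  0       & 1
\end{pmatrix}.
```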
The computation rules for infinitesimal rotation matrices are as usual except that infinitesimals of second order are routinely dropped. With these rules, these matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. It turns out that the order in which infinitesimal rotations are applied is irrelevant.
Discussion
An infinitesimal rotation matrix is a skew-symmetric matrix where:
As any rotation matrix has a single real eigenvalue, which is equal to +1, the corresponding eigenvector defines the rotation axis.
Its magnitude defines an infinitesimal angular displacement.
The shape of the matrix is as follows:
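The matrix that belongs at this point is not preserved in this copy; a standard reconstruction, with the three infinitesimal angular displacements written as dφx, dφy, dφz (component names assumed), is:

```latex
% General shape of an infinitesimal rotation matrix (reconstruction; component names assumed).
dA = I + d\Phi =
\begin{pmatrix}
 1           & -d\varphi_z &  d\varphi_y \\
 d\varphi_z  &  1          & -d\varphi_x \\
-d\varphi_y  &  d\varphi_x &  1
\end{pmatrix}.
```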
Associated quantities
Associated to an infinitesimal rotation matrix is an infinitesimal rotation tensor :
Dividing it by the time difference yields the angular velocity tensor:
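The two expressions referred to in this subsection are missing from this copy; a standard reconstruction (symbol names dΦ, Ω, and dt assumed) is:

```latex
% Infinitesimal rotation tensor and angular velocity tensor (reconstruction; symbols assumed).
d\Phi = dA - I =
\begin{pmatrix}
 0           & -d\varphi_z &  d\varphi_y \\
 d\varphi_z  &  0          & -d\varphi_x \\
-d\varphi_y  &  d\varphi_x &  0
\end{pmatrix},
\qquad
\Omega = \frac{d\Phi}{dt} =
\begin{pmatrix}
 0        & -\omega_z &  \omega_y \\
 \omega_z &  0        & -\omega_x \\
-\omega_y &  \omega_x &  0
\end{pmatrix}.
```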
Order of rotations
These matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. To understand what this means, consider two infinitesimal rotations about perpendicular axes, dA_x and dA_y (written out explicitly in the sketch below).
First, test the orthogonality condition, A Aᵀ = I. The product of dA_x with its own transpose differs from an identity matrix only by second-order infinitesimals, which are discarded here. So, to first order, an infinitesimal rotation matrix is an orthogonal matrix.
Next, examine the square of the matrix dA_x. Again discarding second-order effects, note that the angle simply doubles. This hints at the most essential difference in behavior, which we can exhibit with the assistance of a second infinitesimal rotation, dA_y.
Compare the products dA_x dA_y and dA_y dA_x. The two products differ only in terms containing dθ dφ; since dθ dφ is second-order, we discard it: thus, to first order, multiplication of infinitesimal rotation matrices is commutative. In fact, both products reduce to the same matrix, again to first order. In other words, dA_x dA_y = dA_y dA_x to first order.
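The explicit matrices for this computation are not preserved in this copy; the following is a standard reconstruction, with dθ and dφ the two infinitesimal angles and the axis assignments of dA_x and dA_y chosen for illustration.

```latex
% Infinitesimal rotations about the x- and y-axes and their products (reconstruction).
dA_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -d\theta \\ 0 & d\theta & 1 \end{pmatrix},
\qquad
dA_y = \begin{pmatrix} 1 & 0 & d\phi \\ 0 & 1 & 0 \\ -d\phi & 0 & 1 \end{pmatrix},
\qquad
dA_x\, dA_y =
\begin{pmatrix} 1 & 0 & d\phi \\ d\theta\, d\phi & 1 & -d\theta \\ -d\phi & d\theta & 1 \end{pmatrix},
\qquad
dA_y\, dA_x =
\begin{pmatrix} 1 & d\theta\, d\phi & d\phi \\ 0 & 1 & -d\theta \\ -d\phi & d\theta & 1 \end{pmatrix}.
% Dropping the second-order entries d(theta) d(phi), both products equal
% I + d(theta) L_x + d(phi) L_y, so the two infinitesimal rotations commute to first order.
```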
This useful fact makes, for example, derivation of rigid body rotation relatively simple. But one must always be careful to distinguish (the first-order treatment of) these infinitesimal rotation matrices from both finite rotation matrices and from Lie algebra elements. When contrasting the behavior of finite rotation matrices in the Baker–Campbell–Hausdorff formula with that of infinitesimal rotation matrices, where all the commutator terms will be second-order infinitesimals, one finds a bona fide vector space. Technically, this dismissal of any second-order terms amounts to group contraction.
Generators of rotations
Suppose we specify an axis of rotation by a unit vector [x, y, z], and suppose we have an infinitely small rotation of angle Δθ about that vector. Expanding the rotation matrix as an infinite series and keeping only the first-order term, the rotation matrix ΔR is represented as ΔR = I + A Δθ, where A is the skew-symmetric matrix associated with the axis.
A finite rotation through angle θ about this axis may be seen as a succession of small rotations about the same axis. Approximating Δθ as θ/N, where N is a large number, a rotation of θ about the axis may be represented as R = (I + Aθ/N)^N, which tends to e^(Aθ) as N grows (see the numerical sketch below).
It can be seen that Euler's theorem essentially states that all rotations may be represented in this form. The product Aθ is the "generator" of the particular rotation, being the vector associated with the matrix A. This shows that the rotation matrix and the axis-angle format are related by the exponential function.
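A small numerical check of the limit described above can be written in a few lines; this sketch is illustrative only, and the axis, angle, and variable names are assumptions rather than anything from the article.

```python
# Illustrative check (assumed example values): a finite rotation as the limit of many small
# rotations, R(theta) = lim_{N->inf} (I + A*theta/N)^N, which equals the matrix exponential.
import numpy as np
from numpy.linalg import matrix_power

theta = 1.2                                    # rotation angle in radians (example value)
A = np.array([[0.0, -1.0, 0.0],                # skew-symmetric generator of rotations
              [1.0,  0.0, 0.0],                # about the z-axis (chosen for illustration)
              [0.0,  0.0, 0.0]])

R_exact = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])

for N in (10, 100, 10_000):
    R_approx = matrix_power(np.eye(3) + A * theta / N, N)
    print(N, np.max(np.abs(R_approx - R_exact)))   # the error shrinks roughly like 1/N
```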
One can derive a simple expression for the generator G. One starts with an arbitrary plane defined by a pair of perpendicular unit vectors a and b. In this plane one can choose an arbitrary vector x with perpendicular y. One then solves for y in terms of x and substituting into an expression for a rotation in a plane yields the rotation matrix R, which includes the generator .
To include vectors outside the plane in the rotation one needs to modify the above expression for R by including two projection operators that partition the space. This modified rotation matrix can be rewritten as an exponential function.
Analysis is often easier in terms of these generators, rather than the full rotation matrix. Analysis in terms of the generators is known as the Lie algebra of the rotation group.
Exponential map
Connecting the Lie algebra to the Lie group is the exponential map, which is defined using the standard matrix exponential series exp(A) = I + A + A^2/2! + A^3/3! + ... For any skew-symmetric matrix A, exp(A) is always a rotation matrix.
An important practical example is the 3×3 case. In the rotation group SO(3), it is shown that one can identify every A in so(3) with an Euler vector ω = θ u, where u is a unit magnitude vector.
By the properties of the identification, u is in the null space of A. Thus, u is left invariant by exp(A) and is hence a rotation axis.
Using Rodrigues' rotation formula in matrix form with ω = θ u, together with standard double-angle formulae, one obtains the matrix for a rotation around the axis u by the angle θ, written in half-angle form. For full detail, see exponential map SO(3).
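The formula itself is not preserved in this copy. Writing K for the skew-symmetric matrix of the unit axis u (so that K v = u × v), the standard statement, given here as a reconstruction in both the usual and half-angle forms, is:

```latex
% Rodrigues' rotation formula in matrix form (reconstruction; K = [u]_x assumed).
R = \exp(\theta K) = I + (\sin\theta)\, K + (1-\cos\theta)\, K^{2}
  = I + 2\cos\tfrac{\theta}{2}\sin\tfrac{\theta}{2}\, K + 2\sin^{2}\tfrac{\theta}{2}\, K^{2}.
```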
Notice that for infinitesimal angles the second-order terms can be ignored, so that R reduces to R ≈ I + θ K.
Relationship to skew-symmetric matrices
Skew-symmetric matrices over the field of real numbers form the tangent space to the real orthogonal group at the identity matrix; formally, the special orthogonal Lie algebra. In this sense, then, skew-symmetric matrices can be thought of as infinitesimal rotations.
Another way of saying this is that the space of skew-symmetric matrices forms the Lie algebra o(n) of the Lie group O(n). The Lie bracket on this space is given by the commutator: [A, B] = AB - BA.
It is easy to check that the commutator of two skew-symmetric matrices is again skew-symmetric: [A, B]^T = B^T A^T - A^T B^T = BA - AB = -[A, B].
The matrix exponential of a skew-symmetric matrix A is then an orthogonal matrix R = exp(A).
The image of the exponential map of a Lie algebra always lies in the connected component of the Lie group that contains the identity element. In the case of the Lie group O(n), this connected component is the special orthogonal group SO(n), consisting of all orthogonal matrices with determinant 1. So R = exp(A) will have determinant +1. Moreover, since the exponential map of a connected compact Lie group is always surjective, it turns out that every orthogonal matrix with unit determinant can be written as the exponential of some skew-symmetric matrix. In the particular important case of dimension 2, the exponential representation for an orthogonal matrix reduces to the well-known polar form of a complex number of unit modulus. Indeed, a 2×2 special orthogonal matrix has the form shown in the reconstruction below,
with a^2 + b^2 = 1. Therefore, putting a = cos θ and b = sin θ, it can be written as a matrix exponential,
which corresponds exactly to the polar form of a complex number of unit modulus.
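The 2×2 matrices referred to in this passage are missing from this copy; a standard reconstruction (with a, b, and θ as assumed symbol names) is:

```latex
% The 2x2 special orthogonal case and its exponential (polar) form (reconstruction).
R = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}, \quad a^{2}+b^{2}=1,
\qquad a = \cos\theta,\ b = \sin\theta,
\qquad
R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
  = \exp\!\left(\theta \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\right),
% mirroring the polar form e^{i theta} = cos(theta) + i sin(theta) of a unit-modulus complex number.
```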
The exponential representation of an orthogonal matrix of order n can also be obtained starting from the fact that in dimension n any special orthogonal matrix R can be written as R = Q S Q^T, where Q is orthogonal and S is a block-diagonal matrix with blocks of order 2, plus one of order 1 if n is odd; since each single block of order 2 is also an orthogonal matrix, it admits an exponential form. Correspondingly, the matrix S can be written as the exponential of a skew-symmetric block matrix Σ of the form above, S = exp(Σ), so that R = Q exp(Σ) Q^T = exp(Q Σ Q^T), the exponential of the skew-symmetric matrix Q Σ Q^T. Conversely, the surjectivity of the exponential map, together with the above-mentioned block-diagonalization for skew-symmetric matrices, implies the block-diagonalization for orthogonal matrices.
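As a quick sanity check of the relationship described in this section, the following sketch (illustrative only; the axis, angle, and variable names are assumptions) verifies numerically that the exponential of a skew-symmetric matrix is orthogonal with determinant +1 and, in three dimensions, matches Rodrigues' formula.

```python
# Minimal numerical check (illustrative, not from the article): the matrix exponential of a
# skew-symmetric matrix is orthogonal with determinant +1, and in three dimensions it
# reproduces Rodrigues' formula R = I + sin(theta) K + (1 - cos(theta)) K^2.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
u = rng.normal(size=3)
u /= np.linalg.norm(u)                 # random unit rotation axis (example)
theta = 0.9                            # example rotation angle in radians
K = np.array([[0.0, -u[2], u[1]],
              [u[2], 0.0, -u[0]],
              [-u[1], u[0], 0.0]])     # skew-symmetric "cross-product" matrix of u

R = expm(theta * K)
R_rodrigues = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

print(np.allclose(R @ R.T, np.eye(3)))        # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))      # True: determinant +1
print(np.allclose(R, R_rodrigues))            # True: matches Rodrigues' formula
```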
See also
Generators of rotations
Infinitesimal rotations
Infinitesimal rotation tensor
Infinitesimal transformation
Rotation group SO(3)#Infinitesimal rotations
Notes
References
Sources
Rotation
Mathematics of infinitesimals | Infinitesimal rotation matrix | Physics,Mathematics | 1,576 |
7,853,144 | https://en.wikipedia.org/wiki/Triethylammonium%20acetate | Triethylammonium acetate is a volatile salt, which is often used as an ion-pairing reagent in high-performance liquid chromatography separations of oligonucleotides. Since unadjusted triethylammonium acetate salt solutions contain neither a conjugate acid nor a conjugate base, they are not buffers.
References
Ammonium compounds
Acetates | Triethylammonium acetate | Chemistry | 86 |
4,965,295 | https://en.wikipedia.org/wiki/Sebacoyl%20chloride | Sebacoyl chloride (or sebacoyl dichloride) is a di-acyl chloride, with formula (CH2)8(COCl)2. A colorless oily liquid with a pungent odor, it is soluble in hydrocarbons and ethers. Sebacoyl chloride is corrosive; like all acyl chlorides, it hydrolyzes, evolving hydrogen chloride. It is less susceptible to hydrolysis though than shorter chain aliphatic acyl chlorides.
Preparation
Sebacoyl chloride can be prepared by reacting sebacic acid with an excess of thionyl chloride. Residual thionyl chloride can be removed by distillation.
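The overall transformation is the standard conversion of a carboxylic acid to an acyl chloride with thionyl chloride; written schematically for the diacid (stoichiometry shown for illustration):

HOOC–(CH2)8–COOH + 2 SOCl2 → ClOC–(CH2)8–COCl + 2 SO2 + 2 HCl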
Use
Sebacoyl chloride can be polymerized with hexamethylenediamine yielding nylon-6,10.
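The polycondensation with hexamethylenediamine can be written schematically as an idealized overall equation (shown for illustration):

n ClOC–(CH2)8–COCl + n H2N–(CH2)6–NH2 → [–CO–(CH2)8–CO–NH–(CH2)6–NH–]n + 2n HCl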
See also
Sebacic acid
Adipoyl chloride
References
Acyl chlorides
Monomers | Sebacoyl chloride | Chemistry,Materials_science | 186 |
59,849,415 | https://en.wikipedia.org/wiki/Lancet%20MMR%20autism%20fraud | In February 1998, a fraudulent research paper by physician Andrew Wakefield and twelve coauthors, titled "Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children", was published in the British medical journal The Lancet. The paper falsely claimed causative links between the measles, mumps, and rubella (MMR) vaccine and colitis and between colitis and autism. The fraud involved data selection, data manipulation, and two undisclosed conflicts of interest. It was exposed in a lengthy Sunday Times investigation by reporter Brian Deer, resulting in the paper's retraction in February 2010 and Wakefield being discredited and struck off the UK medical register three months later. Wakefield reportedly stood to earn up to US$43 million per year selling diagnostic kits for a non-existent syndrome he claimed to have discovered. He also held a patent to a rival vaccine at the time, and he had been employed by a lawyer representing parents in lawsuits against vaccine producers.
The scientific consensus on vaccines and autism is that there is no causal connection between MMR, or any other vaccine, and autism.
1998 The Lancet paper
In February 1998, a group led by Andrew Wakefield published a paper in the British medical journal the Lancet, supported by a press conference at the Royal Free Hospital in London, where the research was carried out. This paper reported on twelve children with developmental disorders referred to the hospital and described a constellation of bowel symptoms, as well as endoscopy and biopsy findings, that were said to be evidence of a new "syndrome" that Wakefield would later call "autistic enterocolitis". The paper described MMR vaccination as the "apparent precipitating event", tabulated the parents of eight of the twelve children as linking their developmental symptoms with MMR vaccination, suggested the connection between autism and the gastrointestinal pathologies was "real", and called for further research. But it admitted that the research did not "prove" an association between the MMR vaccine and autism.
At a press conference accompanying the paper's publication, later criticized as "science by press conference", Wakefield said that he thought it prudent to use single vaccines instead of the MMR triple vaccine until this could be ruled out as an environmental trigger. Wakefield said, "I can't support the continued use of these three vaccines given in combination until this issue has been resolved." In a video news release issued by the hospital to broadcasters in advance of the press conference, he called for MMR vaccine to be "suspended in favour of the single vaccines". In a BBC interview, Wakefield's mentor, Roy Pounder, who was not a coauthor, "admitted the study was controversial". He added: "In hindsight it may be a better solution to give the vaccinations separately ... When the vaccinations were given individually there was no problem." These suggestions were supported neither by Wakefield's coauthors nor any scientific evidence.
British television coverage of the press conference was intense, but press interest was mixed. The Guardian and the Independent reported it on their front pages, while the Daily Mail only gave the story a minor mention in the middle of the paper, and the Sun did not cover it.
Controversy over MMR
Multiple subsequent studies failed to find any link between the MMR vaccine, colitis, and autism. In March 1998, a panel of 37 scientific experts set up by the Medical Research Council, headed by Professor Sir John Pattison found "no evidence to indicate any link" between the MMR vaccine and colitis or autism in children.
Public concern over Wakefield's claims of a possible link between MMR and autism gained momentum in 2001 and 2002, after he published further papers suggesting that the immunisation programme was not safe. These were a review paper with no new evidence, published in a minor journal, and two papers on laboratory work that he said showed that measles virus had been found in tissue samples taken from children who had autism and bowel problems. There was wide media coverage including distressing anecdotal evidence from parents, and political coverage attacking the health service and government peaked with unmet demands that Prime minister Tony Blair reveal whether his infant son, Leo, had been given the vaccine. It was the biggest science story of 2002, with 1257 articles mostly written by non-expert commentators. In the period January to September 2002, 32% of the stories written about MMR mentioned Leo Blair, as opposed to only 25% that mentioned Wakefield. Less than a third of the stories mentioned the overwhelming evidence that MMR is safe. The paper, press conference and video sparked a major health scare in the United Kingdom. As a result of the scare, full confidence in MMR fell from 59% to 41% after publication of the Wakefield research. In 2001, 26% of family doctors felt the government had failed to prove there was no link between MMR and autism and bowel disease. In his book Bad Science, Ben Goldacre describes the MMR vaccine scare as one of the "three all-time classic bogus science stories" by the British newspapers (the other two are the Arpad Pusztai affair about genetically modified crops, and Chris Malyszewicz and the MRSA hoax).
A 2003 survey of 366 family doctors in the UK reported that 77% of them would advise giving the MMR vaccine to a child with a close family history of autism, and that 3% of them thought that autism could sometimes be caused by the MMR vaccine. A similar survey in 2004 found that these percentages changed to 82% and at most 2%, respectively, and that confidence in MMR had been increasing over the previous two years.
A factor in the controversy is that only the combined vaccine is available through the UK National Health Service. As of 2010 there are no single vaccines for measles, mumps and rubella licensed for use in the UK. Prime Minister Tony Blair gave support to the programme, arguing that the vaccine was safe enough for his own son, Leo, but refusing on privacy grounds to state whether Leo had received the vaccine; in contrast, the subsequent Prime Minister, Gordon Brown, explicitly confirmed that his son has been immunised. Cherie Blair confirmed that Leo had been given the MMR vaccination when promoting her autobiography.
The government stressed that administration of the combined vaccine instead of separate vaccines decreases the risk of children catching the disease while waiting for full immunisation coverage. The combined vaccine's two injections results in less pain and distress to the child than the six injections required by separate vaccines, and the extra clinic visits required by separate vaccinations increases the likelihood of some being delayed or missed altogether; vaccination uptake significantly increased in the UK when MMR was introduced in 1988. Health professionals have heavily criticized media coverage of the controversy for triggering a decline in vaccination rates. No scientific basis has been found for preferring separate vaccines, or for using any particular interval between them.
In 2001, Mark Berelowitz, one of the co-authors of the paper, said "I am certainly not aware of any convincing evidence for the hypothesis of a link between MMR and autism". The Canadian Paediatric Society, the Centers for Disease Control and Prevention, the Institute of Medicine of the National Academy of Sciences, and the UK National Health Service have all concluded that there is no link between the MMR vaccine and autism, and a 2011 journal article described the vaccine–autism connection as "the most damaging medical hoax of the last 100 years".
Newspaper investigation
Conflict of interest
Public understanding of the claims sharply changed in February 2004 with revelations by The Sunday Times of an undisclosed conflict of interest on Wakefield's part in that, two years before the paper's publication, he had been approached by Richard Barr, a lawyer of Justice, Awareness and Basic Support, who was looking for an expert witness to start a planned class action regarding alleged "vaccine damage". Barr hired Wakefield at £150 per hour, plus expenses, and only then did they recruit the twelve children, actively seeking the parents of cases that might imply a connection between MMR and autism. Barr and Wakefield convinced the UK Legal Aid Board, a UK government organization to give financial support to people who could not afford access to justice, to assign £55,000 to fund the initial stage of the research. According to journalist Brian Deer, the project was intended to create evidence for the court case, but this only became publicly known six years after the Lancet report, with the newspaper's first disclosures.
Based on Deer's evidence, the Lancet editor-in-chief Richard Horton said Wakefield's paper should have never been published because its findings were "entirely flawed". Although Wakefield maintained that the legal aid funding was for a separate, unpublished study (a position later rejected by a panel of the UK General Medical Council), the editors of The Lancet judged that the funding source should have been disclosed to them. Horton wrote, "It seems obvious now that had we appreciated the full context in which the work reported in the 1998 Lancet paper by Wakefield and colleagues was done, publication would not have taken place in the way that it did." Several of Wakefield's co-researchers also strongly criticized the lack of disclosure.
No ethical approval
Among Deer's earliest reported allegations was that, contrary to a statement in the paper, Wakefield's research on the 12 children was conducted without any institutional review board authorization—a claim quickly denied in February 2004 by both the paper's authors and the Lancet. The paper itself said, "Ethical approval and consent. Investigations were approved by the Ethical Practices Committee of the Royal Free Hospital NHS Trust, and parents gave informed consent." The dispute over this would remain unresolved, however, until settled in the English High Court in March 2012, where a senior judge vindicated Deer. Quoting the text, Justice Mitting ruled, "This statement was untrue and should not have been included in the paper."
Retraction of an interpretation
The Lancet and many other medical journals require papers to include the authors' conclusions about their research, known as the "interpretation". The summary of the 1998 Lancet paper ended as follows:
In March 2004, immediately following the news of the conflict of interest allegations, ten of Wakefield's 12 coauthors retracted this interpretation, while insisting that the possibility of a distinctive gastrointestinal condition in children with autism merited further investigation. However, a separate study of children with gastrointestinal disturbances found no difference between those with autism spectrum disorders and those without, with respect to the presence of measles virus RNA in the bowel; it also found that gastrointestinal symptoms and the onset of autism were unrelated in time to the administration of MMR vaccine.
Later in 2004, the newspaper's investigation also found that Wakefield had a further conflict of interest in the form of a patent for a single measles vaccine, had manipulated evidence, and had broken other ethical codes. The Lancet paper was partially retracted in 2004 and fully retracted in 2010, when the Lancet's editor-in-chief Richard Horton described it as "utterly false" and said that the journal had been deceived. Wakefield was found guilty by the General Medical Council of serious professional misconduct in May 2010 and was struck off the Medical Register, meaning he could no longer practise as a doctor in the UK. In 2011, Deer provided further information on Wakefield's improper research practices to the British Medical Journal, which in a signed editorial described the original paper as fraudulent.
Deer continued his reporting in a Channel 4 Dispatches television documentary, MMR: What They Didn't Tell You, broadcast on 18 November 2004. This documentary reported that Wakefield had applied for patents on a single measles vaccine that claimed to be a potential rival of MMR, and that he knew of test results from his own laboratory at the Royal Free Hospital that contradicted his own claims. Wakefield's patent application was also noted in Paul Offit's 2008 book, Autism's False Prophets.
In January 2005, Wakefield sued Channel 4, 20/20 Productions, and the investigative reporter Brian Deer, who presented the Dispatches programme. However, after two years of litigation, and the revelation of more than £400,000 in undisclosed payments by lawyers to Wakefield, he discontinued his action and paid all the defendants' costs.
In 2006, Deer reported in The Sunday Times that Wakefield had been paid £435,643, plus expenses, by British trial lawyers attempting to prove that the vaccine was dangerous, with the undisclosed payments beginning two years before the Lancet paper's publication. This funding came from the UK legal aid fund, a fund intended to provide legal services to the poor.
Support for Wakefield
Despite The Sunday Times disclosures, Wakefield continued to find support. Melanie Phillips, an influential columnist with the Daily Mail, called the reporting of Wakefield's contract with the solicitor Richard Barr "a smear whose timing should raise a few eyebrows."
According to Deer writing in the BMJ, the General Medical Council hearing was also criticized by Richard Horton, the Lancet editor: "My own view is that the GMC is no place to continue this debate. But the process has started and it will be impossible to stop."
Manipulation of data
The Sunday Times continued the investigation, and on 8 February 2009, Brian Deer reported that Wakefield had "fixed" results and "manipulated" patient data in the Lancet, creating the appearance of a link with autism. Wakefield falsely denied these allegations, and even filed a complaint with the Press Complaints Commission (PCC) over this article on 13 March 2009. The complaint was expanded by a 20 March 2009 addendum by Wakefield's publicist. In July 2009, the PCC stated that it was staying any investigation regarding the Sunday Times article, pending the conclusion of the GMC investigation. In the event, Wakefield did not pursue his complaint; Deer published it with a statement saying he and The Sunday Times rejected the complaint as "false and disingenuous in all material respects", and that the action had been suspended by the PCC in February 2010.
UK General Medical Council inquiry
Responding to the first Sunday Times reports, the General Medical Council (GMC), which is responsible for licensing doctors and supervising medical ethics in the UK, launched an investigation into the affair. The GMC brought the case itself, not citing any specific complaints, claiming that an investigation was in the public interest. The then-secretary of state for health, John Reid, called for a GMC investigation, which Wakefield himself welcomed. During a debate in the House of Commons, on 15 March 2004, Dr. Evan Harris, a Liberal Democrat MP, called for a judicial inquiry into the ethical aspects of the case, even suggesting it might be conducted by the CPS. In June 2006 the GMC confirmed that they would hold a disciplinary hearing of Wakefield.
The GMC's Fitness to Practise Panel first met on 16 July 2007 to consider the cases of Wakefield, Professor John Angus Walker-Smith, and Professor Simon Harry Murch. All faced charges of serious professional misconduct. The GMC examined, among other ethical points, whether Wakefield and his colleagues obtained the required approvals for the tests they performed on the children; the data-manipulation charges reported in the Sunday Times, which surfaced after the case was prepared, were not at question in the hearings. The GMC stressed that it would not be assessing the validity of competing scientific theories on MMR and autism. The GMC alleged that the trio acted unethically and dishonestly in preparing the research into the MMR vaccine. They denied the allegations. The case proceeded in front of a GMC Fitness to Practise panel of three medical and two lay members.
On 28 January 2010, the GMC panel delivered its decision on the facts of the case, finding four counts of dishonesty and 12 involving the abuse of developmentally disabled children. Wakefield was found to have acted "dishonestly and irresponsibly" and to have acted with "callous disregard" for the children involved in his study, conducting unnecessary and invasive tests. The panel found that the trial was improperly conducted without the approval of an independent ethics committee, and that Wakefield had multiple undeclared conflicts of interest.
On 24 May 2010, the GMC panel ordered that he be struck off the medical register. John Walker-Smith was also found guilty of serious professional misconduct and struck off the medical register, but that decision was reversed on appeal to the High Court in 2012, because the GMC panel had failed to decide whether Walker-Smith actually thought he was doing research in the guise of clinical investigation and treatment. The High Court criticised "a number of" wrong conclusions by the disciplinary panel and its "inadequate and superficial reasoning". Simon Murch was found not guilty.
In response to the GMC investigation and findings, the editors of the Lancet announced on 2 February 2010 that they "fully retract this paper from the published record". The Lancet's editor-in-chief Richard Horton described it as "utterly false" and said that the journal had been deceived.
The Hansard text for 16 March 2010 reported Lord McColl asking the Government whether it had plans to recover legal aid money paid to the experts in connection with the measles, mumps and rubella/measles and rubella vaccine litigation. Lord Bach, Ministry of Justice dismissed this possibility.
Full retraction and fraud revelations
In an April 2010 report in The BMJ, Deer expanded on the laboratory aspects of his findings recounting how normal clinical histopathology results generated by the Royal Free Hospital were later changed in the medical school to abnormal results, published in the Lancet. Deer wrote an article in The BMJ casting doubt on the "autistic enterocolitis" that Wakefield claimed to have discovered. In the same edition, Deirdre Kelly, President of the European Society of Pediatric Gastroenterology and Nutrition and the Editor of the Journal of Pediatric Gastroenterology and Nutrition expressed some concern about The BMJ publishing this article while the GMC proceedings were underway.
On 5 January 2011, The BMJ published the first of a series of articles by Brian Deer, detailing how Wakefield and his colleagues had faked some of the data behind the 1998 Lancet article. By looking at the records and interviewing the parents, Deer found that for all 12 children in the Wakefield study, diagnoses had been tweaked or dates changed to fit the article's conclusion. Continuing the BMJ series on 11 January 2011, Deer said that based upon documents he obtained under freedom of information legislation, Wakefield—in partnership with the father of one of the boys in the study—had planned to launch a venture on the back of an MMR vaccination scare that would profit from new medical tests and "litigation driven testing". The Washington Post reported that Deer said that Wakefield predicted he "could make more than $43 million a year from diagnostic kits" for the new condition, autistic enterocolitis. WebMD reported on Deer's BMJ report, saying that the $43 million predicted yearly profits would come from marketing kits for "diagnosing patients with autism" and "the initial market for the diagnostic will be litigation-driven testing of patients with AE [autistic enterocolitis, an unproven condition concocted by Wakefield] from both the UK and the USA". According to WebMD, the BMJ article also claimed that the venture would succeed in marketing products and developing a replacement vaccine if "public confidence in the MMR vaccine was damaged".
In an editorial accompanying Deer's 2011 series, The BMJ said, "it has taken the diligent scepticism of one man, standing outside medicine and science, to show that the paper was in fact an elaborate fraud".
Summarizing his findings as of January 2011 in The BMJ, Deer set out a case-by-case analysis of the children reported in the study.
In subsequent disclosures from the investigation, Deer obtained copies of unpublished gastrointestinal pathology reports on the children in the Lancet study that Wakefield had claimed showed "non-specific colitis" and "autistic enterocolitis". But expert analyses of these reports found bowel biopsies from the children to be overwhelmingly normal and with no evidence of any enterocolitis at all.
In September 2020, Johns Hopkins University Press published Deer's account of the fraud in his book The Doctor Who Fooled the World: Science, Deception, and the War on Vaccines. The book includes reporting of parents whose children were among the twelve recruited by Wakefield in the Lancet study. One described the paper as "fraudulent" while another complained of "outright fabrication".
Aftermath
Characterised as "perhaps the most damaging medical hoax of the 20th Century", The Lancet paper led to a sharp drop in vaccination rates in the UK and Ireland. Promotion of the claimed link, which continues in anti-vaccination propaganda despite being refuted, led to an increase in the incidence of measles and mumps, resulting in deaths and serious permanent injuries. Following the initial claims in 1998, multiple large epidemiological studies were undertaken. Reviews of the evidence by the Centers for Disease Control and Prevention, the American Academy of Pediatrics, the Institute of Medicine of the US National Academy of Sciences, the UK National Health Service, and the Cochrane Library all found no link between the MMR vaccine and autism. Physicians, medical journals, and editors have described Wakefield's actions as fraudulent and tied them to epidemics and deaths.
Among commentators drawing on Deer's investigation, academic Peter N. Steinmetz summarizes six fabrications and falsifications in the paper itself and in Wakefield's response in the areas of findings of non-specific colitis; behavioral symptoms; findings of regressive autism; ethics consent statement; conflict of interest statement; and methods of patient referral.
Wakefield has continued to defend his research and conclusions, saying there was no fraud, hoax or profit motive. He has subsequently become known for anti-vaccination activism. In 2016, Wakefield directed the anti-vaccination film Vaxxed: From Cover-Up to Catastrophe.
See also
Vaccine hesitancy
Folk epidemiology of autism
References
1998 hoaxes
Autism pseudoscience
MMR vaccine and autism
Health-related conspiracy theories
Vaccine hesitancy
Scientific controversies
| Lancet MMR autism fraud | Technology | 4,660 |
4,404,004 | https://en.wikipedia.org/wiki/Metadata%20discovery | In metadata, metadata discovery (also metadata harvesting) is the process of using automated tools to discover the semantics of a data element in data sets. This process usually ends with a set of mappings between the data source elements and a centralized metadata registry. Metadata discovery is also known as metadata scanning.
Data source formats for metadata discovery
Data sets may be in a variety of different forms including:
Relational databases
NoSQL databases
Spreadsheets
XML files
Web services
Software source code such as Fortran, Jovial, COBOL, Assembler, RPG, PL/1, EasyTrieve, Java, C# or C++ classes, and thousands of other software languages
Unstructured text documents such as Microsoft Word or PDF files
A taxonomy of metadata matching algorithms
There are distinct categories of automated metadata discovery:
Lexical matching
Exact match - where data element linkages are made based on the exact name of a column in a database, the name of an XML element or a label on a screen. For example, if a database column has the name "PersonBirthDate" and a data element in a metadata registry also has the name "PersonBirthDate", automated tools can infer that the column of a database has the same semantics (meaning) as the data element in the metadata registry.
Synonym match - where the discovery tool is given not just a single name but a set of synonyms.
Pattern match - in this case the tool is given a set of lexical patterns that it can match. For example, the tool may search for "*gender*" or "*sex*". (A minimal sketch of these lexical approaches follows below.)
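The lexical approaches above can be illustrated with a short, self-contained sketch. It is not taken from any particular metadata discovery product; the registry entries, synonym sets and patterns are invented for illustration.

```python
import fnmatch

# Hypothetical registry: registered data elements with illustrative synonym sets.
REGISTRY = {
    "PersonBirthDate": {"synonyms": {"dateofbirth", "dob", "birthdate"}},
    "PersonGenderCode": {"synonyms": {"gender", "sex"}},
}

def exact_match(column_name):
    """Exact match: the source column name equals a registered element name."""
    return column_name if column_name in REGISTRY else None

def synonym_match(column_name):
    """Synonym match: the column name appears in a registered element's synonym set."""
    name = column_name.lower()
    for element, meta in REGISTRY.items():
        if name in meta["synonyms"]:
            return element
    return None

def pattern_match(column_name, patterns=("*gender*", "*sex*")):
    """Pattern match: the column name matches one of a set of lexical patterns."""
    name = column_name.lower()
    return any(fnmatch.fnmatch(name, p) for p in patterns)

print(exact_match("PersonBirthDate"))   # PersonBirthDate
print(synonym_match("DOB"))             # PersonBirthDate
print(pattern_match("customer_sex"))    # True
```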
Semantic matching
Semantic matching attempts to use semantics to associate target data with registered data elements.
Semantic similarity - this approach relies on a database of conceptual nearness between words. For example, the WordNet system can rank how conceptually close two words are, so that terms such as "Person", "Individual" and "Human" can be recognized as highly similar concepts (see the sketch below).
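As an illustration of how such a conceptual-nearness database might be queried, the sketch below uses the WordNet corpus via the NLTK library; the corpus must be downloaded separately, and the exact scores are implementation details rather than part of any metadata discovery standard.

```python
# Requires: pip install nltk, then a one-time nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def conceptual_similarity(word_a, word_b):
    """Best WordNet path similarity between any senses of the two words (0.0 to 1.0)."""
    scores = [
        s1.path_similarity(s2) or 0.0
        for s1 in wn.synsets(word_a)
        for s2 in wn.synsets(word_b)
    ]
    return max(scores, default=0.0)

# "person" and "individual" should score far higher than "person" and "invoice".
print(conceptual_similarity("person", "individual"))
print(conceptual_similarity("person", "invoice"))
```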
Statistical matching
Statistical matching uses statistics computed from the source data itself to derive similarities with registered data elements.
Distinct value analysis - By analyzing all the distinct values in a column, a similarity to a registered data element may be inferred. For example, if a column only has two distinct values of 'male' and 'female', this could be mapped to 'PersonGenderCode' (see the sketch following this list).
Data distribution analysis - By analyzing the distribution of values within a single column and comparing this distribution with known data elements, a semantic linkage can be inferred.
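A minimal sketch of distinct value analysis is given below; the registered value domains are hypothetical examples, not entries from a real metadata registry.

```python
def distinct_value_match(column_values, registered_domains):
    """Distinct value analysis: map a column to a registered data element whose
    enumerated value domain covers all of the column's distinct values."""
    observed = {str(v).strip().lower() for v in column_values if v is not None}
    for element, domain in registered_domains.items():
        if observed and observed <= domain:  # observed values are a subset of the domain
            return element
    return None

# Hypothetical registered domains.
DOMAINS = {
    "PersonGenderCode": {"male", "female"},
    "BooleanIndicator": {"y", "n", "yes", "no", "true", "false"},
}

print(distinct_value_match(["Male", "female", "male"], DOMAINS))  # PersonGenderCode
```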
Vendors
The following vendors (listed in alphabetical order) provide metadata discovery and metadata mapping software and solutions:
Atlan
BigHand/Esquire Innovations
IBM
Imperva
Talend
InfoLibrarian Corporation
MindHARBOR Metadata Database application
Octopai - a Cross-Platform Metadata Discovery and Management Automation
OvalEdge
Revelytix
Silver Creek Systems
Stratio
Sypherlink: Harvester
Unicorn Systems
Research
INDUS project at the Iowa State University
Mercury - A Distributed Metadata Management and Data Discovery System developed at the Oak Ridge National Laboratory DAAC
See also
Metadata
Data mapping
Data warehouse
Semantic web
Defense Discovery Metadata Specification
References
Citations
Sources
Massive Data Analysis Systems by San Diego Supercomputer Center June 1997
IBM Whitepaper on Enterprise Metadata Discovery
White Paper on Metadata Management - by Esquire Innovations
Metadata | Metadata discovery | Technology | 707 |
16,454,909 | https://en.wikipedia.org/wiki/Nisoxetine | Nisoxetine (developmental code name LY-94939), originally synthesized in the Lilly research laboratories during the early 1970s, is a potent and selective inhibitor for the reuptake of norepinephrine (noradrenaline) into synapses. It currently has no clinical applications in humans, although it was originally researched as an antidepressant. Nisoxetine is now widely used in scientific research as a standard selective norepinephrine reuptake inhibitor. It has been used to research obesity and energy balance, and exerts some local analgesia effects.
Researchers have attempted to use a carbon-labeled form of nisoxetine for positron emission tomography (PET) imaging of the norepinephrine transporter (NET), with little success. However, it seems that tritium labeled nisoxetine (3H-nisoxetine, 3H-NIS) is a useful radioligand for labeling norepinephrine uptake sites in vitro, which nisoxetine and other antagonists for NET are able to inhibit.
History
In treating depression, it was theorized that substances that could enhance norepinephrine transmission, such as tricyclic antidepressants (TCA), could diminish the symptoms of clinical depression. The origins of nisoxetine can be found within the discovery of fluoxetine (Prozac, by Eli Lilly). In the 1970s, Bryan B. Molloy (a medicinal chemist) and Robert Rathbun (a pharmacologist) began a collaboration to search for potential antidepressant agents that would still retain the therapeutic activity of TCAs without undesirable cardiotoxicity and anticholinergic properties. The antihistamine drug diphenhydramine was found to inhibit monoamine uptake in addition to antagonizing histamine receptors, and this inhibition of monoamine uptake became a potential application for treating depression. As a result, Molloy, along with colleagues Schmiegal and Hauser, synthesized members of the phenoxyphenylpropylamine (PPA) group as analogues of diphenhydramine.
Richard Kattau in the Rathbun laboratory tested the newly created drugs within the series of PPAs for their ability to reverse apomorphine-induced hypothermia in mice (PIHM), a test in which the TCAs were active antagonists. Kattau found that one member of the series, LY94939 (nisoxetine), was as potent and effective as the TCAs in the reversal of PIHM. Nisoxetine was found to be as potent as desipramine in inhibiting norepinephrine uptake in brain synaptosomes while not acting as a potent inhibitor of serotonin (5-HT) or dopamine uptake.
Preclinical studies in humans were also performed in 1976 to evaluate the safety and possible mechanism of nisoxetine. At doses capable of blocking the uptake of norepinephrine and tyramine at nerve terminals, nisoxetine did not produce any substantial side effects. Abnormal electrocardiogram effects were also not observed, indicating it to be a relatively safe compound.
Later, however, researchers considered ways in which subtle chemical differences in the PPA series could selectively inhibit 5-HT uptake, which eventually led to the synthesis of nisoxetine's 4-trifluoremethyl analogue, fluoxetine. Nisoxetine was never marketed as a drug due to a greater interest in pursuing the development of fluoxetine, a selective serotonin reuptake inhibitor (SSRI).
Research
Obesity
A considerable body of evidence suggests that altering catecholaminergic signaling (cell communication via norepinephrine and dopamine) affects food intake and body weight via classic hypothalamic systems that are involved in the regulation of energy balance. Antidepressants, such as the atypical antidepressant bupropion, can also cause weight loss due to their ability to increase extracellular dopamine and norepinephrine by inhibiting their uptake. Other research has focused on the interaction of serotonin and norepinephrine, leading to serotonin–norepinephrine reuptake inhibitors (SNRIs) as anti-obesity drugs.
The primary forebrain sensor of peripheral cues that relays information about the availability of energy and storage is the arcuate nucleus of the hypothalamus (ARH), and it contains two types of cells that have opposing effects on energy balance. These two types of cells are neuropeptide Y (NPY)-expressing cells, which cause hyperphagia and energy conservation, and cells that express pro-opiomelanocortin (POMC), which are related to hypophagia and increased energy expenditure. NPY and norepinephrine are both localized in select neurons in the brain and periphery. A norepinephrine reuptake inhibitor, such as nisoxetine, could potentially cause anorexia by decreasing activity of cells that express NPY and norepinephrine.
In lean and obese mice, selective and combined norepinephrine and dopamine reuptake inhibition reduces food intake and body weight. Yet selective reuptake inhibitors of norepinephrine and dopamine (nisoxetine and a substance codenamed GBR12783, respectively) independently have no effect on food intake in mice. However, when given in combination, there is profound inhibition of food intake. This demonstrates a synergistic interaction between dopamine and norepinephrine in controlling ingestive behavior, similar to the action of SNRIs. The fact that nisoxetine alone does not affect food intake suggests that norepinephrine alone is insufficient to affect feeding or that the blocked reuptake of norepinephrine by nisoxetine is acting in the wrong place. Unlike nisoxetine, its sulfur analog thionisoxetine reduces food consumption in rodents and is a more promising treatment for obesity and eating disorders.
Analgesia effects
An essential activity of local anesthetics is the blockade of sodium channels. In this way, local anesthetics are able to produce infiltrative cutaneous analgesia, peripheral neural blockades, as well as spinal/epidural anesthesia. Due to nisoxetine's sodium channel blocking effect, it is possible that it also has a local anesthetic effect. Nisoxetine is able to suppress the nicotine-evoked increase of hippocampal norepinephrine in a dose-dependent manner through effects on the functioning of the nicotinic acetylcholine receptors. It is also able to inhibit tetrodotoxin-sensitive inward sodium currents in rat superior cervical ganglia.
Nisoxetine elicits local (cutaneous) but not systemic analgesia. Compared to lidocaine, a common anesthetic, nisoxetine is about four times more potent and exhibits a longer duration of action in producing cutaneous anesthesia. NMDA receptors are not involved in this local anesthetic effect. However, it is unclear whether nisoxetine may cause toxicity to the neuronal or subcutaneous tissues, which still needs to be investigated in the future.
3H-nisoxetine
Due to shortcomings of the previously available radioligands for the norepinephrine uptake site, researchers needed to find a better ligand for measuring norepinephrine reuptake sites. These shortcomings also meant that the norepinephrine uptake sites in the brain were less studied than the 5-HT uptake sites. Previous radioligands for the norepinephrine uptake sites, 3H-desipramine (3H-DMI) and 3H-mazindol (3H-MA), did not have specific and selective binding properties for norepinephrine sites.
3H-nisoxetine (3H-NIS), on the other hand, is a potent and selective inhibitor for the uptake of norepinephrine and is now used as a selective marker of the norepinephrine transporter. Most studies using 3H-NIS are conducted in the rat model, and not many have been performed in humans. 3H-NIS can be used to map anatomical sites associated with norepinephrine uptake through the technique of quantitative autoradiography (QAR), where the pattern of 3H-NIS binding is consistent with the pattern of norepinephrine activation. Lesion studies also confirm 3H-NIS's relation to presynaptic norepinephrine terminals.
3H-NIS binds with high affinity (Kd = 0.7 nM) and selectivity to a homogeneous population of sites that are associated with norepinephrine uptake in the rat brain. Specific 3H-NIS binding increases as sodium concentration is raised, and binding of 3H-NIS is barely detectable in the absence of sodium. Binding of 3H-NIS is sodium-dependent because sodium ions are necessary for the neuronal uptake of norepinephrine. This binding is also heat-sensitive, where heating rat cerebral cortical membranes reduces the amount of specific binding. Nisoxetine (Ki = 0.7 ± 0.02 nM), as well as other compounds that have a high affinity for norepinephrine uptake sites (DMI, MAZ, maprotiline), act as potent inhibitors of 3H-NIS binding to rat cortical membranes.
In humans, 3H-NIS is used to measure uptake sites in the locus coeruleus (LC). The LC, a source of norepinephrine axons, has been of focus in research due to reports of cell loss in the area that occurs with aging in humans. Decreased binding of 3H-NIS reflects the loss of LC cells.
NET imaging using PET
Researchers are attempting to image the norepinephrine transporter (NET) system using positron emission tomography (PET). Possible ligands to be used for this methodology must possess high affinity and selectivity, high brain penetration, appropriate lipophilicity, reasonable stability in plasma, as well as high plasma free fraction. 11C-labeled nisoxetine, synthesized by Haka and Kilbourn, was one possible candidate that was investigated for being used as a potential PET tracer. However, in vivo, 11C-labeled nisoxetine exhibits nonspecific binding, therefore limiting its effectiveness as a possible ligand for PET.
Pharmacological properties
Nisoxetine is a potent and selective inhibitor of norepinephrine uptake, where it is about 1000-fold more potent in blocking norepinephrine uptake than that of serotonin. It is 400-fold more potent in blocking the uptake of norepinephrine than that of dopamine. The R-isomer of nisoxetine has 20 times greater affinity than its S-isomer for NET. Nisoxetine has little or no affinity for neurotransmitter receptors. The NET Ki for nisoxetine is generally agreed to be 0.8 nM.
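To put a dissociation constant of this magnitude in context, the standard single-site (Langmuir) binding relationship can be used; this is textbook receptor theory rather than a calculation reported in the studies cited here.

```latex
% Fractional occupancy of a single class of binding sites by a ligand L:
\[
  \theta \;=\; \frac{[L]}{[L] + K_d}
\]
% With K_d \approx 0.8 nM for nisoxetine at NET:
%   [L] = 0.8 nM  gives  \theta = 0.5   (half of the transporters occupied)
%   [L] = 8 nM    gives  \theta = 8/(8 + 0.8) \approx 0.91
```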
In a preclinical study where nisoxetine was administered to volunteers, the average plasma concentration after a single dose was found to be 0.028 microgram/ml, and after the fifteenth dose was 0.049 microgram/ml. The binding of nisoxetine is saturable in human placental NET, with specific binding values of 13.8 ± 0.4 nM for Kd and 5.1 ± 0.1 pmol/mg of protein for Bmax. Sodium and chloride enhance nisoxetine binding by increasing the affinity of the binding site for its ligand, with Kd values increasing as the concentration of chloride decreases. Bmax is not affected.
Activity of 3H-NIS on cerebral cortical homogenates in mice shows a Kd of 0.80 ± 0.11 nM and a Bmax of ± 12 fmol/mg protein. Density of binding is generally associated with brain regions that exhibit high norepinephrine levels, with the highest specific 3H-NIS binding in the brainstem (LC) and the thalamus. Specific 3H-NIS binding is dependent on sodium cations, where specific and total binding are raised as the concentration of sodium is increased (Tejani-Butt et al., 1990). This binding occurs with high affinity towards a single class of sites that have pharmacological characteristics similar to the norepinephrine uptake site.
Nisoxetine and other inhibitors of norepinephrine uptake sites are able to inhibit the binding of 3H-NIS. When rats are intravenously injected with nisoxetine and the binding of 3H-NIS is measured, the Ki of nisoxetine is reported to be 0.8 ± 0.1 nM for concentrations of up to 1 μM.
Adverse effects
Norepinephrine reuptake inhibitors, along with dopamine and/or serotonin reuptake inhibitors, are often prescribed in the treatment of mood disorders and are generally well tolerated.
Preclinical studies in humans using nisoxetine were conducted in the 1970s, and side effects of the drug were examined. Doses ranging from 1 mg to 50 mg do not result in any changes in baseline values in haematologic tests, routine blood chemistries, or coagulation parameters. Larger doses produce some side effects, but no electrocardiographic changes are observed at any dose. Injection of doses of tyramine in humans while receiving nisoxetine results in a decreased responsiveness to tyramine with increased duration of administered nisoxetine. Another effect of nisoxetine administration is that subjects require much smaller doses of norepinephrine to produce the same blood pressure responses as those who receive a placebo. In other words, subjects exhibit an increased sensitivity to norepinephrine after nisoxetine administration. These tests concluded that the drug, at the tested doses, appears to be safe for use in humans.
Chemical properties
Nisoxetine is a racemic compound with two isomers.
Tricyclic (three-ring) structures can be found in many different drugs, and for medicinal chemists they offer a way to restrict the conformational mobility of two phenyl rings attached to a common carbon or hetero (non-carbon) atom. Small molecular changes, such as substituents or ring flexibility, can cause changes in the pharmacological and physicochemical properties of a drug. The mechanism of action of the phenoxyphenylpropylamines can be explained by the critical role of the type and position of the ring substitution. The unsubstituted molecule is a weak SSRI. A compound highly potent and selective for blocking norepinephrine reuptake (a selective norepinephrine reuptake inhibitor) results from 2-substitution of the phenoxy ring.
See also
Reboxetine
Atomoxetine
Fluoxetine
References
Amines
Antidepressants
Catechol ethers
Norepinephrine reuptake inhibitors
Wakefulness-promoting agents
2-Methoxyphenyl compounds | Nisoxetine | Chemistry | 3,326 |
12,062,017 | https://en.wikipedia.org/wiki/Osteoimmunology | Osteoimmunology (όστέον, osteon from Greek, "bone"; from Latin, "immunity"; and λόγος, logos, from Greek "study") is a field that emerged about 40 years ago that studies the interface between the skeletal system and the immune system, comprising the "osteo-immune system". Osteoimmunology also studies the shared components and mechanisms between the two systems in vertebrates, including ligands, receptors, signaling molecules and transcription factors. Over the past decade, osteoimmunology has been investigated clinically for the treatment of bone metastases, rheumatoid arthritis (RA), osteoporosis, osteopetrosis, and periodontitis. Studies in osteoimmunology reveal relationships between molecular communication among blood cells and structural pathologies in the body.
System similarities
The RANKL-RANK-OPG axis (OPG stands for osteoprotegerin) is an example of an important signaling system functioning both in bone and immune cell communication. RANKL is expressed on osteoblasts and activated T cells, whereas RANK is expressed on osteoclasts, and dendritic cells (DCs), both of which can be derived from myeloid progenitor cells. Surface RANKL on osteoblasts as well as secreted RANKL provide necessary signals for osteoclast precursors to differentiate into osteoclasts. RANKL expression on activated T cells leads to DC activation through binding to RANK expressed on DCs. OPG, produced by DCs, is a soluble decoy receptor for RANKL that competitively inhibits RANKL binding to RANK.
Crosstalk
The bone marrow cavity is important for the proper development of the immune system, and houses important stem cells for maintenance of the immune system. Within this space, as well as outside of it, cytokines produced by immune cells also have important effects on regulating bone homeostasis. Some important cytokines that are produced by the immune system, including RANKL, M-CSF, TNFa, ILs, and IFNs, affect the differentiation and activity of osteoclasts and bone resorption. Such inflammatory osteoclastogenesis and osteoclast activation can be seen in ex vivo primary cultures of cells from the inflamed synovial fluid of patients with disease flare of the autoimmune disease rheumatoid arthritis.
Clinical osteoimmunology
Clinical osteoimmunology is a field that studies the treatment or prevention of bone-related diseases caused by disorders of the immune system. Aberrant and/or prolonged activation of the immune system leads to derangement of bone modeling and remodeling. Common diseases caused by disorders of the osteoimmune system are osteoporosis and the bone destruction that accompanies RA, which is characterized by heavy infiltration of CD4+ T cells into rheumatoid joints. Two mechanisms are involved. The first is an effect on osteoclastogenesis arising from rheumatoid synovial cells in the joints: the synovium contains osteoclast precursors and osteoclast-supporting cells, and synovial macrophages differentiate into osteoclasts with the help of RANKL released from the osteoclast-supporting cells.
The second is an indirect effect on osteoclast differentiation and activity through the secretion of inflammatory cytokines such as IL-1, IL-6 and TNFa in the rheumatoid synovium, which increase RANKL signaling and ultimately bone destruction. A clinical approach to preventing bone-related disease caused by RA is OPG and RANKL-targeted treatment in arthritis. There is some evidence that infections (e.g. respiratory virus infection) can reduce the numbers of osteoblasts in bone, the key cells involved in bone formation.
See also
Bone metabolism
Osteoimmunology and Osseointegration
HSC
Osteoarthritis
References
Physiology
Branches of immunology | Osteoimmunology | Biology | 855 |
33,288,603 | https://en.wikipedia.org/wiki/Auxiliary%20Activity%20family%209 | Auxiliary Activity family 9 (formerly GH61) is a family of lytic polysaccharide monooxygenases. The family was previously incorrectly classified as glycoside hydrolase family 61, however it was re-classified in March 2013.
AA9 is known to be a copper dependent, oxidative enzyme able to cleave crystalline cellulose. Activity greatly depends on the presence of a divalent copper ion and a reductant such as gallate or ascorbate. The oxidative action of AA9 can work on either the reducing or the non-reducing end of glucose moieties. Although AA9 enzymes can be found in a large spectrum of biomass degrading fungi, research into this family is relatively new. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
References
EC 3.2.1
Glycoside hydrolase families
Protein families | Auxiliary Activity family 9 | Biology | 207 |
71,218,720 | https://en.wikipedia.org/wiki/Timothy%20B.%20Rhyne | Timothy B. Rhyne is a retired Michelin Research Fellow and co-inventor of the Tweel.
Rhyne received his BS and PhD degrees from North Carolina State University. He started at Michelin in 1978, originally in machine design, and moving to tire research and development in 1986.
He was an adjunct professor at Clemson University's ICAR (International Center for Automotive Research) where he taught tire mechanics.
In 2021, he and co-inventor Steven M. Cron were jointly awarded the Charles Goodyear Medal, the highest honor conferred by the American Chemical Society, Rubber Division. It was the first time that the award was given jointly.
References
Polymer scientists and engineers
Tire industry people
Year of birth missing (living people)
Living people
Michelin people | Timothy B. Rhyne | Chemistry,Materials_science | 156 |
27,863,637 | https://en.wikipedia.org/wiki/Shelf%20angle | In masonry veneer building construction, a shelf angle or masonry support is a steel angle which supports the weight of brick or stone veneer and transfers that weight onto the main structure of the building so that a gap or space can be created beneath to allow building movements to occur.
Background
Traditional masonry buildings had thick load-bearing walls that supported the weight of the building. Openings in these load-bearing walls, such as doors and windows, were typically small and spanned by steel lintels or masonry arches.
Modern buildings
The invention of skeleton frame buildings made it possible to reduce the thickness of the walls and have wide openings such as ribbon windows extending across most or all of the building facade. In these buildings, brick, stone, or other masonry cladding is often just a single wythe of material called a veneer, since it is non-loadbearing. The only way to support the weight of this veneer across a wide opening is by providing a shelf angle on which the masonry bears. The shelf angle, in turn, is attached to major elements of the building structure such as floor beams or structural columns. Shelf angles in effect create a horizontal expansion joint, which allows for growth of the brick below the shelf angle and for movement or shrinkage of the frame without putting stress on the brick veneer. In the United States, common sizes for steel shelf angles include L 3" x 3" x 1/4" and L 4" x 4" x 1/4". In the UK and Europe, shelf angles / masonry supports are predominantly manufactured in stainless steel to prevent corrosion and failure. These are bespoke to the building's frame and engineered to take the required loads.
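As a rough illustration of the gravity load such an angle carries, the sketch below multiplies an assumed veneer unit weight by the height of veneer supported between shelf angles; the 40 psf figure is a commonly used approximation for 4-inch brick veneer and, like the storey height, is an assumption rather than a value from this article.

```python
def shelf_angle_line_load(veneer_weight_psf=40.0, supported_height_ft=12.0):
    """Estimate the vertical line load (pounds per linear foot) on a shelf angle:
    veneer unit weight multiplied by the height of veneer the angle supports."""
    return veneer_weight_psf * supported_height_ft

# One storey (about 12 ft) of brick veneer gives roughly 480 lb per linear foot,
# which the angle transfers to the floor slab edge or spandrel beam behind it.
print(shelf_angle_line_load())
```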
References
See also
Masonry
Building materials
Building engineering | Shelf angle | Physics,Engineering | 345 |
22,731,444 | https://en.wikipedia.org/wiki/Skyline%20Solar | Skyline Solar was a Concentrated Photovoltaic (CPV) company based in Mountain View, California. The company developed medium-concentration photovoltaic systems to produce electricity for commercial, industrial and utility scale solar markets. The company was founded in 2007 by Bob MacDonald, Bill Keating and Eric Johnson. The operation of the company appears to have ceased in late 2012 and the website is deactivated.
Engineering challenges posed by concentration
The output of the PV system can be increased with improvements in cell efficiency, the use of trackers, concentration, and other more subtle engineering improvements. Tracking provides up to thirty percent more energy per peak watt compared to fixed tilt (non-tracking) systems. Controlling the temperature of the cells in a CPV system is critical to high energy generation because higher cell temperatures reduce the output of the panel. Skyline Solar uses a combination of tracking, cooling fins, and reflectors to focus light on a single strip of silicon cells and maximize energy production.
Technical details
Skyline Solar's X14 system combines crystalline silicon arrays with reflectors, single axis trackers, and cooling fins to create a system in which sunlight is concentrated 14 times, hence the name "X14." Integrated trackers adjust the position of the reflectors so that light remains concentrated on the solar cells while the sun travels across the sky. Long rows of arrays are oriented north–south and the tracker rotates east to west to optimize the light capture.
Skyline’s design
The Skyline Solar X14 System consists of three principal components: panels, reflectors and an integrated single-axis tracker. A single Skyline X14 Array is rated at 3780 DC Watts STC using the 1000 Watts per square meter DNI standard, or 3520 DC Watts using the 850 W/m2 standard used in Italy. A typical installation would include as few as 28 or as many as 20,000 arrays installed in many long rows.
Panels—Silicon cells represent the great majority of the cost of any large conventional PV system. Skyline's design replaces most of the silicon with mirrors. Reflectors concentrate sunlight by a factor of 14, and this allows Skyline to use 1/14 as much silicon as flat panel tracking systems and 1/20 as much as non-tracking flat panel. To keep the panels operating efficiently, Skyline bonds large aluminium cooling fins to the back of each panel. If unfolded, each fin would cover an area more than 40 times larger than the face of the solar panel itself. Natural convection, aided by the wind, keeps the panels operating at temperatures comparable to conventional PV modules.
Reflectors—Skyline's X14 Reflectors are near-parabolic in cross-section and their patented shape allows them to focus light uniformly on the solar panels. Uniform flux enhances system efficiency. In addition, the Skyline X14 Reflector design enables a tight optical coupling between adjacent arrays. This maximizes energy production regardless of the angle at which sunlight strikes the mirrors.
Integrated Single Axis Trackers—The higher the concentrating power of a CPV system, the more precisely it must be aimed. For example, looking at the Moon through powerful binoculars usually results in an unsteady image. As it is a medium concentration system, the Skyline X14 System has a wide acceptance angle. This is the angle (as seen from the PV system's lens or mirror) between the Sun's actual location and the spot in the sky that the concentrator is actually pointing at. Skyline's acceptance angle is 1.3° of azimuth and ± 60° of elevation.
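The interaction of concentration ratio, optical losses and cell temperature described above can be illustrated with a deliberately simplified model. All parameter values below are generic assumptions for illustration, not Skyline specifications.

```python
def cpv_dc_output(dni_w_m2, cell_area_m2, concentration, optical_eff,
                  cell_eff_stc, temp_coeff_per_c, cell_temp_c):
    """Simplified DC output of a concentrating PV receiver: direct irradiance
    gathered by the reflectors, reduced by optical losses, converted by the
    cells, and derated linearly for operation above 25 C."""
    collected = dni_w_m2 * cell_area_m2 * concentration * optical_eff
    efficiency = cell_eff_stc * (1.0 + temp_coeff_per_c * (cell_temp_c - 25.0))
    return collected * efficiency

# Illustrative numbers only: 850 W/m2 DNI, 1 m2 of cells, 14x concentration,
# 85% optical efficiency, 18% cells with a -0.45 %/C coefficient, cells at 55 C.
print(round(cpv_dc_output(850, 1.0, 14, 0.85, 0.18, -0.0045, 55.0)))  # ~1575 W
```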
Target PV markets
CPV technology has been adopted primarily by large commercial and utility-scale customers in highly sunny locations, such as the US southwest, western India, Italy and Chile. As is true for most CPV technologies, Skyline's products are not suited for residential or commercial rooftop installations.
Skyline Solar, Inc. was a 2008 winner of a Department of Energy Solar Energy Technology Program grant. The company installed its first system at the San Jose-based Santa Clara Valley Transportation Authority (VTA) on May 15, 2009.
References
External links
Photovoltaics manufacturers
Solar energy companies of the United States
Companies based in Mountain View, California
Manufacturing companies based in California
Energy companies established in 2007
Manufacturing companies established in 2007
Renewable resource companies established in 2007
American companies established in 2007
2007 establishments in California | Skyline Solar | Engineering | 881 |
147,536 | https://en.wikipedia.org/wiki/Calcium%20oxide | Calcium oxide (formula: CaO), commonly known as quicklime or burnt lime, is a widely used chemical compound. It is a white, caustic, alkaline, crystalline solid at room temperature. The broadly used term lime connotes calcium-containing inorganic compounds, in which carbonates, oxides, and hydroxides of calcium, silicon, magnesium, aluminium, and iron predominate. By contrast, quicklime specifically applies to the single compound calcium oxide. Calcium oxide that survives processing without reacting in building products, such as cement, is called free lime.
Quicklime is relatively inexpensive. Both it and the chemical derivative calcium hydroxide (of which quicklime is the base anhydride) are important commodity chemicals.
Preparation
Calcium oxide is usually made by the thermal decomposition of materials, such as limestone or seashells, that contain calcium carbonate (CaCO3; mineral calcite) in a lime kiln. This is accomplished by heating the material to a high temperature in a process called calcination or lime-burning, which liberates a molecule of carbon dioxide (CO2), leaving quicklime behind. This is also one of the few chemical reactions known in prehistoric times.
CaCO3(s) → CaO(s) + CO2(g)
The quicklime is not stable and, when cooled, will spontaneously react with CO2 from the air until, after enough time, it is completely converted back to calcium carbonate, unless slaked with water to set as lime plaster or lime mortar.
Annual worldwide production of quicklime is around 283 million tonnes. China is by far the world's largest producer, with a total of around 170 million tonnes per year. The United States is the next largest, with around 20 million tonnes per year.
Approximately 1.8t of limestone is required per 1.0t of quicklime. Quicklime has a high affinity for water and is a more efficient desiccant than silica gel. The reaction of quicklime with water is associated with an increase in volume by a factor of at least 2.5.
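The 1.8:1 mass ratio follows directly from the molar masses in the calcination reaction above; the short check below is a worked illustration, not a figure from a specific plant.

```latex
% Mass of pure calcium carbonate needed per unit mass of quicklime:
\[
  \frac{M(\mathrm{CaCO_3})}{M(\mathrm{CaO})}
  = \frac{100.09\ \mathrm{g\,mol^{-1}}}{56.08\ \mathrm{g\,mol^{-1}}}
  \approx 1.78
\]
% i.e. roughly 1.8 t of CaCO3 per 1.0 t of CaO, before allowing for impurities
% and process losses in a real kiln.
```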
Studies of hydroxyapatite show that its free CaO content rises with increased calcination temperatures and longer calcination times, and they identify particular temperature cutoffs and durations that affect the production of CaO, offering information on how calcination parameters influence the composition of the material.
Uses
The major use of quicklime is in the basic oxygen steelmaking (BOS) process; the amount consumed per ton of steel varies. The quicklime neutralizes the acidic oxides, SiO2, Al2O3, and Fe2O3, to produce a basic molten slag.
Ground quicklime is used in the production of aerated concrete, such as lightweight blocks.
Quicklime and hydrated lime can considerably increase the load carrying capacity of clay-containing soils. They do this by reacting with finely divided silica and alumina to produce calcium silicates and aluminates, which possess cementing properties.
Small quantities of quicklime are used in other processes; e.g., the production of glass, calcium aluminate cement, and organic chemicals.
Heat: Quicklime releases thermal energy by the formation of the hydrate, calcium hydroxide, by the following equation:
CaO (s) + H2O (l) ⇌ Ca(OH)2 (aq) (ΔHr = −63.7 kJ/mol of CaO)
As it hydrates, an exothermic reaction results and the solid puffs up. The hydrate can be reconverted to quicklime by removing the water by heating it to redness to reverse the hydration reaction. One litre of water combines with approximately 3.1 kg of quicklime to give calcium hydroxide plus 3.54 MJ of energy. This process can be used to provide a convenient portable source of heat, as for on-the-spot food warming in a self-heating can, cooking, and heating water without open flames. Several companies sell cooking kits using this heating method.
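The heat and quicklime quantities quoted for one litre of water are consistent with the enthalpy given in the equation above, as the following worked check (using rounded molar masses) shows.

```latex
% Moles of water in one litre, and the matching quantities of CaO and heat:
\[
  n(\mathrm{H_2O}) = \frac{1000\ \mathrm{g}}{18.02\ \mathrm{g\,mol^{-1}}} \approx 55.5\ \mathrm{mol}
\]
\[
  q \approx 55.5\ \mathrm{mol} \times 63.7\ \mathrm{kJ\,mol^{-1}} \approx 3.5\ \mathrm{MJ},
  \qquad
  m(\mathrm{CaO}) \approx 55.5\ \mathrm{mol} \times 56.08\ \mathrm{g\,mol^{-1}} \approx 3.1\ \mathrm{kg}
\]
```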
It is recognized as a food additive by the FAO as an acidity regulator, a flour treatment agent and a leavener. It has E number E529.
Light: When quicklime is heated to around 2,400 °C, it emits an intense glow. This form of illumination is known as a limelight, and was used broadly in theatrical productions before the invention of electric lighting.
Cement: Calcium oxide is a key ingredient for the process of making cement.
As a cheap and widely available alkali. About 50% of the total quicklime production is converted to calcium hydroxide before use. Both quick- and hydrated lime are used in the treatment of drinking water.
Petroleum industry: Water detection pastes contain a mix of calcium oxide and phenolphthalein. Should this paste come into contact with water in a fuel storage tank, the CaO reacts with the water to form calcium hydroxide. Calcium hydroxide has a high enough pH to turn the phenolphthalein a vivid purplish-pink color, thus indicating the presence of water.
Chemical pulping: Calcium oxide is used to make calcium hydroxide, which is used to regenerate sodium hydroxide from sodium carbonate in the chemical recovery at kraft pulp mills.
Plaster: There is archeological evidence that Pre-Pottery Neolithic B humans used limestone-based plaster for flooring and other uses. Such lime-ash floors remained in use until the late nineteenth century.
Chemical or power production: Solid sprays or slurries of calcium oxide can be used to remove sulfur dioxide from exhaust streams in a process called flue-gas desulfurization.
Carbon capture and storage: Calcium oxide can be used to capture carbon dioxide from flue gases in a process called calcium looping.
Mining: Compressed lime cartridges exploit the exothermic properties of quicklime to break rock. A shot hole is drilled into the rock in the usual way and a sealed cartridge of quicklime is placed within and tamped. A quantity of water is then injected into the cartridge and the resulting release of steam, together with the greater volume of the residual hydrated solid, breaks the rock apart. The method does not work if the rock is particularly hard.
Disposal of corpses: Historically, it was mistakenly believed that quicklime was efficacious in accelerating the decomposition of corpses. The application of quicklime can, in fact, promote preservation. Quicklime can aid in eradicating the stench of decomposition, which may have led people to the erroneous conclusion.
It has been determined that the durability of ancient Roman concrete is attributed in part to the use of quicklime as an ingredient. Combined with hot mixing, the quicklime creates macro-sized lime clasts with a characteristically brittle nano-particle architecture. As cracks form in the concrete, they preferentially pass through the structurally weaker lime clasts, fracturing them. When water enters these cracks it creates a calcium-saturated solution which can recrystallize as calcium carbonate, quickly filling the crack.
The thermochemical heat storage mechanism is greatly affected by the sintering of CaO and CaCO3: the storage materials become denser and less reactive at increasing temperatures, and particular sintering processes and variables influence the efficiency of these materials for heat storage.
Weapon
In 80 BC, the Roman general Sertorius deployed choking clouds of caustic lime powder to defeat the Characitani of Hispania, who had taken refuge in inaccessible caves. A similar dust was used in China to quell an armed peasant revolt in 178 AD, when lime chariots equipped with bellows blew limestone powder into the crowds.
Quicklime is also thought to have been a component of Greek fire. Upon contact with water, quicklime would heat up rapidly and ignite the fuel.
David Hume, in his History of England, recounts that early in the reign of Henry III, the English Navy destroyed an invading French fleet by blinding the enemy fleet with quicklime. Quicklime may have been used in medieval naval warfare, including "lime-mortars" used to throw it at enemy ships.
Substitutes
Limestone is a substitute for lime in many applications, which include agriculture, fluxing, and sulfur removal. Limestone, which contains less reactive material, is slower to react and may have other disadvantages compared with lime, depending on the application; however, limestone is considerably less expensive than lime. Calcined gypsum is an alternative material in industrial plasters and mortars. Cement, cement kiln dust, fly ash, and lime kiln dust are potential substitutes for some construction uses of lime. Magnesium hydroxide is a substitute for lime in pH control, and magnesium oxide is a substitute for dolomitic lime as a flux in steelmaking.
Safety
Because of the vigorous reaction of quicklime with water, quicklime causes severe irritation when inhaled or placed in contact with moist skin or eyes. Inhalation may cause coughing, sneezing, and labored breathing. Exposure may then evolve into burns with perforation of the nasal septum, abdominal pain, nausea and vomiting. Although quicklime is not considered a fire hazard, its reaction with water can release enough heat to ignite combustible materials.
Mineral
Calcium oxide is also a separate mineral species (with the unit formula CaO), named 'Lime'. It has an isometric crystal system, and can form a solid solution series with monteponite. The crystal is brittle, pyrometamorphic, and is unstable in moist air, quickly turning into portlandite (Ca(OH)2).
References
External links
Lime Statistics & Information from the United States Geological Survey
Factors Affecting the Quality of Quicklime
American Scientist (discussion of 14C dating of mortar)
Chemical of the Week – Lime
Material Safety Data Sheet
CDC – NIOSH Pocket Guide to Chemical Hazards
Alchemical substances
Bases (chemistry)
Calcium compounds
Cement
Dehydrating agents
Desiccants
Disinfectants
E-number additives
Limestone
Rock salt crystal structure
Oxides | Calcium oxide | Physics,Chemistry | 2,090 |
38,894,374 | https://en.wikipedia.org/wiki/List%20of%20Legionnaires%27%20disease%20outbreaks | This is a list of Legionnaires' disease outbreaks; Legionnaire's is a potentially fatal infectious disease caused by gram negative, aerobic bacteria belonging to the genus Legionella. The first reported outbreak was in Philadelphia, Pennsylvania in 1976 during a Legionnaires Convention at the Bellevue-Stratford Hotel.
An outbreak is defined as two or more cases where the onset of illness is closely linked in time (weeks rather than months) and in space, where there is suspicion of, or evidence of, a common source of infection, with or without microbiological support (i.e. common spatial location of cases from travel history).
Worldwide listings by year
1960s
1970s
1980s
1990s
2000s
2010s
2020s
Governmental controls to prevent outbreaks
Regulations and ordinances
The guidance issued by the UK government's Health and Safety Executive (HSE) now recommends that microbiological monitoring for wet cooling systems, using a dipslide, should be performed weekly. The guidance now also recommends that routine testing for legionella bacteria in wet cooling systems be carried out at least quarterly, and more frequently when a system is being commissioned, or if the bacteria have been identified on a previous occasion.
Further non-statutory UK guidance from the Water Regulations Advisory Scheme now exists for pre-heating of water in applications such as solar water heating systems.
The City of Garland, Texas, United States requires yearly testing for legionella bacteria at cooling towers at apartment buildings.
Malta requires twice yearly testing for Legionella bacteria at cooling towers and water fountains. Malta prohibits the installation of new cooling towers and evaporative condensers at health care facilities and schools.
The Texas Department of State Health Services has provided guidelines for hospitals to detect and prevent the spread of nosocomial infection due to legionella.
The European Working Group for Legionella Infections (EWGLI) was established in 1986 within the European Union framework to share knowledge and experience about potential sources of Legionella and their control. The group has published guidelines on the actions to be taken to limit the number of colony forming units (i.e., the "aerobic count") of micro-organisms per mL at 30 °C (minimum 48 hours incubation) in such systems.
Almost all natural water sources contain Legionella and their presence should not be taken as an indication of a problem. The guideline figures are for total aerobic plate count, cfu/ml at 30 °C (minimum 48 hours incubation), with the colony count determined by the pour plate method according to ISO 6222(21) or the spread plate method on yeast extract agar. Legionella isolation can be conducted using the method developed by the US Centers for Disease Control, using buffered charcoal yeast extract agar with antibiotics.
Copper-Silver ionization is an effective industrial control and prevention process to eradicate Legionella in potable water distribution systems and cooling towers found in health facilities, hotels, nursing homes and most large buildings. In 2003, ionization became the first such hospital disinfection process to have fulfilled a proposed four-step modality evaluation; by then it had been adopted by over 100 hospitals. Additional studies indicate ionization is superior to thermal eradication.
A 2011 study by Lin, Stout and Yu found Copper-Silver ionization to be the only Legionella control technology which has been validated through a 4-step scientific approach.
It was previously believed that transmission of the bacterium was restricted to much shorter distances. A team of French scientists reviewed the details of an epidemic of Legionnaires' disease that took place in Pas-de-Calais in northern France in 2003–2004. There were 86 confirmed cases during the outbreak, of whom 18 died. The source of infection was identified as a cooling tower in a petrochemical plant, and an analysis of those affected in the outbreak revealed that some infected people lived as far as 6–7 km from the plant.
A study of Legionnaires' disease cases in May 2005 in Sarpsborg, Norway concluded that: "The high velocity, large drift, and high humidity in the air scrubber may have contributed to the wide spread of Legionella species, probably for >10 km."
In 2010 a study by the UK Health Protection Agency reported that 20% of cases may be caused by infected windscreen washer systems filled with pure water. The finding came after researchers spotted that professional drivers are five times more likely to contract the disease. No cases of infected systems were found whenever a suitable washer fluid was used.
Temperature affects the survival of Legionella as follows:
70 to 80 °C: Disinfection range
At 66 °C: Legionellae die within 2 minutes
At 60 °C: They die within 32 minutes
At 55 °C: They die within 5 to 6 hours
Above 50 °C: They can survive but do not multiply
35 to 46 °C: Ideal growth range
20 to 50 °C: Growth range
Below 20 °C: They can survive but are dormant
Removing slime, which can carry legionellae when airborne, may be an effective control process.
See also
Legionnaire's Disease
Pontiac fever
1976 Philadelphia Legionnaires' disease outbreak
References
External links
American Legion
Building biology
Legionellosis Outbreaks
Industrial hygiene
Outbreaks
Legionnaires' disease outbreaks
Disease outbreaks
2005 disasters in Canada
2012 disasters in Canada | List of Legionnaires' disease outbreaks | Engineering | 1,046 |
3,062,336 | https://en.wikipedia.org/wiki/Human%20right%20to%20water%20and%20sanitation | The human right to water and sanitation (HRWS) is a principle stating that clean drinking water and sanitation are a universal human right because of their high importance in sustaining every person's life. It was recognized as a human right by the United Nations General Assembly on 28 July 2010. The HRWS has been recognized in international law through human rights treaties, declarations and other standards. Some commentators have based an argument for the existence of a universal human right to water on grounds independent of the 2010 General Assembly resolution, such as Article 11.1 of the International Covenant on Economic, Social and Cultural Rights (ICESCR); among those commentators, those who accept the existence of international ius cogens and consider it to include the Covenant's provisions hold that such a right is a universally binding principle of international law. Other treaties that explicitly recognize the HRWS include the 1979 Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) and the 1989 Convention on the Rights of the Child (CRC).
The clearest definition of the human right to water was issued by the United Nations Committee on Economic, Social and Cultural Rights in General Comment 15 drafted in 2002. It was a non-binding interpretation that access to water was a condition for the enjoyment of the right to an adequate standard of living, inextricably related to the right to the highest attainable standard of health, and therefore a human right. It stated: "The human right to water entitles everyone to sufficient, safe, acceptable, physically accessible and affordable water for personal and domestic uses."
The first resolutions about the HRWS were passed by the UN General Assembly and the UN Human Rights Council in 2010. They stated that there was a human right to sanitation connected to the human right to water, since the lack of sanitation reduces the quality of water downstream, so subsequent discussions have continued emphasizing both rights together. In July 2010, United Nations (UN) General Assembly Resolution 64/292 reasserted the human right to receive safe, affordable, and clean accessible water and sanitation services. The General Assembly stated that safe and clean drinking water and sanitation are acknowledged as a human right, essential to the full enjoyment of life and all human rights. General Assembly Resolution 64/292's assertion of a free human right of access to safe and clean drinking water and sanitation raises issues regarding governmental rights to control and responsibilities for securing that water and sanitation. The United Nations Development Programme has stated that broad recognition of the significance of accessing dependable and clean water and sanitation services will promote wide expansion of the achievement of a healthy and fulfilling life. A revised UN resolution in 2015 highlighted that the two rights were separate but equal.
The HRWS obliges governments to ensure that people can enjoy quality, available, acceptable, accessible, and affordable water and sanitation. Affordability of water considers the extent to which the cost of water becomes prohibitive, such that it requires one to sacrifice access to other essential goods and services. Generally, a rule of thumb for the affordability of water is that it should not surpass 3–5% of household income. Accessibility of water considers the time taken, the convenience of reaching the source, and the risks involved in getting to the source of water. Water must be accessible to every citizen, meaning that the water source should be no further than 1,000 meters (3,280 feet) away and collection should take no more than 30 minutes. Availability of water considers whether the supply of water is available in adequate amounts, reliable, and sustainable. Quality of water considers whether water is safe for consumption, including for drinking or other activities. For acceptability, water must have no odor and no color.
The ICESCR requires signatory countries to progressively achieve and respect all human rights, including those of water and sanitation. They should work quickly and efficiently to increase access and improve service.
International context
The WHO/UNICEF Joint Monitoring Programme for Water Supply and Sanitation reported that 663 million people did not have access to improved sources of drinking water and more than 2.4 billion people lacked access to basic sanitation services in 2015. Access to clean water is a major problem for many parts of the world. Acceptable sources include "household connections, public standpipes, boreholes, protected dug wells, protected springs and rainwater collections." Although 9 percent of the global population lacks access to water, there are "regions particularly delayed, such as Sub-Saharan Africa". The UN further emphasizes that "about 1.5 million children under the age of five die each year and 443 million school days are lost because of water- and sanitation-related diseases." In 2022, over 2 billion people, 25% of the world's population, lacked consistent access to clean drinking water. 4.2 billion lacked access to safe sanitation services. By 2024, new estimates are much higher, with 4.4 billion people in low- and middle-income countries lacking access to safe household drinking water.
Legal foundations and recognition
The International Covenant on Economic, Social and Cultural Rights (ICESCR) of 1966 codified the economic, social, and cultural rights found within the Universal Declaration on Human Rights (UDHR) of 1948. Neither of these early documents explicitly recognized human rights to water and sanitation. Several later international human rights conventions, however, had provisions that explicitly recognized rights to water and sanitation.
The 1979 Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) has Article 14.2 that states that "parties shall take all appropriate measures to eliminate discrimination against women in rural areas to ensure, on a basis of equality of men and women, that they participate in and benefit from rural development and, in particular shall ensure to women the right: ... (h) To enjoy adequate living conditions, particularly in relation to housing, sanitation, electricity and water supply, transport and communications."
The 1989 Convention on the Rights of the Child (CRC) has Article 24 that provides that "parties recognize the right of the child to the enjoyment of the highest attainable standard of health and to facilities for the treatment of illness and rehabilitation of health ... 2. States parties shall pursue full implementation of this right and, in particular, shall take appropriate measures... (c) To combat disease and malnutrition, including within the framework of primary health care, through, inter alia... the provision of adequate nutritious foods and clean drinking water..."
The 2006 Convention on the Rights of Persons with Disabilities (CRPD) has Article 28(2)(a) that requires that "parties recognize the right of persons with disabilities to social protection and to the enjoyment of that right without discrimination on the basis of disability, and shall take appropriate steps to safeguard and promote the realization of this right, including measures to ensure equal access by persons with disabilities to clean water services, and to ensure access to appropriate and affordable services, devices and other assistance for disability-related needs."
"The International Bill of Human Rights"- which comprises the 1966: International Covenant on Civil and Political Rights (ICCPR); 1966: Articles 11 and 12 of the 1966 International Covenant of Economic, Social, and Cultural Right (ICERS); and 1948: Article 25 of the Universal Declaration of Human Rights (UDHR) documented the evolution of human right to water and sanitation and other water-associated rights to be recognised in worldwide decree.
Scholars also called attention to the importance of possible UN recognition of human rights to water and sanitation at the end of the twentieth century. Two early efforts to define the human right to water came from law professor Stephen McCaffrey of the University of the Pacific in 1992 and Dr. Peter Gleick in 1999. McCaffrey stated that "Such a right could be envisaged as part and parcel of the right to food or sustenance, the right to health, or most fundamentally, the right to life." Gleick added "that access to a basic water requirement is a fundamental human right implicitly and explicitly supported by international law, declarations, and State practice."
The UN Committee on Economic, Social and Cultural Rights (CESCR), which oversees ICESCR compliance, came to similar conclusions as these scholars with General Comment 15 in 2002. It found that the right to water was implicitly part of the right to an adequate standard of living and related to the right to the highest attainable standard of health and the rights to adequate housing and adequate food. The comment defines the right as follows: "The human right to water entitles everyone to sufficient, safe, acceptable, physically accessible and affordable water for personal and domestic uses. An adequate amount of safe water is necessary to prevent death from dehydration, to reduce the risk of water-related disease and to provide for consumption, cooking, personal and domestic hygienic requirements." Several countries agreed and formally acknowledged the right to water to be part of their treaty obligations under the ICESCR (e.g., Germany, the United Kingdom, and the Netherlands) after publication of General Comment 15.
A further step was taken in 2005 by the former UN Sub-Commission on the Promotion and Protection of Human Rights which issued guidelines to assist governments to achieve and respect the human right to water and sanitation. These guidelines led the UN Human Rights Council to assign Catarina de Albuquerque as an independent expert on the issue of human rights obligations related to access to safe drinking water and sanitation in 2008. She wrote a detailed report in 2009 that outlined human rights obligations to sanitation, and the CESCR responded by stating that sanitation should be recognized by all states.
Following intense negotiations, 122 countries formally acknowledged "the Human Right to Water and Sanitation" in General Assembly Resolution 64/292 on 28 July 2010. It recognized the right of every human being to have access to sufficient water for personal and domestic uses (between 50 and 100 liters of water per person per day), which must be safe, acceptable and affordable (water costs should not exceed 3% of household income), and physically accessible (the water source has to be within 1,000 meters of the home and collection time should not exceed 30 minutes). The General Assembly declared that clean drinking water is "essential to the full enjoyment of life and all other human rights". In September 2010, the UN Human Rights Council adopted a resolution recognizing that the human right to water and sanitation forms part of the right to an adequate standard of living.
The mandate of Catarina de Albuquerque as "Independent expert on the issue of human rights obligations related to access to safe drinking water and sanitation" was extended and renamed as "Special Rapporteur on the human right to safe drinking water and sanitation" after the resolutions in 2010. Through her reports to the Human Rights Council and the UN General Assembly, she continued clarifying the scope and content of the human right to water and sanitation. As Special Rapporteur, she addressed issues such as: Human Rights Obligations Related to Non-State Service Provision in Water and Sanitation (2010); Financing for the Realization of the Rights to Water and Sanitation (2011); Wastewater management in the realization of the rights to water and sanitation (2013); and Sustainability and non-retrogression in the realization of the rights to water and sanitation (2013). Léo Heller was appointed in 2014 to be the second Special Rapporteur on the human rights to safe drinking water and sanitation.
Subsequent resolutions extended the mandate of the Special Rapporteur and defined each state's role in the respect of these rights. The most recent of these, General Assembly Resolution 70/169 of 2015, has been called a declaration of "The Human Rights to Safe Drinking Water and Sanitation". It recognized the distinction between the right to water and the right to sanitation. This decision was made out of concern that the right to sanitation was being overlooked in comparison with the right to water.
International jurisprudence
Inter-American Court of Human Rights
The right to water has been considered in the Inter-American Court of Human Rights case of the Sawhoyamaxa Indigenous Community v. Paraguay. The issues involved the state's failure to acknowledge indigenous communities' property rights over ancestral lands. In 1991, the state removed the indigenous Sawhoyamaxa community from the land, resulting in their loss of access to basic essential services such as water, food, schooling and health services. This fell within the scope of the American Convention on Human Rights, encroaching on the right to life. Water is included in this right, as part of access to land. The court required the lands to be returned, compensation to be provided, and basic goods and services to be supplied while the community was in the process of having its lands returned.
International Centre for Settlement of Investment Disputes
The following cases from the International Centre for Settlement of Investment Disputes (ICSID) concern the contracts established between governments and corporations for the maintenance of waterways. Although the cases regard questions of investment, commentators have noted that the indirect impact of the right to water upon the verdicts is significant. World Bank data shows that water privatization spiked starting in the 1990s and significant growth in privatization continued into the 2000s.
Azurix Corp v. Argentina
The first notable case regarding the right to water in the ICSID is that of Azurix Corp v. Argentina. The dispute was between the Argentine Republic and Azurix Corporation regarding discrepancies arising from a 30-year contract between the parties to operate the water supply of various provinces. A consideration regarding the right to water was implicitly made during the arbitration for compensation, where it was held that Azurix was entitled to a fair return on the market value of the investment rather than the requested US$438.6 million, on the ground that a reasonable business person could not have expected such a return given the limits on water price increases and the improvements that would be required to ensure a well-functioning, clean water system.
Biwater Gauff Ltd v. Tanzania
Secondly, a similar case encountered by the ICSID is that of Biwater Gauff Ltd v. Tanzania. This was again a case of a private water company in a contractual dispute with a government, this time the United Republic of Tanzania. This contract was for the operation and management of the Dar es Salaam water system. In May 2005, the Tanzania government ended the contract with Biwater Gauff for its alleged failure to meet performance guarantees. In July 2008, the Tribunal issued its decision on the case, declaring that the Tanzania government had violated the agreement with Biwater Gauff. It did not however award monetary damages to Biwater, acknowledging that public interest concerns were paramount in the dispute.
Right to water in domestic law
Without the existence of an international body that can enforce it, the human right to water relies upon the activity of national courts. The basis for this has been established through the constitutionalisation of economic, social and cultural rights (ESCR) through one of two means: as "directive principles" that are goals and are often non-justiciable; or as expressly protected and enforceable through the courts.
South Africa
In South Africa, the right to water is enshrined in the constitution and implemented by ordinary statutes. This is a slight modification of the second technique of constitutionalisation, referred to as the "subsidiary legislation model", in which a large portion of the content and implementation of the right is provided by an ordinary domestic statute with some constitutional standing.
Residents of Bon Vista Mansions v. Southern Metropolitan Local Council
The first notable case in which the courts did so was Residents of Bon Vista Mansions v. Southern Metropolitan Local Council. The case was brought by residents of a block of flats (Bon Vista Mansions) following the disconnection of their water supply by the local council for failure to pay water charges. The court held that, under the South African Constitution, all persons have a constitutional right of access to water.
Further reasoning for the decision was based on General Comment 12 on the Right to Food, made by the UN Committee on Economic, Social and Cultural Rights imposing upon parties to the agreement the obligation to observe and respect already existing access to adequate food by not implementing any encroaching measures.
The court found that the discontinuation of the existing water source, which had not adhered to the "fair and reasonable" requirements of the South African Water Services Act, was illegal. It is important to note that the decision pre-dates the adoption of the UN General Comment No. 15.
Mazibuko v. City of Johannesburg
The quantity of water to be provided was further discussed in Mazibuko v City of Johannesburg. The case revolved around the distribution of water through pipes to Phiri, one of the oldest areas of Soweto. It concerned two major issues: first, whether the city's policy of supplying 6 kilolitres of free basic water per month to each account holder was in conflict with Section 27 of the South African Constitution or Section 11 of the Water Services Act; and second, whether the installation of pre-paid water meters was lawful.
The High Court held that the city's by-laws did not provide for the installation of meters and that their installation was unlawful. Further, because the meters halted the supply of water to a residence once the free basic water supply had been exhausted, this was deemed an unlawful discontinuation of the water supply. The court held that the residents of Phiri should be provided with a free basic water supply of 50 litres per person per day. The work of the Centre for Applied Legal Studies (CALS) of the University of the Witwatersrand in Johannesburg, South Africa and the Pacific Institute in Oakland, California, shared a 2008 Business Ethics Network BENNY Award for their work on this case. The Pacific Institute contributed legal testimony based on the work of Dr. Peter Gleick defining a human right to water and quantifying basic human needs for water.
The respondents took the case to the Supreme Court of Appeal (SCA), which held that the city's water policy had been formulated on the basis of a material error of law in regard to the city's obligation to provide the minimum set in the South African National Standard, and the policy was therefore set aside. The court also held that the quantity required for dignified human existence in compliance with section 27 of the constitution was in fact 42 litres per person per day rather than 50 litres per person per day. The SCA declared that the installation of water meters was illegal, but suspended the order for two years to give the city an opportunity to rectify the situation.
The issues went further to the Constitutional Court, which held that the duty created by the constitution requires the state to take reasonable legislative and other measures progressively to realise the right of access to water, within its available resources. The Constitutional Court also held that it is a matter for the legislature and the executive to act within the limits of their budgets, and that the scrutiny of their programmes is a matter of democratic accountability. Therefore, the minimum content set by regulation 3(b) is constitutional, public bodies may deviate upwards from it, and it is inappropriate for a court to determine the precise content of any social and economic right that the government has taken steps to implement. The courts instead focus their inquiry on whether the steps taken by the government are reasonable and whether the government subjects its policies to regular review. The judgment has been criticized for deploying an "unnecessarily limiting concept of judicial deference".
India
The two most prominent cases in India regarding the right to water illustrate that although this is not explicitly protected in the Constitution of India, it has been interpreted by the courts that the right to life includes the right to safe and sufficient water.
Delhi Water Supply v. State of Haryana
A water usage dispute arose because the state of Haryana was using the Jamuna River for irrigation, while the residents of Delhi needed it for drinking. It was reasoned that domestic use overrode the commercial use of water, and the court ruled that Haryana must allow enough water to reach Delhi for consumption and domestic use.
Subhash Kumar v. State of Bihar
Also notable is the case of Subhash Kumar v. State of Bihar, where a discharge of sludge from the washeries into the Bokaro River was petitioned against by way of public interest litigation. The courts found that the right to life, as protected by Article 21 of the Constitution of India, included the right to enjoy pollution-free water. The case failed upon the facts and it was held that the petition had been filed not in any public interest but for the petitioner's personal interest and therefore a continuation of litigation would amount to an abuse of process.
World Rights to Water Day
Water is essential for the existence of living beings, including humans, so access to pure water in adequate quantity is argued to be an inalienable human right. The Eco Needs Foundation (ENF) therefore considers it necessary to recognise the right to water, with an ensured per capita minimum quantity of water, through an express legal provision. The United Nations, through its several covenants, has made it obligatory for all nations to ensure equitable distribution of water among all citizens. Accordingly, the ENF began to observe and promote the celebration of World Rights to Water Day on 20 March, the date on which Dr. Babasaheb Ambedkar ("the father of modern India") led the world's first satyagraha for water in 1927. World Rights to Water Day calls for the adoption of special legislation establishing the universal right to water. Under the guidance of its founder, Dr Priyanand Agale, the ENF arranges a variety of programmes to promote the right to water for Indian citizens.
New Zealand
ESCR are not currently explicitly protected in New Zealand, either by the Human Rights Act or the Bill of Rights Act, so the right to water is not defended by law there. The New Zealand Law Society has indicated that the country will give further consideration to the legal status of economic, social and cultural rights.
United States
In Pilchen v. City of Auburn, New York, a single mother named Diane Pilchen was living as a rental tenant in a foreclosed house, in which the owner (landlord) of the house had failed to pay the water bill for some time. The City of Auburn billed Pilchen for the landlord's arrears, and repeatedly shut her water service off without notice when she could not pay these debts, making the house uninhabitable. The city condemned the home and forced Pilchen and her child to move out. Pilchen was represented by the Public Utility Law Project of New York (PULP) in the lawsuit. The City of Auburn attempted unsuccessfully to argue that water is not a constitutional right because bottled water could be used instead, an argument that was contested by PULP as absurd. In 2010, Pilchen won summary judgment in which it was determined that shutting off the water violated her constitutional rights, and that Pilchen could not be billed and stopped from having water due to an unrelated party's delays in paying water bills.
Standing Rock Sioux Tribe v. United States Army Corps of Engineers
In 2016, a prominent case arose, Standing Rock Sioux Tribe v. United States Army Corps of Engineers, in which the Sioux Tribe challenged the building of the Dakota Access Pipeline (DAPL). The crude oil pipeline spans four states: it begins in North Dakota, passes through South Dakota and Iowa, and ends in Illinois. The Standing Rock Reservation is located near the border of North and South Dakota, and the pipeline runs within half a mile of it. Because the pipeline was built near the reservation, the tribe feared that the historical and cultural significance of Lake Oahe would be harmed, even though the pipeline does not run directly through the lake. Lake Oahe provides basic water necessities for the Sioux Tribe, such as drinking water and water for sanitation. The construction of the oil pipeline raises the risk of an oil spill into Lake Oahe, which concerned the tribe. The Sioux Tribe sued the DAPL company, arguing that the creation of the pipeline violated the National Environmental Policy Act and the National Historic Preservation Act. After the 2016 briefing, the court was unable to reach a conclusion, so it ordered additional briefings. After five briefings in 2017 and one in 2018, the court allowed the construction of the pipeline, but the Standing Rock tribe continues to fight to have the pipeline removed.
Australia
Attention in Australia is focused on the rights of Indigenous Australians to water and sanitation. A history of settler colonialism shapes the state governance that today regulates water use by Indigenous Australians. There are many governmental agreements, but most fall short of fully giving effect to the Indigenous right to water and sanitation. In Mabo v Queensland (1992), native rights were legally recognised for the first time. Indigenous Australians often claim cultural bonds to the land, and although culture was recognised by the court along with land resources, the cultural and spiritual values that Aboriginal people attach to bodies of water remain poorly defined in law. Translating these cultural and spiritual values into the legal sphere is challenging but necessary, and so far there has been virtually no progress.
Australian water law generally allows citizens to use surface water but not to own it. The constitution, however, says nothing about inland and riparian water, so inland and riparian water rights are primarily a matter for the states. The Commonwealth Government obtains authority over water indirectly, through other constitutional powers, including the grants power and the trade and commerce power.
In 2000, the Federal Court concluded an agreement that allowed Indigenous landowners to take water for traditional purposes. However, the use was limited to traditional purposes, which did not include irrigation as a traditional practice.
In June 2004, the Council of Australian Governments (COAG) concluded an intergovernmental agreement on a National Water Initiative (NWI), promoting recognition of Indigenous rights to water. However, the NWI does not engage broadly with the complex history of settler colonialism, which systematically created an unequal pattern of water distribution. Indigenous people in Australia continue to seek the right to water.
Remaining discussions
Transboundary effects
Given that access to water is a cross-border source of concern and potential conflict in the Middle East, South Asia, the Eastern Mediterranean and parts of North America, among other places, some non-governmental organizations (NGOs) and scholars argue that the right to water also has a trans-national or extraterritorial aspect. They argue that, because water supplies naturally overlap and cross borders, states also have a legal obligation not to act in a way that might have a negative effect on the enjoyment of human rights in other states. The formal acknowledgement of this legal obligation could prevent the negative effects of the global "water crunch" (as a future threat and one negative result of human overpopulation).
Water shortages and increasing consumption of freshwater make this right incredibly complicated. As the world population rapidly increases, freshwater shortages will cause many problems. A shortage in the quantity of water brings up the question of whether or not water should be transferred from one country to another.
Water Dispute Between India and Pakistan
The water dispute between India and Pakistan is influenced by the scarcity of water in the South Asian region. The two countries have a pre-existing agreement known as the Indus Waters Treaty. The treaty was formed to limit conflict between India and Pakistan over the use of the Indus basin and to allocate water supplies between the two countries after they gained independence. However, disagreements regarding it have surfaced. According to the treaty, India is allowed to use the western river basin for irrigation and non-consumptive purposes, while Pakistan has the majority of control over the basin. However, Pakistan has voiced concerns that India's construction on the rivers may lead to severe water scarcity in Pakistan, and that the dams constructed by India for non-consumptive purposes may be used to divert water flow and disrupt Pakistan's water supply. In addition, the treaty involves rivers that originate in Jammu and Kashmir, a region that has been excluded from control over its own water bodies.
Water commercialization versus state provision
Contention exists regarding whose, if anyone's, responsibility it is to ensure the human right to water and sanitation. Often, two schools of thought emerge from such discourse: it is the state's responsibility to provide access to clean water to people versus the privatization of distribution and sanitation.
The commercialization of water is offered as a response to the increased scarcity of water that has resulted as the world population has tripled while demand for water has increased six-fold. Market environmentalism uses markets as a solution to environmental problems such as environmental degradation and the inefficient use of resources. Supporters of market environmentalism believe that managing water as an economic good by private companies will be more efficient than governments providing water resources to their citizens. Such proponents claim that the government costs of developing infrastructure for water resource allocation are not worth the marginal benefits of water provision, thus deeming the state an ineffective provider of water. Moreover, it is argued that water commodification leads to more sustainable water management because of the economic incentives for consumers to use water more efficiently.
The opponents believe that the consequence of water being a human right excludes private sector involvement and requires that water should be given to all people because it is essential to life. Access to water as a human right is used by some NGOs as a means to combat privatization efforts. A human right to water "generally rests on two justifications: the non-substitutability of drinking water ('essential for life'), and the fact that many other human rights which are explicitly recognized in the UN Conventions are predicated upon an (assumed) availability of water (e.g. the right to food)."
Organizations
Organizations working on the rights to water and sanitation are listed below.
United Nations organizations
OHCHR (UN Office of the High Commissioner on Human Rights)
UNDP
UNICEF
Sanitation and Water for All
Governmental cooperation agencies
DFID (United Kingdom's Cooperation Agency)
GIZ (German Corporation for International Cooperation)
SDC (Swiss Agency for Development and Cooperation)
EPA (United States Environmental Protection Agency)
International non-governmental organizations and networks
Action against Hunger (ACF)
Blood:Water
Center for Water Security and Cooperation
Freshwater Action Network (FAN)
Pure Water for the World
The DigDeep Right to Water Project
The Pacific Institute
The Water Project
Transnational Institute with the Water Justice project
UUSC
WaterAid
WaterLex (defunct as of 2020)
PeaceJam
Thirst Project
See also
References
External links
Special Rapporteur on the human right to safe drinking water and sanitation by the UN High Commissioner for Human Rights
WaterLex Archive
The Human Right to Water and Sanitation: Translating Theory into Practice (2009) by GIZ
Right to Water: Understanding children's right to water on Humanium
Water
Sanitation
Right to health | Human right to water and sanitation | Environmental_science | 6,336 |
42,552,728 | https://en.wikipedia.org/wiki/RU-58841 |
RU-58841, also known as PSK-3841 or HMR-3841, is a nonsteroidal antiandrogen (NSAA) which was initially developed in the 1980s by Roussel Uclaf, the French pharmaceutical company from which it received its name. It was formerly under investigation by ProStrakan (previously ProSkelia and Strakan) for potential use as a topical treatment for androgen-dependent conditions including acne, pattern hair loss, and excessive hair growth. The compound is similar in structure to the NSAA RU-58642 but contains a different side chain. These compounds are similar in chemical structure to nilutamide, which is related to flutamide, bicalutamide, and enzalutamide, all of which are likewise NSAAs. RU-58841 can be synthesized either by building the hydantoin moiety or by aryl coupling to 5,5-dimethylhydantoin.
RU-58841 produces cyanonilutamide (RU-56279) and RU-59416 as metabolites in animals. Cyanonilutamide has relatively low affinity for the androgen receptor but shows significant antiandrogenic activity in animals. RU-59416 has very low affinity for the androgen receptor.
See also
Cyanonilutamide
RU-56187
RU-57073
RU-58642
RU-59063
References
Further reading
Abandoned drugs
Primary alcohols
Anti-acne preparations
Hair loss medications
Hair removal
Hydantoins
Nitriles
Nonsteroidal antiandrogens
Trifluoromethyl compounds | RU-58841 | Chemistry | 345 |
35,884,376 | https://en.wikipedia.org/wiki/Curzon%20Memories%20App | The Curzon Memories App is a locative media mobile app based at the Curzon Community Cinema, Clevedon, UK. The cinema celebrated its centenary in April 2012 and is one of the oldest continuously operating independent cinemas in the UK. The app was developed as part of an academic practice-based research project by Charlotte Crofts in collaboration with the Curzon's education officer, Cathy Poole and was funded by the Digital Cultures Research Centre and an Early Career Researcher Grant from the University of the West of England.
The app draws upon the extensive Curzon Collection of Cinema Heritage Technology and contains audio and video dramatisations and oral histories of employees and patrons recounting their experiences throughout the life of the cinema, including Julia Elton (daughter of Sir Arthur Elton, a pioneer of British documentary) and Muriel Williams, who was in the cinema on the night it got bombed during the Bristol Blitz January 1941.
Visitors to the cinema are invited to download the free app, from iTunes or Google Play, and access those experiences in the locations where they actually happened. QR Codes are discreetly placed around the cinema, which act as triggers for these memories. The App also contains a walk around the exterior of the building which includes memories and rephotography of archive photographs triggered by GPS signals as the participant takes the walk. For those who cannot get to the Curzon Cinema, there is a manual interface, and the QR codes are available at the official website.
The app was built using the AppFurnace authoring system, made by Calvium, and is available on both Apple iOS and Android.
Projection Hero
One of the unique features of the Curzon Memories project is Projection Hero, an Internet of Things installation developed by Charlotte Crofts, in collaboration with Tarim at Media Playgrounds, which comprises a miniature cinema that can be operated via any web-enabled smartphone by scanning the QR code on the cinema screen, either from within the Curzon Memories App or using any QR reader. The installation also works by typing the accompanying URL into a web browser. This takes the user to an interface which enables the user to manipulate the cinema. The smartphone effectively becomes a "remote control", allowing the user to dim the lights, open the curtains and play the movies, using a combination of an Arduino circuit and actuators connected to the Internet via an H bridge. The installation features interviews with retired projectionists Maurice Thornton and Pete Stamp, discussing the art of projection in the digital age. The installation is on permanent display at the Curzon Cinema, Clevedon, and has also been exhibited at the Watershed Media Centre, Bristol.
Reviews
The Curzon Memories App was selected to pitch in front of industry judges at AppCircus at London's Google Campus, and won the poster prize at the Media, Communications and Cultural Studies Association (MeCCSA) 2012 annual conference. The project was featured on BBC Radio 4's You and Yours in an item on AppCircus, where one of the jury, Facebook's Simon Cross, said "You take the concept of what they did which is to bring the history of this old cinema to life, they could've stopped at videos and text, but connecting it to something physical such as a screen you can control from your phone, that's a completely different experience … it just feels like you are connected to it, because your phone is controlling it … that feels pretty cool to me".
The app has also been featured in the following press: The Guardian Apps Rush: "More location-based UK goodness with this app, based on the Curzon Community Cinema in Clevedon. It gets people to scan QR codes around the building to hear stories from local people. There's a manual interface to access everything if you're not in Clevedon too", Wired UK Magazine, Film Studies For Free, The Pervasive Media Cookbook, Imperica.
Research
This app was developed as part of an academic research project based at the University of the West of England to explore how new media can be used to enhance cinema heritage. It has been disseminated at a number of international academic conferences:
2011 ‘The Curzon Creative Technologies Project: Context Aware Media in a Screen Heritage Context’ at MeCCSA 2011 Annual Conference, Salford, Jan.
2011 ‘Technologies of Being: Pervasive Heritage’ at Postdigital Encounters: Creativity and Improvisation, JMP Symposium, UWE/Watershed, Bristol, June.
2011 ‘Pervasive Screens: Transforming the Consumption of Cinema History with the Curzon Heritage App’, Screen 2011, Glasgow, July.
2011 ‘Technologies of Seeing the Past: The Curzon Memories App’, EVA London 2011 Electronic and Visual Arts Conference, July. Article published in the peer-reviewed conference proceedings.
2012 Poster ‘“You Must Remember This”: Using Mobile Technologies to Celebrate the History of the Cinema with the Curzon Memories App’, plus exhibit of the Projection Hero installation, MeCCSA 2012, Jan – won prize for best poster.
‘The Curzon Memories Project: Reflections On The First Iteration’ at the Open Studio Lunchtime Talk, Pervasive Media Studio, Bristol, March (invited).
2011 ‘The Curzon Memories App: Designing a Screen Heritage Experience’, at Centre for Media and Cultural Research, Birmingham City University, May (invited, plus expenses).
2011 ‘Spatialising the Archive: Dramatisation, Oral History and Locative Heritage in the Curzon Memories App’ at IMTAL International Museum and Theatre Alliance Conference, Bristol, October (invited).
2011 ‘The Curzon Memories App – Shifting Practices, from Filmmaking to Experience Design’ MA Masterclass, University of Sussex Department of Media, Film and Music, October (invited, plus fee and expenses).
2011 Presentation on ‘The Curzon Memories App’, with Curzon Education Officer, Cathy Poole at MovIES (Moving Image Education Specialists) Meeting, BFI Southbank, November (invited, plus expenses).
2012 ‘Geo-spatial and Geo-temporal documentary: The Curzon Memories App, City Strata and The Cinemap Layer’, i-docs 2012, Watershed Media Centre, March (invited).
2012 Keynote: 'Being T/here: Locative Media Design and the Need for Armchair Mode', Environmental Utterance Conference, University of Falmouth, September (invited, plus fee and expenses).
References
External links
Curzon Memories App, Official Website
Clevedon
Augmented reality
Virtual reality works
Smartphones
Multimedia | Curzon Memories App | Technology | 1,339 |
10,799,951 | https://en.wikipedia.org/wiki/Power%20supply%20rejection%20ratio | In electronic systems, power supply rejection ratio (PSRR), also supply-voltage rejection ratio (kSVR; SVR), is a term widely used to describe the capability of an electronic circuit to suppress any power supply variations to its output signal.
In the specifications of operational amplifiers, the PSRR is defined as the ratio of the change in supply voltage to the equivalent (differential) output voltage it produces, often expressed in decibels. An ideal op-amp would have infinite PSRR, as the device should have no change to the output voltage with any changes to the power supply voltage. The output voltage will depend on the feedback circuit, as is the case of regular input offset voltages. But testing is not confined to DC (zero frequency); often an operational amplifier will also have its PSRR given at various frequencies (in which case the ratio is one of RMS amplitudes of sinewaves present at a power supply compared with the output, with gain taken into account). Unwanted oscillation, including motorboating, can occur when an amplifying stage is too sensitive to signals fed via the power supply from a later power amplifier stage.
Some manufacturers specify PSRR in terms of the offset voltage it causes at the amplifier's inputs; others specify it in terms of the output; there is no industry standard for this issue. The following formula assumes it is specified in terms of input:

PSRR = 20 log10((ΔV_supply / ΔV_out) · A_v) dB

where A_v is the voltage gain.
For example: an amplifier with a PSRR of 100 dB in a circuit giving 40 dB of closed-loop gain would allow about 1 millivolt of power supply ripple to be superimposed on the output for every 1 volt of ripple in the supply. This is because

100 dB - 40 dB = 60 dB.

And since that is 60 dB of net rejection, the sign of the exponent is negative, so:

1 V × 10^(-60/20) = 1 mV.
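A short numerical sketch of this dB arithmetic (the function name and the simple model are illustrative; the frequency dependence of PSRR is ignored):

```python
def output_ripple_volts(supply_ripple_volts, psrr_db, closed_loop_gain_db):
    """Estimate the ripple at the output of an op-amp circuit, given the
    op-amp's PSRR and the circuit's closed-loop gain, both in dB."""
    net_rejection_db = psrr_db - closed_loop_gain_db     # 100 - 40 = 60 dB here
    return supply_ripple_volts * 10 ** (-net_rejection_db / 20)

print(output_ripple_volts(1.0, 100, 40))   # ~0.001 V, i.e. about 1 mV
```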
Note:
The PSRR doesn't necessarily have the same poles as A(s), the open-loop gain of the op-amp, but generally tends to also worsen with increasing frequency (e.g. http://focus.ti.com/lit/ds/symlink/opa2277.pdf).
For amplifiers with both positive and negative power supplies (with respect to earth, as op-amps often have), the PSRR for each supply voltage may be separately specified (sometimes written: PSRR+ and PSRR−), but normally the PSRR is tested with opposite polarity signals applied to both supply rails at the same time (otherwise the common-mode rejection ratio (CMRR) will affect the measurement of the PSRR).
For voltage regulators the PSRR is occasionally quoted (confusingly; to refer to output voltage change ratios), but often the concept is transferred to other terms relating changes in output voltage to input: Ripple rejection (RR) for low frequencies, line transient response for high frequencies, and line regulation for DC.
References
External links
Operational Amplifier Power Supply Rejection Ratio (PSRR) and Supply Voltages by Analog Devices, Inc. Definition and measurement of PSRR.
Application Note on PSRR Testing of Linear Voltage Regulators, by Florian Hämmerle (OMICRON Lab) and Steven Sandler (Picotest)
Introduction to System Design Using Integrated Circuits, via Google Books
Electronics concepts
Power supplies
Engineering ratios | Power supply rejection ratio | Mathematics,Engineering | 666 |
12,415,907 | https://en.wikipedia.org/wiki/Claw-free%20graph | In graph theory, an area of mathematics, a claw-free graph is a graph that does not have a claw as an induced subgraph.
A claw is another name for the complete bipartite graph K_{1,3} (that is, a star graph comprising three edges, three leaves, and a central vertex). A claw-free graph is a graph in which no induced subgraph is a claw; i.e., no subset of four vertices induces exactly three edges meeting at a common vertex. Equivalently, a claw-free graph is a graph in which the neighborhood of any vertex is the complement of a triangle-free graph.
Claw-free graphs were initially studied as a generalization of line graphs, and gained additional motivation through three key discoveries about them: the fact that all claw-free connected graphs of even order have perfect matchings, the discovery of polynomial time algorithms for finding maximum independent sets in claw-free graphs, and the characterization of claw-free perfect graphs. They are the subject of hundreds of mathematical research papers and several surveys.
Examples
The line graph L(G) of any graph G is claw-free. L(G) is defined as having a vertex for every edge of G. Two vertices are adjacent in L(G) whenever the corresponding edges share an endpoint in G. A line graph cannot contain a claw. If three edges e1, e2, and e3 in G all share endpoints with another edge e (forming a tree in L(G) with e1, e2, and e3 as its leaves and e as an internal vertex), then there must be another adjacency between these edges, preventing this tree from being an induced subgraph. This is because, by the pigeonhole principle, at least two of e1, e2, and e3 must share one of the two endpoints of e with each other, and are therefore adjacent in L(G). Line graphs may be characterized in terms of nine forbidden subgraphs; the claw is the simplest of these nine graphs. This characterization provided the initial motivation for studying claw-free graphs.
The complement of any triangle-free graph is claw-free. These graphs include as a special case any complete graph.
Proper interval graphs, the interval graphs formed as intersection graphs of families of intervals in which no interval contains another interval, are claw-free, because four properly intersecting intervals cannot intersect in the pattern of a claw. The same is true more generally for proper circular-arc graphs.
The Moser spindle, a seven-vertex graph used to provide a lower bound for the chromatic number of the plane, is claw-free.
The graphs of several polyhedra and polytopes are claw-free, including the graph of the tetrahedron and more generally of any simplex (a complete graph), the graph of the octahedron and more generally of any cross polytope (isomorphic to the cocktail party graph formed by removing a perfect matching from a complete graph), the graph of the regular icosahedron, and the graph of the 16-cell.
The Schläfli graph, a strongly regular graph with 27 vertices, is claw-free.
Recognition
It is straightforward to verify that a given graph with n vertices and m edges is claw-free in time O(n^4), by testing each 4-tuple of vertices to determine whether they induce a claw. With more efficiency, and greater complication, one can test whether a graph is claw-free by checking, for each vertex of the graph, that the complement graph of its neighbors does not contain a triangle. A graph contains a triangle if and only if the cube of its adjacency matrix contains a nonzero diagonal element, so finding a triangle may be performed in the same asymptotic time bound as matrix multiplication. Therefore, using fast matrix multiplication, the total time for this claw-free recognition algorithm would be O(n^(1+ω)), where ω < 2.38 denotes the exponent of fast matrix multiplication.
Kloks, Kratsch & Müller observe that in any claw-free graph, each vertex has at most 2√m neighbors; for otherwise, by Turán's theorem, the neighbors of the vertex would not have enough remaining edges to form the complement of a triangle-free graph. This observation allows the check of each neighborhood in the fast matrix multiplication based algorithm outlined above to be performed in the same asymptotic time bound as the multiplication of matrices of side 2√m, or faster for vertices with even lower degrees. The worst case for this algorithm occurs when Θ(√m) vertices have Θ(√m) neighbors each, and the remaining vertices have few neighbors, so its total time is O(m^((ω+1)/2)) ≈ O(m^1.69).
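A small sketch of the straightforward quartic-time test described above (not the faster matrix-multiplication approach), assuming the graph is given as a dictionary mapping each vertex to its set of neighbours:

```python
from itertools import combinations

def is_claw_free(adj):
    """Brute-force O(n^4) test: adj maps each vertex of a simple undirected
    graph to the set of its neighbours."""
    for center in adj:
        # An induced claw centred here needs three pairwise non-adjacent neighbours.
        for a, b, c in combinations(sorted(adj[center]), 3):
            if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                return False
    return True

claw = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}      # the claw K_{1,3} itself
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}      # a 4-vertex path
print(is_claw_free(claw))   # False
print(is_claw_free(path))   # True
```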
Enumeration
Because claw-free graphs include complements of triangle-free graphs, the number of claw-free graphs on n vertices grows at least as quickly as the number of triangle-free graphs, exponentially in the square of n.
The numbers of connected claw-free graphs on n nodes, for n = 1, 2, 3, ..., are
1, 1, 2, 5, 14, 50, 191, 881, 4494, 26389, 184749, ... .
If the graphs are allowed to be disconnected, the numbers of graphs are even larger: they are
1, 2, 4, 10, 26, 85, 302, 1285, 6170, ... .
One counting technique allows the number of claw-free cubic graphs to be counted very efficiently, which is unusual for graph enumeration problems.
Matchings
Sumner and, independently, Las Vergnas proved that every claw-free connected graph with an even number of vertices has a perfect matching. That is, there exists a set of edges in the graph such that each vertex is an endpoint of exactly one of the matched edges. The special case of this result for line graphs implies that, in any graph with an even number of edges, one can partition the edges into paths of length two. Perfect matchings may be used to provide another characterization of the claw-free graphs: they are exactly the graphs in which every connected induced subgraph of even order has a perfect matching.
Sumner's proof shows, more strongly, that in any connected claw-free graph one can find a pair of adjacent vertices the removal of which leaves the remaining graph connected. To show this, Sumner finds a pair of vertices u and v that are as far apart as possible in the graph, and chooses w to be a neighbor of v that is as far from u as possible. As he shows, neither v nor w can lie on any shortest path from any other node to u, so the removal of v and w leaves the remaining graph connected. Repeatedly removing matched pairs of vertices in this way forms a perfect matching in the given claw-free graph.
The same proof idea holds more generally if u is any vertex, v is any vertex that is maximally far from u, and w is any neighbor of v that is maximally far from u. Further, the removal of v and w from the graph does not change any of the other distances from u. Therefore, the process of forming a matching by finding and removing such pairs that are maximally far from u may be performed by a single postorder traversal of a breadth first search tree of the graph, rooted at u, in linear time. Other authors provide an alternative linear-time algorithm based on depth-first search, as well as efficient parallel algorithms for the same problem.
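A minimal sketch of this pairing procedure, assuming the input is a connected claw-free graph of even order given as a dictionary of neighbour sets (this simple version recomputes a breadth-first search for each pair rather than using the linear-time traversal described above):

```python
from collections import deque

def _bfs_distances(adj, source, alive):
    """Breadth-first distances from `source`, restricted to vertices in `alive`."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y in alive and y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def claw_free_perfect_matching(adj):
    """Pair up the vertices of a connected claw-free graph of even order by
    repeatedly matching a farthest vertex with its farthest neighbour and
    deleting the pair, following Sumner's argument."""
    alive = set(adj)
    matching = []
    while alive:
        root = next(iter(alive))
        dist = _bfs_distances(adj, root, alive)
        v = max(dist, key=dist.get)                        # farthest from root
        w = max((y for y in adj[v] if y in alive), key=dist.get)
        matching.append((v, w))
        alive -= {v, w}         # Sumner: the remaining graph stays connected
    return matching

cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}      # claw-free, 4 vertices
print(claw_free_perfect_matching(cycle4))
```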
list several related results, including the following: -connected -free graphs of even order have perfect matchings for any ; claw-free graphs of odd order with at most one degree-one vertex may be partitioned into an odd cycle and a matching; for any that is at most half the minimum degree of a claw-free graph in which either or the number of vertices is even, the graph has a -factor; and, if a claw-free graph is -connected, then any -edge matching can be extended to a perfect matching.
Independent sets
An independent set in a line graph corresponds to a matching in its underlying graph, a set of edges no two of which share an endpoint. The blossom algorithm of Edmonds finds a maximum matching in any graph in polynomial time, which is equivalent to computing a maximum independent set in line graphs. This has been independently extended to an algorithm for all claw-free graphs by Sbihi and by Minty.
Both approaches use the observation that in claw-free graphs, no vertex can have more than two neighbors in an independent set, and so the symmetric difference of two independent sets must induce a subgraph of degree at most two; that is, it is a union of paths and cycles. In particular, if I is a non-maximum independent set, it differs from any maximum independent set by even cycles and so-called augmenting paths: induced paths which alternate between vertices not in I and vertices in I, and for which both endpoints have only one neighbor in I. As the symmetric difference of I with any augmenting path gives a larger independent set, the task thus reduces to searching for augmenting paths until no more can be found, analogously as in algorithms for finding maximum matchings.
Sbihi's algorithm recreates the blossom contraction step of Edmonds' algorithm and adds a similar, but more complicated, clique contraction step. Minty's approach is to transform the problem instance into an auxiliary line graph and use Edmonds' algorithm directly to find the augmenting paths. After a later correction, Minty's result may also be used to solve in polynomial time the more general problem of finding in claw-free graphs an independent set of maximum weight.
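For the line-graph special case, the correspondence between matchings and independent sets is easy to demonstrate with standard tools; the following sketch uses the networkx implementation of Edmonds' blossom method (the Sbihi and Minty algorithms for general claw-free graphs are not provided by networkx):

```python
import networkx as nx

G = nx.petersen_graph()                      # underlying graph
L = nx.line_graph(G)                         # its (claw-free) line graph

# Maximum-cardinality matching of G via the blossom-based algorithm.
matching = nx.max_weight_matching(G, maxcardinality=True)

# Translate matched edges of G into vertices of L(G), fixing edge orientation.
as_node = {frozenset(node): node for node in L.nodes()}
independent_set = {as_node[frozenset(edge)] for edge in matching}

# No two matched edges share an endpoint, so these vertices are independent in L.
assert all(not L.has_edge(a, b) for a in independent_set
           for b in independent_set if a != b)
print(len(independent_set))                  # 5: maximum independent set of L
```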
Generalizations of these results to wider classes of graphs are also known.
By showing a novel structure theorem, Faenza, Oriolo, and Stauffer gave a cubic-time algorithm, which also works in the weighted setting.
Coloring, cliques, and domination
A perfect graph is a graph in which the chromatic number and the size of the maximum clique are equal, and in which this equality persists in every induced subgraph. It is now known (the strong perfect graph theorem) that perfect graphs may be characterized as the graphs that do not have as induced subgraphs either an odd cycle or the complement of an odd cycle (a so-called odd hole). However, for many years this remained an unsolved conjecture, only proven for special subclasses of graphs. One of these subclasses was the family of claw-free graphs: it was discovered by several authors that claw-free graphs without odd cycles and odd holes are perfect. Perfect claw-free graphs may be recognized in polynomial time. In a perfect claw-free graph, the neighborhood of any vertex forms the complement of a bipartite graph. It is possible to color perfect claw-free graphs, or to find maximum cliques in them, in polynomial time.
In general, it is NP-hard to find the largest clique in a claw-free graph. It is also NP-hard to find an optimal coloring of the graph, because (via line graphs) this problem generalizes the NP-hard problem of computing the chromatic index of a graph. For the same reason, it is NP-hard to find a coloring that achieves an approximation ratio better than 4/3. However, an approximation ratio of two can be achieved by a greedy coloring algorithm, because the chromatic number of a claw-free graph is greater than half its maximum degree. A generalization of the edge list coloring conjecture states that, for claw-free graphs, the list chromatic number equals the chromatic number; these two numbers can be far apart in other kinds of graphs.
The claw-free graphs are χ-bounded, meaning that every claw-free graph of large chromatic number contains a large clique. More strongly, it follows from Ramsey's theorem that every claw-free graph of large maximum degree contains a large clique, of size roughly proportional to the square root of the degree. For connected claw-free graphs that include at least one three-vertex independent set, a stronger relation between chromatic number and clique size is possible: in these graphs, there exists a clique of size at least half the chromatic number.
Although not every claw-free graph is perfect, claw-free graphs satisfy another property, related to perfection. A graph is called domination perfect if it has a minimum dominating set that is independent, and if the same property holds in all of its induced subgraphs. Claw-free graphs have this property. To see this, let D be a dominating set in a claw-free graph, and suppose that v and w are two adjacent vertices in D. Then the set of vertices dominated by v but not by w must be a clique (else v would be the center of a claw). If every vertex in this clique is already dominated by at least one other member of D, then v can be removed, producing a smaller dominating set, and otherwise v can be replaced by one of the undominated vertices in its clique, producing a dominating set with fewer adjacencies. By repeating this replacement process one eventually reaches a dominating set no larger than D, so in particular when the starting set D is a minimum dominating set this process forms an equally small independent dominating set.
Despite this domination perfectness property, it is NP-hard to determine the size of the minimum dominating set in a claw-free graph. However, in contrast to the situation for more general classes of graphs, finding the minimum dominating set or the minimum connected dominating set in a claw-free graph is fixed-parameter tractable: it can be solved in time bounded by a polynomial in the size of the graph multiplied by an exponential function of the dominating set size.
Structure
Chudnovsky and Seymour overview a series of papers in which they prove a structure theory for claw-free graphs, analogous to the graph structure theorem for minor-closed graph families proven by Robertson and Seymour, and to the structure theory for perfect graphs that Chudnovsky, Seymour and their co-authors used to prove the strong perfect graph theorem. The theory is too complex to describe in detail here, but to give a flavor of it, it suffices to outline two of their results. First, for a special subclass of claw-free graphs which they call quasi-line graphs (equivalently, locally co-bipartite graphs), they state that every such graph has one of two forms:
A fuzzy circular interval graph, a class of graphs represented geometrically by points and arcs on a circle, generalizing proper circular arc graphs.
A graph constructed from a multigraph by replacing each edge by a fuzzy linear interval graph. This generalizes the construction of a line graph, in which every edge of the multigraph is replaced by a vertex. Fuzzy linear interval graphs are constructed in the same way as fuzzy circular interval graphs, but on a line rather than on a circle.
Chudnovsky and Seymour classify arbitrary connected claw-free graphs into one of the following:
Six specific subclasses of claw-free graphs. Three of these are line graphs, proper circular arc graphs, and the induced subgraphs of an icosahedron; the other three involve additional definitions.
Graphs formed in four simple ways from smaller claw-free graphs.
Antiprismatic graphs, a class of dense graphs defined as the claw-free graphs in which every four vertices induce a subgraph with at least two edges.
Much of the work in their structure theory involves a further analysis of antiprismatic graphs. The Schläfli graph, a claw-free strongly regular graph with parameters srg(27,16,10,8), plays an important role in this part of the analysis. This structure theory has led to new advances in polyhedral combinatorics and new bounds on the chromatic number of claw-free graphs, as well as to new fixed-parameter-tractable algorithms for dominating sets in claw-free graphs.
Notes
References
External links
Claw-free graphs, Information System on Graph Class Inclusions
Graph families
Matching (graph theory) | Claw-free graph | Mathematics | 3,163 |
5,993,712 | https://en.wikipedia.org/wiki/Koide%20formula | The Koide formula is an unexplained empirical equation discovered by Yoshio Koide in 1981. In its original form, it is not fully empirical but a set of guesses for a model for masses of quarks and leptons, as well as CKM angles. From this model it survives the observation about the masses of the three charged leptons; later authors have extended the relation to neutrinos, quarks, and other families of particles.
Formula
The Koide formula is

Q = (m_e + m_μ + m_τ) / (√m_e + √m_μ + √m_τ)² = 2/3,

where the masses of the electron, muon, and tau are measured as approximately m_e ≈ 0.511 MeV/c², m_μ ≈ 105.66 MeV/c², and m_τ ≈ 1776.9 MeV/c². With the measured masses, Q agrees with 2/3 to within the experimental uncertainty.
No matter what masses are chosen to stand in place of the electron, muon, and tau, the ratio Q is constrained to the range 1/3 ≤ Q < 1. The upper bound follows from the fact that the square roots are necessarily positive, and the lower bound follows from the Cauchy–Bunyakovsky–Schwarz inequality. The experimentally determined value, Q ≈ 2/3, lies at the center of the mathematically allowed range. But note that if the requirement of positive roots is removed, it is possible to fit an extra tuple in the quark sector (the one with strange, charm and bottom).
The mystery is in the physical value. Not only is the result peculiar in that three ostensibly arbitrary numbers give a simple fraction, but also in that, in the case of electron, muon, and tau, Q is exactly halfway between the two extremes of all possible combinations: 1/3 (if the three masses were equal) and 1 (if one mass dwarfs the other two). Q is a dimensionless quantity, so the relation holds regardless of which unit is used to express the magnitudes of the masses.
Robert Foot also interpreted the Koide formula as a geometrical relation, in which 1/(3Q) is the squared cosine of the angle between the vector (√m_e, √m_μ, √m_τ) and the vector (1, 1, 1) (see Dot product). For Q = 2/3 that angle is almost exactly 45 degrees.
When the formula is assumed to hold exactly (Q = 2/3), it may be used to predict the tau mass from the (more precisely known) electron and muon masses; that prediction is m_τ ≈ 1776.97 MeV/c².
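Both the measured value of Q and the tau-mass prediction can be checked numerically. A minimal sketch, using approximate mass values and a simple bisection (not any standard package for this formula):

```python
from math import sqrt

# Approximate charged-lepton masses in MeV/c^2 (indicative values only;
# consult the Particle Data Group for current figures).
m_e, m_mu, m_tau = 0.510999, 105.658, 1776.86

def koide_Q(m1, m2, m3):
    """Koide's ratio Q = (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))**2."""
    return (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3)) ** 2

print(koide_Q(m_e, m_mu, m_tau))      # approximately 0.66666, i.e. close to 2/3

# Predict the tau mass by solving Q(m_e, m_mu, x) = 2/3; Q is increasing in x
# in this range, so bisection suffices.
lo, hi = 1000.0, 3000.0
for _ in range(100):
    mid = (lo + hi) / 2.0
    if koide_Q(m_e, m_mu, mid) < 2.0 / 3.0:
        lo = mid
    else:
        hi = mid
print(lo)                             # close to 1777 MeV/c^2
```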
While the original formula arose in the context of preon models, other ways have been found to derive it (both by Sumino and by Koide – see references below). As a whole, however, understanding remains incomplete. Similar matches have been found for triplets of quarks depending on running masses. With alternating quarks, chaining Koide equations for consecutive triplets, it is possible to reach a prediction for the mass of the top quark.
Notable properties
Permutation symmetry
The Koide relation exhibits permutation symmetry among the three charged lepton masses m_e, m_μ, and m_τ: the value of Q remains unchanged under any interchange of these masses. Since the relation depends only on the sum of the masses and the sum of their square roots, any permutation of m_e, m_μ, and m_τ leaves Q invariant, i.e.

Q(m_e, m_μ, m_τ) = Q(m_σ(1), m_σ(2), m_σ(3))

for any permutation σ of the three masses.
Scale invariance
The Koide relation is scale invariant; that is, multiplying each mass by a common constant does not affect the value of Q. Replacing each mass m_i by λm_i multiplies both the numerator m_e + m_μ + m_τ and the denominator (√m_e + √m_μ + √m_τ)² by λ, so the factor cancels:

Q(λm_e, λm_μ, λm_τ) = Q(m_e, m_μ, m_τ).

Therefore, Q remains unchanged under scaling of the masses by a common factor.
Speculative extension
Carl Brannen has proposed that the lepton masses are given by the squares of the eigenvalues of a circulant matrix with real eigenvalues, corresponding to the relation

√m_n = μ [1 + 2η cos(δ + 2πn/3)]    for n = 0, 1, 2,

which can be fit to experimental data with η² = 0.500003(23) (corresponding to the Koide relation) and phase δ = 0.2222220(19), which is almost exactly 2/9. However, the experimental data are in conflict with simultaneous equality of η² = 1/2 and δ = 2/9.
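Assuming the parametrisation has the form √m_n = μ(1 + 2η cos(δ + 2πn/3)), as reconstructed above, a short numerical check shows that η² = 1/2 reproduces Q = 2/3 independently of the overall scale μ (the value of μ below is arbitrary):

```python
from math import sqrt, cos, pi

# Circulant-eigenvalue parametrisation: sqrt(m_n) = mu * (1 + 2*eta*cos(delta + 2*pi*n/3)).
mu, eta, delta = 17.7, sqrt(0.5), 2.0 / 9.0    # mu chosen only for illustration
masses = [(mu * (1 + 2 * eta * cos(delta + 2 * pi * n / 3))) ** 2 for n in range(3)]

Q = sum(masses) / sum(sqrt(m) for m in masses) ** 2
print(Q)   # 2/3 up to floating point, for any mu, as long as all square roots are positive
```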
This kind of relation has also been proposed for the quark families, with phases related to the low-energy leptonic value 2/9, hinting at a relation with the electric charge of the particle family.
Origins
The original derivation
postulates with the conditions
from which the formula follows. Besides, masses for neutrinos and down quarks were postulated to be proportional to while masses for up quarks were postulated to be
The published model justifies the first condition as part of a symmetry breaking scheme, and the second one as a "flavor charge" for preons in the interaction that causes this symmetry breaking.
Note that in matrix form with and the equations are simply and
Similar formulae
There are similar formulae which relate other masses.
Quark masses depend on the energy scale used to measure them, which makes an analysis more complicated.
Taking the heaviest three quarks, charm (≈1.28 GeV/c²), bottom (≈4.18 GeV/c²) and top (≈173 GeV/c²), regardless of their uncertainties, one arrives at the value cited by F. G. Cao (2012), approximately 0.669 with these masses.
This was noticed by Rodejohann and Zhang in the preprint of their 2011 article, but the observation was removed in the published version, so the first published mention is in 2012 from Cao.
A further relation of the same type is published as part of the analysis of Rivero, who notes (footnote 3 in the reference) that an increase in the value of the charm mass makes both equations, heavy and middle, exact.
The masses of the lightest quarks, up (≈2.2 MeV/c²), down (≈4.7 MeV/c²), and strange (≈95 MeV/c²), without using their experimental uncertainties, yield a value of about 0.57 with these masses, also cited by Cao in the same article. An older article, by H. Harari et al., calculates theoretical values for up, down and strange quarks, coincidentally matching the later Koide formula, albeit with a massless up quark.
This could be considered the first appearance of a Koide-type formula in the literature.
Running of particle masses
In quantum field theory, quantities like coupling constant and mass "run" with the energy scale.
That is, their value depends on the energy scale at which the observation occurs, in a way described by a renormalization group equation (RGE).
One usually expects relationships between such quantities to be simple at high energies (where some symmetry is unbroken) but not at low energies, where the RG flow will have produced complicated deviations from the high-energy relation. The Koide relation is exact (within experimental error) for the pole masses, which are low-energy quantities defined at different energy scales. For this reason, many physicists regard the relation as "numerology".
However, the Japanese physicist Yukinari Sumino has proposed mechanisms to explain the origin of the charged lepton spectrum as well as the Koide formula, e.g., by constructing an effective field theory with a new gauge symmetry that causes the pole masses to exactly satisfy the relation.
Koide has published his opinions concerning Sumino's model.
François Goffinet's doctoral thesis gives a discussion on pole masses and how the Koide formula can be reformulated to avoid using square roots for the masses.
As solutions to a cubic equation
A cubic equation usually arises in symmetry breaking when solving for the Higgs vacuum, and is a natural object when considering three generations of particles. This involves finding the eigenvalues of a 3 × 3 mass matrix.
For this example, consider a characteristic polynomial
with roots that must be real and positive.
To derive the Koide relation, let and the resulting polynomial can be factored into
or
The elementary symmetric polynomials of the roots must reproduce the corresponding coefficients from the polynomial that they solve, so and Taking the ratio of these symmetric polynomials, but squaring the first so we divide out the unknown parameter we get a Koide-type formula: Regardless of the value of the solutions to the cubic equation for must satisfy
so
and
Converting back to
For the relativistic case, Goffinet's dissertation presented a similar method to build a polynomial with only even powers of
Higgs mechanism
Koide proposed that an explanation for the formula could be a Higgs particle with flavour charge given by:
with the charged lepton mass terms given by Such a potential is minimised when the masses fit the Koide formula. Minimising does not give the mass scale, which would have to be given by additional terms of the potential, so the Koide formula might indicate existence of additional scalar particles beyond the Standard Model's Higgs boson.
In fact one such Higgs potential would be precisely which when expanded out the determinant in terms of traces would simplify using the Koide relations.
Footnotes
See also
References
Further reading
(See the article's references links to "The lepton masses" and "Recent results from the MINOS experiment".)
External links
Wolfram Alpha, link solves for the predicted tau mass from the Koide formula.
Leptons
Unsolved problems in physics
Empirical laws
1980s in science | Koide formula | Physics | 1,810 |
44,429,959 | https://en.wikipedia.org/wiki/Temporal%20network | A temporal network, also known as a time-varying network, is a network whose links are active only at certain points in time. Each link carries information on when it is active, along with other possible characteristics such as a weight. Time-varying networks are of particular relevance to spreading processes, like the spread of information and disease, since each link is a contact opportunity and the time ordering of contacts is included.
Examples of time-varying networks include communication networks where each link is relatively short or instantaneous, such as phone calls or e-mails. Information spreads over both networks, and some computer viruses spread over the second. Networks of physical proximity, encoding who encounters whom and when, can be represented as time-varying networks. Some diseases, such as airborne pathogens, spread through physical proximity. Real-world data on time resolved physical proximity networks has been used to improve epidemic modeling. Neural networks and brain networks can be represented as time-varying networks since the activation of neurons are time-correlated.
Time-varying networks are characterized by intermittent activation at the scale of individual links. This is in contrast to various models of network evolution, which may include an overall time dependence at the scale of the network as a whole.
Applicability
Time-varying networks are inherently dynamic, and used for modeling spreading processes on networks. Whether using time-varying networks will be worth the added complexity depends on the relative time scales in question. Time-varying networks are most useful in describing systems where the spreading process on a network and the network itself evolve at similar timescales.
Let the characteristic timescale for the evolution of the network be t_N, and the characteristic timescale for the evolution of the spreading process be t_P. A process on a network will fall into one of three categories:
Static approximation – where t_N ≫ t_P. The network evolves relatively slowly, so the dynamics of the process can be approximated using a static version of the network.
Time-varying network – where t_N ≈ t_P. The network and the process evolve at comparable timescales so the interplay between them becomes important.
Annealed approximation – where t_N ≪ t_P. The network evolves relatively rapidly, so the dynamics of the process can be approximated using a time averaged version of the network.
The flow of data over the internet is an example for the first case, where the network changes very little in the fraction of a second it takes for a network packet to traverse it. The spread of sexually transmitted diseases is an example of the second, where the prevalence of the disease spreads in direct correlation to the rate of evolution of the sexual contact network itself. Behavioral contagion is an example of the third case, where behaviors spread through a population over the combined network of many day-to-day social interactions.
Representations
There are three common representations for time-varying network data; a minimal code sketch of the first and third follows the list.
Contact sequences – if the duration of interactions is negligible, the network can be represented as a set of contacts (i, j, t), where i and j are the nodes in contact and t is the time of the interaction. Alternatively, it can be represented as an edge list where each edge e = (i, j) carries a set of active times T_e = {t_1, t_2, …}.
Interval graphs – if the duration of interactions is non-negligible, T_e becomes a set of intervals over which the edge is active.
Snapshots – time-varying networks can also be represented as a series of static networks, one for each time step.
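A minimal Python sketch of the contact-sequence and snapshot representations, using a made-up contact list (node names and times are hypothetical):

```python
from collections import defaultdict

# A contact sequence: (i, j, t) triples for an undirected network with
# instantaneous contacts (node names and times are made up).
contacts = [("a", "b", 1), ("b", "c", 2), ("a", "c", 2), ("c", "d", 5)]

# Edge-list form: each edge mapped to its set of active times.
edge_times = defaultdict(set)
for i, j, t in contacts:
    edge_times[tuple(sorted((i, j)))].add(t)

# Snapshot form: one static edge set per time step.
snapshots = defaultdict(set)
for i, j, t in contacts:
    snapshots[t].add(tuple(sorted((i, j))))

print(dict(edge_times))   # {('a', 'b'): {1}, ('a', 'c'): {2}, ('b', 'c'): {2}, ('c', 'd'): {5}}
print(dict(snapshots))    # {1: {('a', 'b')}, 2: {('a', 'c'), ('b', 'c')}, 5: {('c', 'd')}}
```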
Properties
The measures used to characterize static networks are not immediately transferable to time-varying networks. See Path, Connectedness, Distance, Centrality. However, these network concepts have been adapted to apply to time-varying networks.
Time respecting paths
Time respecting paths are the sequences of links that can be traversed in a time-varying network under the constraint that the next link to be traversed is activated at some point after the current one. As in a directed graph, a path from i to j does not mean there is a path from j to i. In contrast to paths in static and evolving networks, however, time respecting paths are also non-transitive. That is to say, just because there is a path from i to j and from j to k does not mean that there is a path from i to k. Furthermore, time respecting paths are themselves time-varying, and are only valid paths during a specific time interval.
Reachability
While analogous to connectedness in static networks, reachability is a time-varying property best defined for each node in the network. The set of influence of a node i is the set of all nodes that can be reached from i via time respecting paths; note that it is dependent on the start time t. The source set of a node i is the set of all nodes that can reach i via time respecting paths within a given time interval. The reachability ratio can be defined as the average, over all nodes i, of the fraction of nodes within the set of influence of i.
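A sketch of how the set of influence can be computed from a contact sequence with a single time-ordered pass; the contact list and the treatment of simultaneous contacts are illustrative simplifications:

```python
def set_of_influence(contacts, source, t_start):
    """Nodes reachable from `source` via time-respecting paths starting at `t_start`.

    `contacts` is a list of (i, j, t) triples treated as undirected, instantaneous
    contacts.  Each node is labelled with the earliest time at which it can be
    reached; a contact is usable only at or after that time.  Contacts sharing a
    timestamp are chained in processing order, a simplification of strict ordering.
    """
    earliest = {source: t_start}
    for i, j, t in sorted(contacts, key=lambda c: c[2]):   # process contacts in time order
        for u, v in ((i, j), (j, i)):
            if u in earliest and earliest[u] <= t < earliest.get(v, float("inf")):
                earliest[v] = t
    return set(earliest)

contacts = [("a", "b", 1), ("b", "c", 2), ("a", "c", 2), ("c", "d", 5)]
print(set_of_influence(contacts, "a", 0))   # {'a', 'b', 'c', 'd'}
print(set_of_influence(contacts, "d", 0))   # {'c', 'd'}: d cannot reach a or b, whose contacts occurred earlier
```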
Connectedness of an entire network is less conclusively defined, although some have been proposed. A component may be defined as strongly connected if there is a directed time respecting path connecting all nodes in the component in both directions. A component may be defined as weakly connected if there is an undirected time respecting path connecting all nodes in the component in both directions. Also, a component may be defined as transitively connected if transitivity holds for the subset of nodes in that component.
Causal fidelity
Causal fidelity quantifies the goodness of the static approximation of a temporal network. Such a static approximation is generated by aggregating the edges of a temporal network over time. The idea of causal fidelity is to compare the number of connected node pairs in the temporal network (that is, pairs joined by time respecting paths), denoted P_T, with the number of connected node pairs in the static approximation of the network, denoted P_S. The causal fidelity is then defined by
c = P_T / P_S.
Since in the temporal network only time respecting paths are considered, P_T ≤ P_S, and consequently c ≤ 1. A high causal fidelity (c ≈ 1) means that the considered temporal network is well approximated by its static (aggregated) counterpart. If c ≪ 1, then most node pairs that are reachable in the static representation are not connected by time respecting paths in the temporal network.
Latency
Also called temporal distance, latency is the time-varying equivalent of distance. In a time-varying network any time respecting path has a duration, namely the time it takes to follow that path. The duration of the fastest such path between two nodes is the latency; note that it is also dependent on the start time. The latency from node i to node j beginning at time t may be denoted λ_{i,j}(t).
Centrality measures
Measuring centrality on time-varying networks involves a straightforward replacement of distance with latency. For discussions of the centrality measures on a static network see Centrality.
Closeness centrality is large for nodes that are close to all other nodes (i.e. have small latency λ_{i,j}(t) for all j).
Betweenness centrality is large for nodes that are often a part of the smallest-latency paths between other pairs of nodes. It is defined as the ratio of the number of smallest-latency paths from j to k that pass through i to the total number of smallest-latency paths from j to k.
The time-varying nature of latency, specifically that it will become infinity for all node pairs as the time approaches the end of the network interval used, makes an alternative measure of closeness useful. Efficiency uses instead the reciprocal of the latency, so the efficiency approaches zero instead of diverging. Higher values for efficiency correspond to more central nodes in the network.
Temporal patterns
Time-varying networks allow for analysis of explicitly time dependent properties of the network. It is possible to extract recurring and persistent patterns of contact from time-varying data in many ways. This is an area of ongoing research.
Characteristic times of the system can be found by looking for distinct changes in a variable, such as the reachability ratio. For example, if one allows only a finite waiting time at all nodes in calculating latency, one can find interesting patterns in the resulting reachability ratio. For a mobile call network, the reachability ratio has been found to increase dramatically if one allows delays of at least two days, and for the airline network the same effect has been found at around 30 minutes. Moreover, the characteristic time scale of a temporal network is given by the mode of the distribution of shortest path durations. This distribution can be calculated using the reachability between all node pairs in the network.
Persistent patterns are ones that reoccur frequently in the system. They can be discovered by averaging over different time windows across the time interval of the system and looking for patterns that reoccur above a specified threshold.
Motifs are specific temporal patterns that occur more often than expected in a system. The time-varying network of Facebook wall postings, for example, has a higher frequency of chains, stars, and back-and-forth interactions than could be expected for a randomized network.
Egocentric temporal motifs can be used to exploit temporal ego-networks. Due to their first-order complexity, they can be counted in large graphs in a reasonable execution time. For example, Longa et al. show how to use egocentric temporal motifs for measuring distances among face-to-face interaction networks in different social contexts.
Detecting missing links
Dynamics
Time-varying networks allow for the analysis of an entirely new dimension of dynamic processes on networks. In cases where the time scales of evolution of the network and the process are similar, the temporal structure of time-varying networks has a dramatic impact on the spread of the process over the network.
Burstiness
The time between two consecutive events, for an individual node or link, is called the inter-event time. The distributions of inter-event times of a growing number of important, real-world, time-varying networks have been found to be bursty, meaning inter-event times are very heterogeneous – they have a heavy-tailed distribution. This translates to a pattern of activation where activity comes in bursts separated by longer stretches of inactivity.
Burstiness of inter-event times can dramatically slow spreading processes on networks, which has implications for the spread of disease, information, ideas, and computer viruses. However, burstiness can also accelerate spreading processes, and other network properties also have an effect on spreading speed. Real-world time-varying networks may thus promote spreading processes despite having a bursty inter-event time distribution.
Burstiness as an empirical quantity can be calculated for any sequence of inter-event times, {τ_i}, by comparing the sequence to one generated by a Poisson process. The ratio of the standard deviation, σ_τ, to the mean, m_τ, of a Poisson process is 1. The burstiness parameter compares this ratio to 1 via B = (σ_τ − m_τ) / (σ_τ + m_τ).
Burstiness varies from −1 to 1. B = 1 indicates a maximally bursty sequence, B = 0 indicates a Poisson distribution, and B = −1 indicates a periodic sequence.
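A minimal sketch of this calculation, assuming the form B = (σ_τ − m_τ)/(σ_τ + m_τ) given above; the inter-event sequences are made-up examples:

```python
import statistics

def burstiness(inter_event_times):
    """Burstiness B = (sigma - m) / (sigma + m) of a sequence of inter-event times."""
    m = statistics.mean(inter_event_times)
    sigma = statistics.pstdev(inter_event_times)   # population standard deviation
    return (sigma - m) / (sigma + m)

print(burstiness([5, 5, 5, 5, 5]))    # -1.0: perfectly periodic sequence
print(burstiness([1, 1, 1, 1, 20]))   # ~0.23: positive, i.e. bursty (heavy-tailed) activity
```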
See also
Complex contagion
Complex network
Epidemic model
Directed percolation
Dynamic network analysis
Exponential random graph models
Link-centric preferential attachment
Scale-free network
Percolation theory
References
Network theory | Temporal network | Mathematics | 2,202 |
8,652,895 | https://en.wikipedia.org/wiki/CCL21 | Chemokine (C-C motif) ligand 21 (CCL21) is a small cytokine belonging to the CC chemokine family. This chemokine is also known as 6Ckine (because it has six conserved cysteine residues instead of the four cysteines typical to chemokines), exodus-2, and secondary lymphoid-tissue chemokine (SLC). CCL21 elicits its effects by binding to a cell surface chemokine receptor known as CCR7. The main function of CCL21 is to guide CCR7-expressing leukocytes to the secondary lymphoid organs, such as lymph nodes and Peyer's patches.
Gene
The gene for CCL21 is located on human chromosome 9. CCL21 is classified as a homeostatic chemokine, meaning it is produced constitutively. However, its expression increases during inflammation.
Protein structure
The chemokine CCL21 contains an extended C-terminus which is not found in CCL19, another ligand of CCR7. The C-terminal tail is composed of 37 amino acids rich in positively charged residues and therefore has high affinity for the negatively charged molecules of the extracellular matrix. Cleavage of the C-terminal tail by peptidases produces a soluble form of CCL21, which also occurs under physiological conditions. The soluble form does not bind to the extracellular matrix and therefore its function differs from that of full-length CCL21.
Function
Migration to secondary lymphoid organs
Naïve T cells circulate through secondary lymphoid organs until they encounter the antigen. CCL21 is a chemokine involved in the recruitment of T cells into secondary lymphoid organs. It is produced by lymphatic endothelial cells and lymph node stromal cells. Full-length CCL21 is bound to glycosaminoglycans and endothelial cells, and it induces the chemotactic migration of T cells as well as cell adhesion caused by integrin activation. In contrast, soluble CCL21 is not involved in the induction of cell adhesion. After T cells enter the lymph nodes through high endothelial venules, they are attracted to the T cell zone, where fibroblastic reticular cells are the abundant source of CCL21.
CCL21/CCR7 interaction also plays a role in the migration of dendritic cells to the secondary lymphoid organs. Dendritic cells upregulate the expression of CCR7 during their maturation. CCL21 is bound to the lymphatic vessels and attracts CCR7 expressing dendritic cells from peripheral tissues. Then they migrate along the chemokine gradient to the lymph node where they present the antigen to T cells. Interactions between dendritic cells and T cells are necessary for the initiation of the adaptive immune response. When CCL21 is not recognized by the cells (for example in CCR7-deficient mice), a delayed and reduced adaptive immune response occurs due to reduced interactions between dendritic cells and T cells in the lymph nodes. Semi-mature dendritic cells express CCR7 in the absence of a danger signal. They use CCL21 chemokine gradient for the migration to the lymph nodes even when there is no inflammation in the body, and they contribute to peripheral tolerance.
Other cells using chemokine CCL21 for the migration to the lymph nodes are B cells. However, they are less dependent on it in comparison to T cells.
T cell development in the thymus
CCL21/CCR7 interaction plays a role in the T cell development in the thymus. CCL21 is produced in the thymus medulla by medullary thymic epithelial cells, and it attracts single positive thymocytes from the thymus cortex to the medulla, where they undergo negative selection to delete autoreactive thymocytes.
References
External links
Further reading
Cytokines | CCL21 | Chemistry | 860 |
72,387,940 | https://en.wikipedia.org/wiki/Olivia%20Sanchez%20Brown | Olivia Sanchez Brown is a multimedia Chicana artist and curator in the Los Angeles area and has been active since the early 1970s.
Art
Olivia Sanchez Brown was one of the first Chicana artists to be featured in a juried exhibition at The Woman's Building in 1973, around the time that The Woman's Building was first established. She was also included in the "Madre Tierra" exhibit in collaboration with several Chicano artists.
References
External links
American artists of Mexican descent
Living people
Year of birth missing (living people)
Artists from Los Angeles
American art curators
20th-century American women artists
21st-century American women artists
Multimedia artists
Chicana feminists | Olivia Sanchez Brown | Technology | 133 |
3,352,673 | https://en.wikipedia.org/wiki/Ghost-canceling%20reference | Ghost-canceling reference (GCR) is a special sub-signal on a television channel that receivers can use to compensate for the ghosting effect of a television signal distorted by multipath propagation between transmitter and receiver.
In the United States, the GCR signal is a chirp in frequency of the modulating signal from 0 Hz to 4.2 MHz, transmitted during the vertical blanking interval over one video line (line 19 in the U.S.), shifted in phase by 180° once per frame, with this pattern inverted every four lines. Television receivers generate their own local versions of this signal and use the comparison between the local and remote signals to tune an adaptive equalizer that removes ghost images on the screen.
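The frequency sweep can be illustrated with a short numerical sketch. The sample rate and active-line duration below are assumed values for illustration, not figures from the GCR specification:

```python
import numpy as np

fs = 20e6            # assumed sample rate, 20 MHz (comfortably above the 4.2 MHz sweep limit)
line_us = 52.6       # assumed active-line duration in microseconds
f0, f1 = 0.0, 4.2e6  # sweep of the modulating signal, 0 Hz to 4.2 MHz

t = np.arange(0, line_us * 1e-6, 1 / fs)
k = (f1 - f0) / t[-1]                                    # linear sweep rate, Hz per second
gcr = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))    # linear chirp over one video line

# A receiver would compare the received line against a local copy of this reference
# (with the 180-degree phase alternation applied) to estimate the channel and
# configure its adaptive equalizer; that step is not shown here.
print(gcr.shape)
```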
GCR was introduced after its recommendation in 1993 by the Advanced Television Systems Committee.
References
External links
Official GCR specification
Television technology | Ghost-canceling reference | Technology | 174 |
480,634 | https://en.wikipedia.org/wiki/Absorbance | Absorbance is defined as "the logarithm of the ratio of incident to transmitted radiant power through a sample (excluding the effects on cell walls)". Alternatively, for samples which scatter light, absorbance may be defined as "the negative logarithm of one minus absorptance, as measured on a uniform sample". The term is used in many technical areas to quantify the results of an experimental measurement. While the term has its origin in quantifying the absorption of light, it is often entangled with quantification of light which is "lost" to a detector system through other mechanisms. What these uses of the term tend to have in common is that they refer to a logarithm of the ratio of a quantity of light incident on a sample or material to that which is detected after the light has interacted with the sample.
The term absorption refers to the physical process of absorbing light, while absorbance does not always measure only absorption; it may measure attenuation (of transmitted radiant power) caused by absorption, as well as reflection, scattering, and other physical processes. Sometimes the term "attenuance" or "experimental absorbance" is used to emphasize that radiation is lost from the beam by processes other than absorption, with the term "internal absorbance" used to emphasize that the necessary corrections have been made to eliminate the effects of phenomena other than absorption.
History and uses of the term absorbance
Beer-Lambert law
The roots of the term absorbance are in the Beer–Lambert law. As light moves through a medium, it will become dimmer as it is being "extinguished". Bouguer recognized that this extinction (now often called attenuation) was not linear with distance traveled through the medium, but related by what we now refer to as an exponential function.
If I_0 is the intensity of the light at the beginning of its travel and I_d is the intensity of the light detected after travel of a distance d, the fraction transmitted, T, is given by
T = I_d / I_0 = exp(−μd),
where μ is called an attenuation constant (a term used in various fields where a signal is transmitted through a medium) or coefficient. The amount of light transmitted falls off exponentially with distance. Taking the natural logarithm of the above equation, we get
ln(T) = −μd.
For scattering media, the constant is often divided into two parts, separating it into a scattering coefficient μ_s and an absorption coefficient μ_a, obtaining
μ = μ_s + μ_a.
If the size of a detector is very small compared to the distance traveled by the light, any light that is scattered by a particle, either in the forward or backward direction, will not strike the detector. (Bouguer was studying astronomical phenomena, so this condition was met.) In such a case, a plot of −ln(T) as a function of wavelength will yield a superposition of the effects of absorption and scatter. Because the absorption portion is more distinct and tends to ride on a background of the scatter portion, it is often used to identify and quantify the absorbing species. Consequently, this is often referred to as absorption spectroscopy, and the plotted quantity is called "absorbance", symbolized as A_e = −ln(T). Some disciplines by convention use decadic (base 10) absorbance rather than Napierian (natural) absorbance, resulting in A_10 = −log_10(T) (with the subscript 10 usually not shown).
Absorbance for non-scattering samples
Within a homogeneous medium such as a solution, there is no scattering. For this case, researched extensively by August Beer, the concentration of the absorbing species follows the same linear contribution to absorbance as the path-length. Additionally, the contributions of individual absorbing species are additive. This is a very favorable situation, and made absorbance an absorption metric far preferable to absorption fraction (absorptance). This is the case for which the term "absorbance" was first used.
A common expression of Beer's law relates the attenuation of light in a material as A = εℓc, where A is the absorbance; ε is the molar attenuation coefficient or absorptivity of the attenuating species; ℓ is the optical path length; and c is the concentration of the attenuating species.
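A short numerical sketch of this relation; the coefficient, path length and concentration are illustrative values, not data from the article:

```python
epsilon = 5500.0   # molar attenuation coefficient, L mol^-1 cm^-1 (illustrative)
path = 1.0         # cuvette path length in cm
conc = 8.0e-5      # concentration in mol/L

A = epsilon * path * conc    # decadic absorbance, dimensionless
T = 10 ** (-A)               # fraction of light transmitted

print(A)                     # 0.44
print(T)                     # ~0.363, i.e. about 36% of the light is transmitted

# Inverting the law: given a measured absorbance, recover the concentration.
measured_A = 0.25
print(measured_A / (epsilon * path))   # ~4.5e-5 mol/L
```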
Absorbance for scattering samples
For samples which scatter light, absorbance is defined as "the negative logarithm of one minus absorptance (absorption fraction: α) as measured on a uniform sample". For decadic absorbance, this may be symbolized as A_10 = −log_10(1 − α). If a sample both transmits and remits light, and is not luminescent, the fractions of light absorbed (α), remitted (R), and transmitted (T) add to 1: α + R + T = 1. Note that 1 − α = R + T, and the formula may be written as A_10 = −log_10(R + T). For a sample which does not scatter, R = 0 and 1 − α = T, yielding the formula for absorbance of a material discussed below.
Even though this absorbance function is very useful with scattering samples, the function does not have the same desirable characteristics as it does for non-scattering samples. There is, however, a property called absorbing power which may be estimated for these samples. The absorbing power of a single unit thickness of material making up a scattering sample is the same as the absorbance of the same thickness of the material in the absence of scatter.
Optics
In optics, absorbance or decadic absorbance is the common logarithm of the ratio of incident to transmitted radiant power through a material, and spectral absorbance or spectral decadic absorbance is the common logarithm of the ratio of incident to transmitted spectral radiant power through a material. Absorbance is dimensionless, and in particular is not a length, though it is a monotonically increasing function of path length, and approaches zero as the path length approaches zero.
Mathematical definitions
Absorbance of a material
The absorbance of a material, denoted A, is given by
A = log_10(Φ_e^i / Φ_e^t) = −log_10 T,
where
Φ_e^t is the radiant flux transmitted by that material,
Φ_e^i is the radiant flux received by that material, and
T = Φ_e^t / Φ_e^i is the transmittance of that material.
Absorbance is a dimensionless quantity. Nevertheless, the absorbance unit or AU is commonly used in ultraviolet–visible spectroscopy and its high-performance liquid chromatography applications, often in derived units such as the milli-absorbance unit (mAU) or milli-absorbance unit-minutes (mAU×min), a unit of absorbance integrated over time.
Absorbance is related to optical depth by
A = τ / ln 10,
where τ is the optical depth.
Spectral absorbance
Spectral absorbance in frequency and spectral absorbance in wavelength of a material, denoted A_ν and A_λ respectively, are given by
A_ν = log_10(Φ_{e,ν}^i / Φ_{e,ν}^t) = −log_10 T_ν and
A_λ = log_10(Φ_{e,λ}^i / Φ_{e,λ}^t) = −log_10 T_λ,
where
Φ_{e,ν}^t is the spectral radiant flux in frequency transmitted by that material;
Φ_{e,ν}^i is the spectral radiant flux in frequency received by that material;
T_ν = Φ_{e,ν}^t / Φ_{e,ν}^i is the spectral transmittance in frequency of that material;
Φ_{e,λ}^t is the spectral radiant flux in wavelength transmitted by that material;
Φ_{e,λ}^i is the spectral radiant flux in wavelength received by that material; and
T_λ = Φ_{e,λ}^t / Φ_{e,λ}^i is the spectral transmittance in wavelength of that material.
Spectral absorbance is related to spectral optical depth by
A_ν = τ_ν / ln 10 and A_λ = τ_λ / ln 10,
where
τ_ν is the spectral optical depth in frequency, and
τ_λ is the spectral optical depth in wavelength.
Although absorbance is properly unitless, it is sometimes reported in "absorbance units", or AU. Many people, including scientific researchers, wrongly state the results from absorbance measurement experiments in terms of these made-up units.
Relationship with attenuation
Attenuance
Absorbance is a number that measures the attenuation of the transmitted radiant power in a material. Attenuation can be caused by the physical process of "absorption", but also reflection, scattering, and other physical processes. Absorbance of a material is approximately equal to its attenuance when both the absorbance is much less than 1 and the emittance of that material (not to be confused with radiant exitance or emissivity) is much less than the absorbance. Indeed,
Φ_e^t + Φ_e^att = Φ_e^i + Φ_e^e,
where
Φ_e^t is the radiant power transmitted by that material,
Φ_e^att is the radiant power attenuated by that material,
Φ_e^i is the radiant power received by that material, and
Φ_e^e is the radiant power emitted by that material.
This is equivalent to
T + ATT = 1 + E,
where
T = Φ_e^t / Φ_e^i is the transmittance of that material,
ATT = Φ_e^att / Φ_e^i is the attenuance of that material,
E = Φ_e^e / Φ_e^i is the emittance of that material.
According to the Beer–Lambert law, T = 10^(−A), so
and finally
Attenuation coefficient
Absorbance of a material is also related to its decadic attenuation coefficient by
A = ∫ a(z) dz, integrated from 0 to ℓ,
where
ℓ is the thickness of that material through which the light travels, and
a(z) is the decadic attenuation coefficient of that material at depth z.
If a(z) is uniform along the path, the attenuation is said to be a linear attenuation, and the relation becomes
A = aℓ.
Sometimes the relation is given using the molar attenuation coefficient of the material, that is, its attenuation coefficient divided by its molar concentration:
A = ∫ ε c(z) dz, integrated from 0 to ℓ,
where
ε is the molar attenuation coefficient of that material, and
c(z) is the molar concentration of that material at depth z.
If c(z) is uniform along the path, the relation becomes
A = εcℓ.
The use of the term "molar absorptivity" for molar attenuation coefficient is discouraged.
Measurements
Logarithmic vs. directly proportional measurements
The amount of light transmitted through a material diminishes exponentially as it travels through the material, according to the Beer–Lambert law (T = 10^(−εℓc)). Since the absorbance of a sample is measured as a logarithm, it is directly proportional to the thickness of the sample and to the concentration of the absorbing material in the sample. Some other measures related to absorption, such as transmittance, are measured as a simple ratio, so they vary exponentially with the thickness and concentration of the material.
Instrument measurement range
Any real measuring instrument has a limited range over which it can accurately measure absorbance. An instrument must be calibrated and checked against known standards if the readings are to be trusted. Many instruments will become non-linear (fail to follow the Beer–Lambert law) starting at approximately 2 AU (~1% transmission). It is also difficult to accurately measure very small absorbance values (below ) with commercially available instruments for chemical analysis. In such cases, laser-based absorption techniques can be used, since they have demonstrated detection limits that supersede those obtained by conventional non-laser-based instruments by many orders of magnitude (detection has been demonstrated all the way down to ). The theoretical best accuracy for most commercially available non-laser-based instruments is attained in the range near 1 AU. The path length or concentration should then, when possible, be adjusted to achieve readings near this range.
Method of measurement
Typically, absorbance of a dissolved substance is measured using absorption spectroscopy. This involves shining a light through a solution and recording how much light and what wavelengths were transmitted onto a detector. Using this information, the wavelengths that were absorbed can be determined. First, measurements on a "blank" are taken using just the solvent for reference purposes. This is so that the absorbance of the solvent is known, and then any change in absorbance when measuring the whole solution is made by just the solute of interest. Then measurements of the solution are taken. The transmitted spectral radiant flux that makes it through the solution sample is measured and compared to the incident spectral radiant flux. As stated above, the spectral absorbance at a given wavelength is A_λ = log_10(Φ_{e,λ}^i / Φ_{e,λ}^t).
The absorbance spectrum is plotted on a graph of absorbance vs. wavelength.
An ultraviolet–visible spectrophotometer will do all this automatically. To use this machine, solutions are placed in a small cuvette and inserted into the holder. The machine is controlled through a computer and, once it has been "blanked", automatically displays the absorbance plotted against wavelength. Getting the absorbance spectrum of a solution is useful for determining the concentration of that solution using the Beer–Lambert law and is used in HPLC.
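A sketch of the blank-and-sample calculation, assuming made-up detector readings at a few wavelengths:

```python
import numpy as np

wavelengths = np.array([400, 450, 500, 550, 600])                # nm
blank_counts = np.array([1000., 1200., 1500., 1400., 1300.])     # detector signal, solvent only
sample_counts = np.array([900., 700., 300., 800., 1200.])        # detector signal, solution

# Spectral absorbance at each wavelength: log10(incident / transmitted),
# with the blank measurement standing in for the incident flux.
absorbance = np.log10(blank_counts / sample_counts)

for wl, A in zip(wavelengths, absorbance):
    print(f"{wl} nm: A = {A:.3f}")
# The peak near 500 nm (A ~ 0.70) marks the strongest absorption in this made-up spectrum.
```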
Shade number
Some filters, notably welding glass, are rated by shade number (SN), which is 7/3 times the absorbance plus one: SN = (7/3)A + 1.
For example, if the filter has 0.1% transmittance (0.001 transmittance, which is 3 absorbance units), its shade number would be 8.
See also
Absorptance
Tunable Diode Laser Absorption Spectroscopy (TDLAS)
Densitometry
Neutral density filter
Mathematical descriptions of opacity
References
Spectroscopy
Optical filters
Logarithmic scales of measurement
Physical quantities | Absorbance | Physics,Chemistry,Mathematics | 2,466 |
63,707,472 | https://en.wikipedia.org/wiki/ATPase%20Domain%203B | ATPase Domain 3B (ATAD3B) is a protein that in humans is encoded by the ATAD3B gene. ATAD3 is part of the AAA protein family. The function of ATAD3B is not yet well understood by the scientific community. In humans the gene is located at 1p36.33.
Function
ATAD3B is associated with the mitochondria. The C terminus is anchored in the mitochondrial inter membrane space.
The protein is linked with the pluripotency of stem cells. The ATAD3A gene is targeted by c-Myc, which is one of four factors needed to create induced pluripotent stem (iPS) cells from mouse embryonic fibroblasts (MEFs).
Its expression is linked to cell cycle function and tumor growth. When ATAD3B was overexpressed, cell duplication took an extra three hours, with cells spending a longer time in G1 phase. Abnormal expression levels of ATAD3B have been linked to chemoresistance. Overexpression of ATAD3B was found to be the strongest factor in breast cancer survival rates.
Characteristics
A mutation in the stop codon means that ATAD3B has a 62 amino acid longer UTR.
References
Induced stem cells
Proteins | ATPase Domain 3B | Chemistry,Biology | 265 |
77,270,791 | https://en.wikipedia.org/wiki/Alternative%20title%20%28publishing%29 | An alternative title or alternate title in book publishing refers to a title that is presented alongside the primary title. It often uses a semi-colon or the term "or" in book titles, typically seen in the form "Title: or, Subtitle." This was a practice that started in the 17th century, and was common in both English and American literature. During this period, many books aimed to appeal to a broader audience by using more descriptive subtitles.
As an example, Mary Shelley gave her most famous novel the title Frankenstein; or, The Modern Prometheus, where or, The Modern Prometheus is the alternative title, by which she references the Greek Titan as a hint of the novel's themes. More examples are On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life and Moby-Dick; or, The Whale. This is contrasted to a subtitle, which is a portion of the title itself. The subtitle is considered to add extra explanation for the title.
This convention started to decline in the 19th century as book titles became more concise and marketing strategies evolved.
References
Book design
Publishing
Names | Alternative title (publishing) | Engineering | 243 |
36,107,188 | https://en.wikipedia.org/wiki/Chronic%20radiation%20syndrome | Chronic radiation syndrome (CRS), or chronic radiation enteritis, is a constellation of health effects of radiation that occur after months or years of chronic exposure to high amounts of radiation. Chronic radiation syndrome develops with a speed and severity proportional to the radiation dose received (i.e., it is a deterministic effect of exposure to ionizing radiation), unlike radiation-induced cancer. It is distinct from acute radiation syndrome, in that it occurs at dose rates low enough to permit natural repair mechanisms to compete with the radiation damage during the exposure period. Dose rates high enough to cause the acute form (> ~0.1 Gy/h) are fatal long before onset of the chronic form. The lower threshold for chronic radiation syndrome is between 0.7 and 1.5 Gy, at dose rates above 0.1 Gy/yr. This condition is primarily known from the Kyshtym disaster, where 66 cases were diagnosed. It has received little mention in Western literature; but see the ICRP’s 2012 Statement.
In 2013, Alexander V. Akleyev described the chronology of the clinical course of CRS while presenting at ConRad in Munich, Germany. In his presentation, he defined the latent period as being 1–5 years, and the formation coinciding with the period of maximum radiation dose. The recovery period was described as being 3–12 months after exposure ceased. He concluded that "CRS represents a systemic response of the body as a whole to the chronic total body exposure in man." In 2014, Akleyev's book "Comprehensive analysis of chronic radiation syndrome, covering epidemiology, pathogenesis, pathoanatomy, diagnosis and treatment" was published by Springer.
Symptoms of chronic radiation syndrome would include, at an early stage, impaired sense of touch and smell and disturbances of the vegetative functions. At a later stage, muscle and skin atrophy and eye cataract follow, with possible fibrous formations on the skin, in case of previous radiation burns. Solid cancer or leukemia due to genetic damage may appear at any time.
References
Radiation health effects | Chronic radiation syndrome | Chemistry,Materials_science | 435 |
77,658,557 | https://en.wikipedia.org/wiki/Implantable%20bulking%20agent | Implantable bulking agents are self-expanding solid prostheses which are implanted in the tissues around the anal canal. It is a surgical treatment for fecal incontinence and represents a newer evolution of the similar procedure which uses perianal injectable bulking agents.
History
The implantable bulking agents represent the most recent stage of development of a similar procedure which uses perianal injectable bulking agents. That procedure in turn was developed from use of injectable bulking agents in urology to treat urinary incontinence. Many different injectable materials have been used. The biggest problem with injectable materials is that they seem to have only a temporary effect. Over time, the material degrades and may migrate away from the injection site. For example, one study investigated the outcome and ultrasound appearance of 3 of the commonly used injectable bulking agents (Durasphere, PTQ, Solesta) after an average of 7 years. The researchers reported that typically about 14% of the original volume of material was still identifiable on ultrasound, and that complete disappearance of the materials on ultrasound was correlated with poorer clinical outcomes.
Implantable bulking agents use multiple cylindrical HYEXPAN (polyacrylonitrile) implants. Marketed as "Gatekeeper" by Medtronic, Minneapolis, USA, it was first used to treat gastro-esophageal reflux disease. Production of the implant system was transferred to THD S.p.A., Correggio, Italy. Gatekeeper now has a CE marking, and was registered for the treatment of fecal incontinence in 2010. The first publication describing use of these implants in FI was in 2011. Several other publications appeared and the results were initially promising.
In the original description, Gatekeeper used four self-expandable, solid, thin cylinders. Subsequently, six of the prostheses were used. An advancement of the procedure was described in 2016, marketed as "SphinKeeper". SphinKeeper uses 10 prostheses which are slightly thicker and longer compared to those used in the Gatekeeper implant system. One publication stated that new generation SphinKeeper implant system has replaced use of Gatekeeper, and that the use of 10 prostheses represents change in the paradigm of injectable and implantable bulking agents. SphinKeeper is a permanent implantable device rather than a bulking agent, and aims to create a kind of artificial neosphincter. Previous techniques aimed to simply augment the internal anal sphincter. The first systematic review on implantable bulking agents was published in 2022. However, no randomized placebo controlled trials have been published yet.
Material
The implants are made of polyacrylonitrile (HYEXPAN). This material is inert, non-allergenic, non-immunogenic, nondegradable and noncarcinogenic. The material is hydrophilic, which allows the implants to slowly absorb water and change in dimensions, which occurs within 48 hours once they are implanted in the tissues.
This polyacrylonitrile material is thought to meet the criteria for the "ideal bulking agent", and therefore may overcome the disadvantages of other bulking agents.
Indications
Implantable bulking agents are indicated for passive fecal incontinence, caused by interior anal sphincter dysfunction or damage. The onset of incontinence should be at least 6 months ago. It has been recommended that this procedure should be attempted only if non-surgical options have failed (such as pharmacologic, behavioral, pelvic floor rehabilitation), and also if injectable bulking agents were unsuccessful. Researchers are investigating the use of GK and SK in patients with a wider range of causes of fecal incontinence.
Contraindications have been suggested by different authors, and include:
Active perianal sepsis (infection)
Inflammatory bowel diseases which involve the anorectal region (Crohn’s disease, ulcerative colitis)
Active treatment of anal, rectal or colon cancer
Rectal bleeding of unknown cause
Rectal prolapse
Uncontrolled blood coagulation disorders
Radiotherapy involving the pelvis
Immunosuppression
Pregnant patient or patient planning pregnancy in the next 12 months.
Lesion involving >60° of the internal anal sphincter and/or >90° of the external anal sphincter, as shown on ultrasound.
Severe anal scarring.
Diabetes mellitus, pudendal neuropathy, and previous implantation of sacral nerve stimulation device are not contraindications to the use of implantable bulking agents.
Procedure
The procedure is carried out under local or regional anesthesia (spinal anaesthesia), with or without sedation, or under general anesthesia. It takes 30-40 minutes and may be done as a day case on an outpatient basis. Intravenous antibiotics may be given at the start of the procedure. The patient is usually put into the lithotomy position.
The surgeon identifies the interior anal sphincter and the intersphincteric groove while using an anal retractor such as the Eisenhammer retractor. 2 cm away from the anal verge a 2 mm incision is made in the perianal skin. This location may minimize the possibility of contamination of the wounds during bowel movements. The prostheses are implanted using a custom "gun" which consists of a delivery system and a dispenser which holds one prosthesis at a time. This device is specific to the type of implant being used: Gatekeeper or SphinKeeper. The needle (cannula / sheath) is inserted into the incision and pushed into the intersphincteric space through a short subcutaneous tunnel. It is thought that the path of this created "tunnel" should not be a straight line, in order to prevent extrusion of the prosthesis along the insertion track. The needle is advanced to a depth just beyond the level of the dentate line. This will correspond to the upper part of the anal canal, at the level of the puborectalis muscle. The exact position of the needle tip is confirmed by direct vision or with the guidance of endoanal ultrasound. The procedure is described as relatively simple to perform from a technical perspective, and one author stated that ultrasound guidance during placement is not necessary if the surgeon is experienced. Firing the "gun" causes the cannula to retract completely into the delivery system, leaving the prosthesis in the target location. The prosthesis is then placed into the intersphincteric space.
It is thought that placement of the implants in the intersphincteric space pushes the external anal sphincter outwards and the internal anal sphincter inwards. This may increase the length of the sarcomeres, which theoretically increases the contractility of the muscle. In terms of physiological measurements, the resting anal pressure and the length of the high pressure zone in the anal canal may be improved. In other words, the bulking effect may improve the seal of the anal canal and the length of the anal canal.
The surgical procedure is almost identical for Gatekeeper and SphinKeeper. The difference between Gatekeeper and SphinKeeper is in the size and number of the individual prostheses. Gatekeeper uses 4-6 prostheses. SphinKeeper uses up to 10 prostheses.
Within 48 hours of implantation, the implant material absorbs water from the tissues because of its hydrophilic properties. Each prosthesis becomes thicker and shorter in shape. This rapid increase in volume allows the prostheses to self-fix in position and prevents displacement and migration (in most cases). The prostheses also become softer in consistency and compliant to external pressures, but are still able to maintain their original shape.
In the dehydrated state, Gatekeeper prostheses are thin cylinders, 2 mm in diameter and 22 mm long. After implantation, they become 6.5 mm in diameter and 17 mm long. Their volume increases by 750% from 70 mm³ to 500 mm³.
SphinKeeper prostheses are slightly thicker and longer cylinders. In the dehydrated state, SphinKeeper prostheses are 3 mm in diameter and 29 mm long. After implantation, they become 7 mm in diameter and 23 mm long. SphinKeeper implants are long enough to restore the normal length of the anal canal. They are also wide enough to make sure there is good filling ability. Therefore, SphinKeeper allows for surgical correction of larger defects of the internal anal sphincter or external anal sphincter.
The process is repeated for each individual prosthesis, placing them into incisions made at the intersphincteric groove at equidistant intervals depending upon the total number of prostheses to be deployed. For example, if 4 prostheses are to be used, incisions may be placed at 12, 3, 6, and 9 o’clock. If 6 prostheses are to be used, incisions may be placed at 1, 3, 5, 7, 9, and 11 o’clock. The exact number of prostheses used is arbitrary, but placing 10 prostheses enables the creation of a circumferential ring of prostheses around the anal canal in the intersphincteric space. This effectively creates a situation similar to an artificial anal sphincter. One publication reported improved outcome with Gatekeeper when more prostheses were used.
Interestingly, it is thought that the exact spacing of the prostheses does not influence the outcome of the procedure, and the important factor appears to be that the prostheses are distributed equally around the anal canal. This is the case even in the presence of tears of the external anal sphincter or internal anal sphincter. Therefore, the implants are placed in the above locations for convenience of the surgeon, even for patients with a tear in a specific part of the sphincter.
The incisions are closed with resorbable sutures. Endoanal ultrasound may be used to confirm the location of each prosthesis. A course of oral antibiotics (e.g. metronidazole) may be given after the procedure. Oral laxatives (e.g. lactulose) may be given, which prevents straining and constipation. Anal trauma (e.g. receptive anal intercourse) should be avoided for at least 72 hours after the procedure. Patients are usually advised to rest in bed, with the aim of reducing the risk of dislocation of the prostheses.
After healing, the implants continue to be palpable and are visible on endoanal ultrasound. Each prosthesis appears as hyperechoic dot with a hypoechoic shadow behind it. Three dimensional endoanal ultrasound has also been used to visualize the implants, wherein the prostheses appear as a continuous hyperechoic line.
Complications
Compared to other surgical treatment options for fecal incontinence, implantable bulking agents appear to be safe. Therefore, it is also suitable for elderly or frail patients. However, complications are sometimes reported. For example, acute sepsis (infection) at the implantation site has been rarely recorded.
The most important complication is displacement of a prosthesis (also referred to as migration / dislocation / dislodgement / extrusion). The rate of displacement of at least 1 of the prostheses has been reported to be as low as 0%, or as high as 91% of cases. The patient may report pain, swelling and/or no improvement in symptoms when there is prosthesis displacement. This is potentially noteworthy, since improved symptoms after use of injectable bulking agents have been attributed at least in part to the placebo effect.
Placement in the intersphincteric space is thought to be less liable to extrusion or migration of the prostheses or other complications such as erosion, ulceration or fistula formation in the anal canal. If the implants were in the submucosal layer, they would be more vulnerable to such complications. Furthermore, displacement is less likely because of the rapid increase in size of the prostheses, allowing them self-fix in position in most cases.
A systematic review found that in total, migration / dislodgement / dislocation was reported in 41 out of 154 patients (26.6%) across 7 studies. The same researchers reported that some kind of adverse event occurred in 48 out of 166 patients (28.9%). Sometimes, a prosthesis had to be removed, however it is possible to implant a new one in the correct position.
Effectiveness
The first systematic review on GK and SK was published in 2022. It combined results from 8 studies published before 2020 – a total of 166 patients. All studies were judged to be at moderate to high risk of bias. The reviewers reported that severity of FI improved in 5 out of 7 of the studies which used the Cleveland Clinic FI Score and in 3 out of 5 of the studies which used the Vaizey score. Quality of life improved in 2 studies which measured that outcome. They concluded that GK and SK may be effective, safe and minimally invasive options for fecal incontinence in those cases where non surgical treatments have failed. The reviewers called for controlled trials to be conducted.
References
External links
Colorectal surgery
Management of fecal incontinence
Gastroenterology
Digestive system surgery
Symptoms and signs: Digestive system and abdomen
Incontinence | Implantable bulking agent | Biology | 2,858 |
3,241,211 | https://en.wikipedia.org/wiki/Wieland%E2%80%93Miescher%20ketone | The Wieland–Miescher ketone is a racemic bicyclic diketone (enedione) and is a versatile synthon which has so far been employed in the total synthesis of more than 50 natural products, predominantly sesquiterpenoids, diterpenes and steroids possessing possible biological properties including anticancer, antimicrobial, antiviral, antineurodegenerative and immunomodulatory activities. The reagent is named after two chemists from Ciba Geigy, Karl Miescher and Peter Wieland (not to be confused with Heinrich Otto Wieland). Examples of syntheses performed using the optically active enantiomer of this diketone as a starting material are that of ancistrofuran and the Danishefsky total synthesis of Taxol.
Most advances in total synthesis methods starting from Wieland–Miescher ketone were fueled by the search for alternative methods for the industrial synthesis of contraceptive and other medicinally relevant steroids, an area of research that flourished in the 1960s and 1970s. Wieland–Miescher ketone contains the AB-ring structure of steroids and is for this reason an attractive starting material for the steroid skeleton, an approach used in one synthesis of adrenosterone.
The original Wieland–Miescher ketone is racemic and prepared in a Robinson annulation of 2-methyl-1,3-cyclohexanedione and methyl vinyl ketone. The intermediate alcohol is not isolated. An enantioselective synthesis employs L-proline as an organocatalyst:
This reaction was reported in 1971 by Z. G. Hajos and D. R. Parrish. In their patent, the isolation and characterization of the above pictured optically active intermediate bicyclic ketol (in parentheses) has also been described, because they worked at ambient temperature in anhydrous dimethylformamide (DMF) solvent. Working in DMSO solvent does not allow isolation of the bicyclic ketol intermediate, it leads directly to the optically active bicyclic dione. The reaction is called the Hajos-Parrish reaction or the Hajos-Parrish-Eder-Sauer-Wiechert reaction.
This reaction has also been performed in a one-pot procedure, leading to 49% yield and 76% enantiomeric excess (ee):
Other proline-based catalysts have been investigated.
References
Ketones
Bicyclic compounds | Wieland–Miescher ketone | Chemistry | 534 |
14,440,221 | https://en.wikipedia.org/wiki/Neuropeptide%20FF%20receptor%201 | Neuropeptide FF receptor 1, also known as NPFF1 is a human protein, encoded by the NPFFR1 gene.
See also
Neuropeptide FF receptor
References
Further reading
External links
G protein-coupled receptors | Neuropeptide FF receptor 1 | Chemistry | 50 |
78,103,499 | https://en.wikipedia.org/wiki/Tom%20Leinster | Thomas "Tom" Stephen Hampden Leinster (born 1971) is a British mathematician, known for his work on category theory.
Education and career
Leinster graduated in 2000 with a Ph.D. from the University of Cambridge. His Ph.D. thesis Operads in Higher-Dimensional Category Theory was supervised by Martin Hyland. After teaching at the University of Glasgow, Leinster moved to the University of Edinburgh, where he is now a professor. He has published textbooks on category theory and on higher categories and operads. In the 2010s, he was mainly concerned with a generalization of the Euler characteristic in category theory, the magnitude. He also considered such generalizations in metric spaces, with applications in biology (measurement of biodiversity).
Award and honour
Leinster groups (i.e., finite groups whose order is equal to the sum of the orders of their normal subgroups) are named in his honour. He received the 2019 Chauvenet Prize for Rethinking Set Theory (based upon an axiomatization published in 1964 by F. William Lawvere). He is a frequent author and moderator for the academic group blog n-Category Café, where topics from mathematics, science and philosophy are discussed, often from the perspective of category theory. International media attention resulted from a 2014 article by Leinster in the New Scientist. Leinster's article called, on the basis of ethics, for mathematicians to refuse to work for intelligence agencies. In German-speaking countries, this was reported by, among others, Der Spiegel and Zeit Online.
Selected publications
References
External links
Homepage, University of Edinburgh (with many links to publications, talks, & notes)
Tom Leinster in the database zbMATH
1971 births
Living people
20th-century British mathematicians
21st-century British mathematicians
British bloggers
Category theorists
Alumni of the University of Cambridge
Academics of the University of Edinburgh | Tom Leinster | Mathematics | 375 |
17,015,288 | https://en.wikipedia.org/wiki/Central%20African%20red%20colobus | Central African red colobus is the traditional name for several species of red colobus monkey that had formerly been considered a single species, Piliocolobus foai. Central African red colobus monkeys are found in humid forests in the Democratic Republic of the Congo, the Republic of the Congo, the Central African Republic and South Sudan.
Species that have at times been included within the Central African red colobus include:
Foa's red colobus (Piliocolobus foai)
Lang's red colobus (Piliocolobus langi)
Oustalet's red colobus (Piliocolobus oustaleti)
Lomami red colobus (Piliocolobus parmentieri)
Tana River red colobus (Piliocolobus rufomitratus)
Semliki red colobus (Piliocolobus semlikiensis)
Ugandan red colobus (Piliocolobus tephrosceles)
Central African red colobus monkeys are endemic to tropical central Africa, including the Republic of Congo, the southern part of the Central African Republic, the Democratic Republic of the Congo and the southern part of South Sudan. The southern limit of the range is the Congo River, the eastern limit is the Aruwimi River and the northern limit is the savannah woodlands north of the Uele River. They are found in lowland primary forest, particularly swampy areas, open woodland and gallery forest. They spend about seventy percent of their time in the canopy, and the rest of the time on the ground. They sometimes wade into water to collect aquatic bulbs.
References
Central African red colobus
Fauna of Central Africa
Mammals of the Republic of the Congo
Mammals of the Democratic Republic of the Congo
Mammals of the Central African Republic
Mammals of South Sudan
Paraphyletic groups | Central African red colobus | Biology | 386 |
57,834,314 | https://en.wikipedia.org/wiki/Companion%20robot | A companion robot is a robot created to provide real or apparent companionship for human beings. Target markets for companion robots include the elderly and single children. Companion robots are expected to communicate with non-experts in a natural and intuitive way. They offer a variety of functions, such as monitoring the home remotely, communicating with people, or waking people up in the morning. Their aim is to perform a wide array of tasks including educational functions, home security, diary duties, entertainment and message delivery services, etc.
The idea of companionship with robots already existed in the science fiction of the 1970s, exemplified by R2-D2. Starting in the late 20th century, companion robots began to appear in reality, mostly as robotic pets. Beyond entertainment, interactive robots were also introduced as personal service robots for elderly care around 2000.
Artificial intelligence allows these robots to understand and respond to human emotions. Through machine learning, they can analyze speech, tone, and facial expressions to detect whether a user is happy, stressed, or feeling down.
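As a purely illustrative sketch (not based on any particular commercial robot), the snippet below shows how sensed signals might be mapped to a coarse mood label and a canned response. The feature names, thresholds, and responses are invented for illustration; real systems rely on trained machine-learning models over speech, tone, and facial-expression data rather than hand-written rules.

# Hypothetical sketch: map simple sensed features to a mood label and response.
# All feature names, thresholds, and responses below are invented examples.

def estimate_mood(speech_rate_wpm, voice_pitch_var, smile_score):
    """Return a coarse mood label from hand-picked, illustrative heuristics."""
    if smile_score > 0.6 and voice_pitch_var > 0.4:
        return "happy"
    if speech_rate_wpm > 170 and voice_pitch_var > 0.7:
        return "stressed"
    if smile_score < 0.2 and speech_rate_wpm < 100:
        return "down"
    return "neutral"

RESPONSES = {
    "happy": "play an upbeat sound and wag the tail",
    "stressed": "dim the lights and suggest a breathing exercise",
    "down": "move closer and offer to call a family member",
    "neutral": "continue idle behaviour",
}

if __name__ == "__main__":
    mood = estimate_mood(speech_rate_wpm=90, voice_pitch_var=0.1, smile_score=0.1)
    print(mood, "->", RESPONSES[mood])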
Characteristics
Companion robots interact with their users, gather information from those interactions, and respond with feedback. This procedure varies slightly with their specific roles: for example, social companion robots hold simple conversations, while pet companion robots mimic real pets.
Types
Companion robots can perform a variety of tasks and they are produced in a specialized manner according to their purpose or target audience in order to increase convenience and end user satisfaction.
Social companion robots
Social companion robots are designed to provide companionship and be a solution for unwanted solitude. They often mimic adult human, child or pet behaviours appealing to the user base. Robots which are specifically devised for simple conversations, conveying emotions and respond to user feelings fall under this category.
Assistive companion robots
Assistive companion robots are aimed at people who require constant care because of age, disability or rehabilitation purposes. Such robots can help disadvantaged users with their daily tasks, act as reminders (e.g., for regular medication) and facilitate mobility in everyday actions. Assistive companion robots reduce the intensity of labour that should be performed by caretakers, nurses and legal guardians.
Educational companion robots
Educational companion robots perform tutorship for students, regardless of their ages, and can teach desired subjects with activities tailored for the user such as interactive assignments and games. Rather than replacing teachers and instructors, educational companion robots are aides to them.
Therapeutic companion robots
Designed for individuals coping with stress (PTSD in severe cases), anxiety and loneliness; therapeutic companion robots support users' emotional and mental wellbeing. Such robots can be utilized in hospitals and care facilities as well as dwellings where the distressed user may need the most help. Therapeutic companion robots bear a vast resemblance to assistive companion robots to the extent of being a branch of them; the nuance between these two types of companion robots is that the former is for long-term/lifetime usage while the latter is mostly for the duration of the therapy received by the user.
Pet companion robots
Pet companion robots are for individuals who seek an alternative to live pets as live animals demand a considerable amount of care and may not be eligible for people with allergies. These robots aim to be perfect imitations of a pet while diminishing the chore aspect of having one.
Entertainment companion robots
Entertainment companion robots are designed solely for entertainment and can provide numerous ways of entertainment, ranging from dancing to playing games with the user. People who would appreciate an individual to have fun with are the main audience of such products.
Personal assistant robots
Personal assistant robots help people with daily tasks, management, scheduling, reminding etc. Their area of activity can be offices as well as homes and public spaces.
Examples
There are several companion robot prototypes, and these include Paro, CompanionAble, and EmotiRob, among others.
Paro
Paro is a pet-type robot system developed by Japan's National Institute of Advanced Industrial Science and Technology (AIST). The robot, which looked like a small harp seal, was designed as a therapeutic tool for use in hospitals and nursing homes. The robot is programmed to cry for attention and respond to its name. Experiments showed that Paro facilitated elderly residents to communicate with each other, which led to psychological improvements.
CompanionAble
This robot is classified as an FP 7 EU project. It is built to "cooperate with Ambient Assistive Living environment". The autonomous device, which is also built to support the elderly, helps its owner interact with a smart home environment as well as with caregivers. The robot functions as a mobile companion, with which natural interaction is possible via speech and the touchscreen, and which can detect and track people at home.
EmotiRob
EmotiRob was developed in a robotics project that continues the MAPH (Active Media For the Handicap) project on emotion synthesis. The aim of the project was to maintain emotional interaction with children. EmotiRob is designed so that a child can hold it in their arms and interact with it by talking to it, after which the robot expresses itself through body postures or facial expressions. Its cognitive capabilities have been extended so that the robot can hold a natural linguistic interaction with its owner through the Dragon speech-recognition software developed by Nuance. Such interaction is expected to facilitate a child's cognitive development and foster new learning patterns.
LOVOT
Lovot is a Japanese companion robot whose only purpose is "to make you happy". It features over 50 sensors that let it mimic the behavior of a human baby or small pet, a 360° camera with a microphone, the ability to distinguish humans from objects, neotenous eyes, and an internal warmth of 30 °C. An interactive Lovot Café opened in Japan on October 3, 2020.
NICOBO
Nicobo was developed by Panasonic and was inspired by the loneliness of the lockdowns imposed during the COVID-19 pandemic. It was designed to appear vulnerable, which creates empathy in its owners. Nicobo's name derives from the Japanese word for "smile". It wags its tail, engages in baby talk, and stays as a housemate.
Hyodol
Hyodol is an advanced care robot designed to support the elderly by reminding them to take their medications and monitoring their movements to keep their guardians informed. Additionally, this innovative robot can detect and respond to the emotional states of its elderly users, adding a layer of personalized care. Hyodol is designed with the appearance and speech style of a 7-year-old Korean grandchild, featuring a soft fabric exterior and user interaction methods such as striking the head or patting the back. It is equipped with various sensors and wireless communication technologies to collect and process data, supporting mobile apps and PC web monitoring systems for remote monitoring from anywhere.
In South Korea, approximately 10,000 Hyodol robots are deployed to the homes of elderly individuals living alone, providing essential support and companionship. Local governments, including provincial and county offices, have embraced Hyodol as a solution to address social challenges stemming from the country's rapidly aging society. Furthermore, the robot is widely utilized in the treatment of dementia patients at a university hospital in Gangwon province.
Hyodol was honored with the Mobile World Congress (MWC) Global Mobile Awards (GLOMO) in the "Best Mobile Innovation for Connected Health and Wellbeing" category on February 29th, 2024.
Criticisms and concerns
The advent of companion robots has faced public criticism and concern, particularly regarding the ethical dilemmas of dependency on these devices. There are fears that such dependency could threaten the crucial child-caregiver bond. Concerns also extend to the potential impact of robot-assisted childcare on child development, with specific worries about the adverse effects on emotional and social skills due to reduced human interaction.
Privacy and security issues have gained prominence as well, with companion robots like iPal capable of collecting and transmitting data, thus raising the risks of surveillance and data misuse. This situation underscores the need for robust cybersecurity measures to prevent hacking and unauthorized data access.
Besides concerns in child care, the integration of companion robots into the lives of the elderly and other vulnerable groups also raises concerns, specifically a problem of hallucinatory relationships. Individuals may begin to perceive these robots as sentient beings, leading to emotional attachments that blur the line between simulation and reality and dependency on robots. This could in turn worsen feelings of loneliness and detachment from humans. This highlights the necessity of implementing ethical guidelines and psychological support systems to navigate the complexities introduced by this technology, ensuring they serve as enhancements rather than replacements for human relationships.
See also
Virtual pet
List of robotic dogs
Furby
Socially assistive robot
References
Robotics
Interpersonal relationships | Companion robot | Engineering,Biology | 1,796 |
16,052,374 | https://en.wikipedia.org/wiki/Coturnism | Coturnism is an illness featuring muscle tenderness and rhabdomyolysis (muscle cell breakdown) after consuming quail (usually common quail, Coturnix coturnix, from which the name derives) that have fed on poisonous plants.
Causes
From case histories it is known that the toxin is stable, as four-month-old pickled quail have been poisonous. Humans vary in their susceptibility; only one in four people who consumed quail soup containing the toxin fell ill. It is apparently fat-soluble, as potatoes fried in quail fat are also poisonous.
Coniine from hemlock consumed by quail has been suggested as the cause of coturnism, though quail resist eating hemlock. Hellebore has also been suggested as the source of the toxin. It has also been asserted that this evidence points to the seeds of the annual woundwort (Stachys annua) being the causal agent. It has been suggested that Galeopsis ladanum seeds are not responsible.
Epidemiology
Migration routes and season may affect quail risk. Quail are never poisonous outside the migration season nor are the vast majority poisonous while migrating. European common quail migrate along three different flyways, each with different poisoning characteristics, at least in 20th-century records. The western flyway across Algeria to France is associated with poisonings only on the spring migration and not on the autumn return. The eastern flyway, which funnels down the Nile Valley, is the reverse. Poisonings were only reported in the autumn migration before the quail had crossed the Mediterranean. The central flyway across Italy had no associated poisonings.
History
The condition was certainly known by the 4th century BC to the ancient Greek (and subsequently Roman) naturalists, physicians, and theologians. The Bible (Numbers 11:31-34) mentions an incident where the Israelites became ill after having consumed large amounts of quail in Sinai. Philo gives a more detailed version of the same Biblical story (The Special Laws: 4: 120–131). Early writers used quail as the standard example of an animal that could eat something poisonous to man without ill effects for themselves. Aristotle (On Plants 820:6-7), Philo (Geoponics: 14: 24), Lucretius (On the Nature of Things: 4: 639–640), Galen (De Temperamentis: 3:4) and Sextus Empiricus (Outlines of Pyrrhonism: 1: 57) all make this point.
Central to these ancient accounts is the idea that quail became toxic to humans after consuming seeds from hellebore or henbane (Hyoscyamus niger). However Sextus Empiricus suggested that quail ate hemlock (Conium maculatum), an idea revived in the 20th century. Confirmation that the ancients understood the problem comes from a 10th-century text, Geoponica, based on ancient sources. This states, "Quails may graze hellebore putting those who afterwards eat them at risk of convulsions and vertigo....".
References
External links
Toxic effect of noxious substances eaten as food
Quails
Bird problems with humans
Bird feeding
Plant toxins | Coturnism | Chemistry | 668 |
2,819,660 | https://en.wikipedia.org/wiki/Downstream%20processing | Downstream processing refers to the recovery and the purification of biosynthetic products, particularly pharmaceuticals, from natural sources such as animal tissue, plant tissue or fermentation broth, including the recycling of salvageable components as well as the proper treatment and disposal of waste. It is an essential step in the manufacture of pharmaceuticals such as antibiotics, hormones (e.g. insulin and human growth hormone), antibodies (e.g. infliximab and abciximab) and vaccines; antibodies and enzymes used in diagnostics; industrial enzymes; and natural fragrance and flavor compounds. Downstream processing is usually considered a specialized field in biochemical engineering, which is itself a specialization within chemical engineering. Many of the key technologies were developed by chemists and biologists for laboratory-scale separation of biological and synthetic products, whilst the role of biochemical and chemical engineers is to develop the technologies towards larger production capacities.
Downstream processing and analytical bioseparation both refer to the separation or purification of biological products, but at different scales of operation and for different purposes. Downstream processing implies manufacture of a purified product fit for a specific use, generally in marketable quantities, while analytical bioseparation refers to purification for the sole purpose of measuring a component or components of a mixture, and may deal with sample sizes as small as a single cell.
Stages
A widely recognized heuristic for categorizing downstream processing operations divides them into four groups which are applied in order to bring a product from its natural state as a component of a tissue, cell or fermentation broth through progressive improvements in purity and concentration.
Removal of insolubles is the first step and involves the capture of the product as a solute in a particulate-free liquid, for example the separation of cells, cell debris or other particulate matter from fermentation broth containing an antibiotic. Typical operations to achieve this are filtration, centrifugation, sedimentation, precipitation, flocculation, electro-precipitation, and gravity settling. Additional operations such as grinding, homogenization, or leaching, required to recover products from solid sources such as plant and animal tissues, are usually included in this group.
Product isolation is the removal of those components whose properties vary considerably from that of the desired product. For most products, water is the chief impurity and isolation steps are designed to remove most of it, reducing the volume of material to be handled and concentrating the product. Solvent extraction, adsorption, ultrafiltration, and precipitation are some of the unit operations involved.
Product purification is done to separate those contaminants that resemble the product very closely in physical and chemical properties. Consequently, steps in this stage are expensive to carry out and require sensitive and sophisticated equipment. This stage contributes a significant fraction of the entire downstream processing expenditure. Examples of operations include affinity, size exclusion, reversed phase chromatography, ion-exchange chromatography, crystallization and fractional precipitation.
Product polishing describes the final processing steps which end with packaging of the product in a form that is stable, easily transportable and convenient. Crystallization, desiccation, lyophilization and spray drying are typical unit operations. Depending on the product and its intended use, polishing may also include operations to sterilize the product and remove or deactivate trace contaminants which might compromise product safety. Such operations might include the removal of viruses or depyrogenation.
A few product recovery methods may be considered to combine two or more stages. For example, expanded bed adsorption (Vennapusa et al. 2008) accomplishes removal of insolubles and product isolation in a single step. Affinity chromatography often isolates and purifies in a single step.
See also
Fermentation (biochemistry)
Separation process
Unit operation
Validation (drug manufacture)
Biorefinery
References
Chemical processes
Drug manufacturing | Downstream processing | Chemistry | 799 |
35,910,856 | https://en.wikipedia.org/wiki/OUP-16 | OUP-16 is a histamine agonist selective for the H4 subtype.
References
Guanidines
Histamine agonists
Nitriles
Tetrahydrofurans
Imidazoles | OUP-16 | Chemistry | 45 |
4,260,539 | https://en.wikipedia.org/wiki/Labtec | Labtec Enterprises Inc. was an American manufacturer of computer accessories active as an independent company from 1980 to 2001. They were best known for their budget range of peripherals such as keyboards, mice, microphones, speakers and webcams. In the United States, the company had cornered the market for computer speakers and headphones for much of the 1990s before being acquired by Logitech in 2001.
History
Labtec Enterprises Inc. was founded in 1980 by Charles Dunn and based in Vancouver, Washington, for most of its independent existence. The company was initially focused on providing audio gear (primarily headsets) for the airline industry before branching out to providing peripherals for personal computers in 1990. By the mid-1990s Labtec catered to three segments: the personal computer buyer, providing speakers and microphones; the airline industry, providing headphones and headsets; and the professional audiovisual and telephonics industry, providing audio cables, switches, and junction boxes. The company employed 20 people domestically at the company's combined headquarters and warehouse in Vancouver, Washington, in 1993. Meanwhile, the bulk of the company's products were manufactured overseas in Hong Kong and Taiwan. In 1993, the company was selling about 150,000 speakers to consumers a month.
In 1998, Labtec merged with Spacetec IMC Corporation, becoming a new publicly traded corporation in the process. The combined company changed its name to Labtec Inc. in February 1999. Spacetec IMC had manufactured 6DOF controllers for use with CAD software. A Spaceball 2003 controller was used to control the Mars Pathfinder spacecraft in 2000.
In 2001, Logitech bought Labtec for approximately USD$125 million in cash, stock and debt in order to expand its line of audio products for personal computers and other devices.
References
External links
Telecommunications companies of the United States
Telecommunications equipment vendors
Videotelephony
Companies based in Vancouver, Washington
Telecommunications companies established in 1981
Technology companies disestablished in 2001
Logitech
Defunct computer companies of the United States
Defunct computer hardware companies | Labtec | Technology | 413 |
1,112,620 | https://en.wikipedia.org/wiki/Chloric%20acid | Chloric acid, HClO3, is an oxoacid of chlorine, and the formal precursor of chlorate salts. It is a strong acid (pKa ≈ −2.7) and an oxidizing agent.
Properties
Chloric acid is thermodynamically unstable with respect to disproportionation.
Chloric acid is stable in cold aqueous solution up to a concentration of approximately 30%, and solutions of up to 40% can be prepared by careful evaporation under reduced pressure. Above these concentrations, chloric acid solutions decompose to give a variety of products, for example:
8 HClO3 → 4 HClO4 + 2 H2O + 2 Cl2 + 3 O2
3 HClO3 → HClO4 + H2O + 2 ClO2
Hazards
Chloric acid is a powerful oxidizing agent. Most organics and flammables will deflagrate on contact.
Production
It may be produced from barium chlorate through its reaction with sulfuric acid, which results in a solution of chloric acid and an insoluble barium sulfate precipitate:
Ba(ClO3)2 + H2SO4 → 2 HClO3 + BaSO4
The chlorate must be dissolved in boiling water and the acid should be somewhat diluted in water and heated before mixing.
Another method, which can be used to produce solutions of up to 10% concentration, is the use of cation exchange resins with a soluble chlorate salt such as sodium chlorate (NaClO3), whose Na+ cation will exchange with H+.
Another method is the heating of hypochlorous acid, producing chloric acid and hydrogen chloride:
3 HClO → HClO3 + 2 HCl
However it is produced, the acid may be concentrated up to about 40% in a vacuum desiccator over a suitable desiccant.
See also
Chlorate
Hypochlorous acid
Chlorous acid
Perchloric acid
Oxidizing acid
Dichlorine pentoxide
References
Additional Information
Chlorates
Halogen oxoacids
Mineral acids
Oxidizing acids | Chloric acid | Chemistry | 369 |
22,215,967 | https://en.wikipedia.org/wiki/Norpropoxyphene | Norpropoxyphene is a major metabolite of the opioid analgesic drug dextropropoxyphene, and is responsible for many of the side effects associated with use of this drug, especially the unusual toxicity seen during dextropropoxyphene overdose. It has weaker analgesic effects than dextropropoxyphene itself, but is a relatively potent pro-convulsant and blocker of sodium and potassium channels, particularly in heart tissue, which produces prolonged intracardiac conduction time and can lead to heart failure following even relatively minor overdoses. The toxicity of this metabolite makes dextropropoxyphene up to 10 times more likely to cause death following overdose compared to other similar mild opioid analgesics, and has led to dextropropoxyphene being withdrawn from the market in some countries.
Because norpropoxyphene has a long half-life in the body of up to 36 hours (compared to around 6–12 hours for dextropropoxyphene), it can accumulate in tissues during chronic use of dextropropoxyphene-containing medications, especially in people whose excretion of drugs is slower than normal such as young children, the elderly, and individuals with reduced kidney or liver function, and so side effects including serious adverse events are more common in these groups and use of dextropropoxyphene should be avoided where possible.
References
Synthetic opioids
Opioid metabolites
Propionate esters
Amines
Potassium channel blockers
Sodium channel blockers
HERG blocker | Norpropoxyphene | Chemistry | 333 |
16,559,873 | https://en.wikipedia.org/wiki/School%20of%20Mathematics%20and%20Naval%20Construction | The Central School of Mathematics and Naval Construction was a short-lived shipbuilding college at Portsmouth Dockyard on the south coast of England. It was founded in 1848 but only lasted five years, until 1853. The first Principal was Joseph Woolley, who in 1864 would found the Royal School of Naval Architecture and Marine Engineering in South Kensington that became part of the Royal Naval College, Greenwich in 1873.
Building
The school was sited in the dockyard at Portsea, Portsmouth in the building formerly used by the School of Naval Architecture (1816–1832), facing the Commissioner’s house and Old Naval Academy. It is long by wide and high, to a design by Edward/Edmund Hall. Construction began in 1815 and was completed in 1817. The building has since seen use as a residence, Port Admirals Office, Tactical School, War College, NATO and Naval HQ and C in C Western Fleet Offices.
Education
The School of Mathematics and Naval Construction was intended as a finishing school for a select number of shipwright apprentices, to prepare them as officers in the dockyards. They were sent to the school for the final three years of their seven-year apprenticeship, to be taught mathematics by Woolley and shipbuilding by the master shipwright of the dockyard. Unusually, they were also taught chemistry in a laboratory created at the back of the school for the use of W.J. Hay, the chemical assistant of the dockyard.
Alumni
Sir Edward James Reed - Chief Constructor of the Royal Navy from 1863 until 1870
Sir Nathaniel Barnaby - Reed's successor and brother-in-law
Frederick Kynaston Barnes - Naval Architect
References
Journal of the Statistical Society of London, Volume 16 (1853) p 210
Further reading
H. W.Dickinson, 'Joseph Woolley - Pioneer of British Naval Education; 1848 - 1873', Education Research and Perspectives (2007) 34(1) pages 1–26
External links
1848 establishments in England
Marine engineering organizations
History of the Royal Navy
Education in Portsmouth
Former training establishments of the Royal Navy | School of Mathematics and Naval Construction | Engineering | 403 |
33,562,060 | https://en.wikipedia.org/wiki/Cuscuta%20pacifica | Cuscuta pacifica is a species of dodder. Its common name is goldenthread.
Distribution
The plant is native to the coast of western North America from British Columbia to Baja California. It is a halophyte, living in coastal salt marsh habitats, such as the San Francisco Bay.
Description
Cuscuta pacifica is a slender annual vine with yellowish thread-like stems that wrap tightly around other plants. The leaves are reduced to tiny scales, and it possesses no roots because it is a parasitic plant, like all Cuscuta, and taps nutrients from host plants with its haustoria.
The salt marsh dodder produces flowers with bell-shaped, white glandular corollas with five-pointed triangular lobes. It tends to parasitize Salicornia, but also may be found on other species such as Jaumea carnosa and Grindelia stricta.
Recently, it has become clear that waterfowl might be involved in the dispersal of the species' seeds, as has been confirmed for C. campestris.
Varieties
The species includes two varieties.
Cuscuta pacifica var. pacifica is significantly more common, occurring throughout the species range.
Cuscuta pacifica var. papillata is a very rare endemic of sand-dune habitats in Mendocino County.
Previous treatments included this species as two varieties of a more broadly defined Cuscuta salina, but they were recently recognized to be a distinct species with clear habitat and host affinities and reproductively isolated from Cuscuta salina var. salina.
References
External links
CalFlora Database: Cuscuta pacifica (Goldenthread)
Cuscuta pacifica — CalPhotos gallery
pacifica
Halophytes
Flora of the West Coast of the United States
Flora of British Columbia
Flora of Baja California
Flora of California
Flora of Oregon
Flora of Washington (state)
Natural history of the California chaparral and woodlands
Natural history of the San Francisco Bay Area
Plants described in 2009
Flora without expected TNC conservation status | Cuscuta pacifica | Chemistry | 415 |
3,189,101 | https://en.wikipedia.org/wiki/Intermediate%20bulk%20container | Intermediate bulk containers (also known as IBC, IBC tote, or pallet tank) are industrial-grade containers engineered for the mass handling, transport, and storage of liquids, semi-solids, pastes, or granular solids. There are several types of IBCs with the two main categories being flexible IBCs and rigid IBCs. Many IBCs are reused with proper cleaning and reconditioning or repurposed.
IBCs are roughly pallet sized and either attach to a pallet or have integral pallet handling features. This type of packaging is frequently certified for transporting dangerous goods or hazardous materials. Proper shipment requires the IBC to comply with all applicable regulations.
Types
Rigid IBC
Rigid intermediate bulk containers are a type of Bulk box. They can be reusable, versatile containers with an integrated pallet base mount that provides forklift and pallet jack manoeuvrability. These containers can be made from metal, plastic, or a composite construction of the two materials. Rigid IBC design types are manufactured across a volume range that is in between those of standard shipping drums and intermodal tank containers, hence the title "intermediate" bulk container. IBC totes are authorized per Title 49 CFR codes to be fabricated of a volume up to while maintaining the IBC name and their federal shipping and handling permits. Many rigid IBCs are designed to be stackable.
Containers for hazardous and dangerous fluids must carry UN-recognized markings that enable the operator and those handling the container to ensure suitability with the contained cargo and associated handling requirements.
IBC tank capacities generally used are often . Intermediate bulk containers are standardized shipping containers often UN/DOT certified for the transport handling of hazardous and non-hazardous, packing group II and packing group III commodities. Many IBC totes are manufactured according to federal and NSF/ANSI regulations and mandates and are often IMDG approved as well for domestic and maritime transport. Metal alloy IBC tanks are also manufactured according to NFPA and UL142 certification standards for extensive storage of materials labelled as flammable and/or combustible.
Intermediate bulk containers can be manufactured from various materials based on the requirements of the application or service the IBC will be used for. Traditional materials include:
Plastic (high-density polyethylene)
Composite: galvanized steel and plastic
Carbon steel
Stainless steel (304 and 316/316L grades)
Collapsible IBC
Collapsible IBC tanks are designed so that they can be folded when needed to save space when empty or used for return transport. They can also be stacked to save storage space. The replaceable plastic bags with a typical volume of 500 or 1000 liters make the container easy to clean and reuse, which is needed for use with food, as strict hygiene regulations must be observed. The space-saving intermediate bulk containers are used in the pharmaceutical, cosmetics and food industries. Several designs have been developed.
Caged IBC
Caged IBCs are composite structures and are in very common usage. Caged IBCs are often utilized as one-use containers, especially when it comes to hazardous materials, but are also suitable for cleaning and reuse under many conditions. This IBC type often features an interior liner, blow-mold manufactured from polyethylene, that is structurally supported by a protective cage frame, often of galvanized steel composition. Caged IBCs are engineered for the bulk handling of liquids, semi-solids, as well as solid materials. All materials can present certain safety and compatibility concerns, especially hazardous liquids, and proper guidance is always recommended whenever using caged IBC totes for harsh chemicals.
Caged IBC totes are thermoplastic blow-mold engineered, often from virgin high-density polyethylene (HDPE), a BPA-free, strong plastic. A caged tote typically has a top inlet port with a cap for filling (commonly 6"), a bottom discharge outlet port (commonly a 2" ball valve), and an integrated pallet-base skid for maneuvering the IBC. The pallet base of composite IBCs often features four-way access channels for universal handling by moving equipment such as forklifts and pallet jacks.
Caged IBCs are engineered to be mobile, convenient, consistent, durable, and chemically compatible containers for use across many industries. The high-density polyethylene used in the construction of rigid, poly caged IBC totes is a durable thermoplastic chosen for its compatibility with many chemicals and materials often employed throughout industry, commercial applications, agriculture, and consumer uses; caged IBCs are also often repurposed for aquaponic gardening.
Caged intermediate bulk containers are standardized for manufacture to near a commonly accepted pallet size. Caged IBCs are often 1,200 x 1,000 x 1,150 mm (45" x 40" x 46") for 1,000 L and 1,200 x 1,000 x 1,350 mm (48" x 40" x 53") for 1,250 L. Both volume types are available as new, rebottled, or reconditioned units, where rebottled means a brand-new HDPE liner in a previously used but certified steel cage, and reconditioned means a previously used but cleaned and certified HDPE liner and cage.
Flexible IBC - big bag
A standard flexible intermediate bulk container can hold and manufacturers offer bags with a volume of .
Flexible intermediate bulk containers are made of woven polyethylene or polypropylene or other heavy polymers. Bags are designed for storing or transporting dry, flowable products, such as sand, fertilizer, and plastic granules. They typically have lifting straps but are frequently handled on a pallet.
Engineered design
Most IBCs are cube-shaped and this cube-shaped engineering contributes to the packaging, stacking, storing, shipping, and overall space efficiency of intermediate bulk containers. Rigid IBC totes feature integrated pallet bases with dimensions that are generally near the common pallet standard dimension of or . IBC container’s pallet base is designed for universal maneuverability via forklift/pallet jack channels. Almost all rigid IBCs are designed so they can be stacked vertically one atop the other using a forklift. Most have a built-in tap (valve, spigot, or faucet) at the base of the container to which hoses can be attached, or through which the contents can be poured into smaller containers.
The most common IBC sizes of 275 and 330 US gallons fit on a single pallet of similar dimensions to pallets which hold 4 drums (220 US gallons), providing an extra 55-110 gallons of product in the IBC over drum storage, a 25%-50% increase for the same storage footprint. Additionally, IBCs can be manufactured to a customer's exact requirements in terms of capacity, dimensions, and material.
Advantages
There are many advantages to the engineering and design of the IBC model:
Being cubic in form, they can transport more material in the same footprint compared to cylindrical-shaped containers, and far more than might be shipped in the same space compared to packaging in consumer quantities.
Composite IBCs rely on plastic liners that can be filled and discharged with a variety of systems.
The manufacturer/processor of a product can bulk package a product in one country and ship to many other countries at a reasonably low cost where it is subsequently packaged in final consumer form in accordance with the regulations of that country and in a form and language suitable for that country.
High organization, mobility, integration capabilities.
Shorten logistics and handling timelines and increase efficiency and capacity through single-container filling, moving, loading, transit, and dispensing.
Potential long term assets given the durability of IBC construction materials.
Provides a reliable and consistent way to handle or store materials.
Uses
IBCs are often used to ship, handle, and/or store:
Bulk chemicals including hazardous materials or dangerous goods
Commodities and raw materials used in industrial production
Liquid, granulated, and powdered food ingredients
Food syrups, such as corn syrup, maple syrup or molasses
Petrochemical products, such as oil, gas, solvents, detergents, or adhesives
Rainwater
Used IBCs are the basic building blocks for many home aquaponic systems
Paints and industrial coatings
Pharmaceutical compounds, ingredients, intermediates, batch products
Healthcare related items, solid commodities, bio-waste, waste materials
Vineyards, wine fermentation, spirits production
Agriculture, nursery, greenhouse uses and chemicals
Many water, waste water, process water applications across industry sectors
Used by land owners to support firefighting activities
Acquisition and disposal
Intermediate bulk containers may be purchased or leased. Bar code and RFID tracking systems are available with associated software.
An IBC can be purchased as a new unit (bottle and cage), a rebottled unit (new bottle and washed cage) or a washed unit (both bottles and cages have been washed). A washed unit is typically less expensive, with the new unit being the most expensive, and the rebottled unit near the mid-point. In many cases, a customer may purchase a mix (“blend”) of these types of units under a single price, to simplify the accounting.
The customer's choice of unit primarily depends on either the actual or perceived sensitivity of their product to contamination, and the overall ability to clean their specific product type from the bottle. Those with a lower contamination risk are prime candidates for the washed units. With the exception of products produced in "clean rooms" (GMP - good manufacturing practices), the decision between a washed and a new unit is usually a matter of availability or appearance.
An IBC can be leased in a closed-loop (using only the IBCs which were used by a given customer and washed or rebottled) or the most common open-loop system (where the origin of the rebottled or wash unit is flexible). For plastic composite units, the trip lease has largely been replaced by a blended purchase.
Single-use flexible IBCs, such as those used for aggregate transportation in the construction industry, are a major source of plastic pollution. Most aggregate suppliers do not offer a scheme to refund a deposit upon the return of empty IBCs, and in the UK they are frequently fly-tipped and seen abandoned at the roadside.
Safety
When exposed to fire, as in a warehouse blaze, plastic IBCs containing combustible or flammable liquids can melt or burn fairly rapidly, releasing their entire contents and increasing the fire hazard by the sudden addition of combustible fuel. Rigid plastic (such as high-density polyethylene) IBCs that transport and house flammables and combustibles are recommended to have clear labeling and to be stored within properly secured structures and according to federal regulations, such as NFPA and OSHA requirements. Metal IBCs (such as carbon steel and stainless steel) are often approved per UL 142 requirements for housing these materials long term. Accordingly, metal IBC tanks can be used for Class I materials, while rigid plastic IBCs can be used for Class II/III materials.
Concerning the mechanical stability and sloshing of intermediate bulk containers during transport, some research has been performed through the U.S. Department of Transportation which seems to indicate that IBC containers perform overall very well during transit in terms of sloshing and mechanical stability.
For metal IBCs, test reports by the German Bundesanstalt für Materialforschung und -prüfung (BAM) show that a metal IBC can withstand fire for at least 30 minutes, if it is equipped with a pressure venting device.
See also
Barrel
Bulk box
CargoBeamer
Drum
Drum pump
Dutch flower bucket
Flexible intermediate bulk container
Intermodal container
Liquid hydrogen tanktainer
Roller container
Spill pallet
Tank container
Wooden box
References
Standards
ASTM D1693- Standard Test Method for Environmental Stress-Cracking of Ethylene Plastics
ASTM D6179- Standard Test Methods for Rough Handling of Unitized Loads and Large Shipping Cases and Crates
ASTM D7387- Standard Test Method for Vibration Testing of Intermediate Bulk Containers (IBCs) Used for Shipping Liquid Hazardous Materials (Dangerous Goods)
ISO 13274 - Packaging — Transport packaging for dangerous goods — Plastics compatibility testing for packaging and IBCs
Further reading
Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009,
External links
Packaging
Shipping containers
Mechanical standards | Intermediate bulk container | Engineering | 2,570 |
16,314,447 | https://en.wikipedia.org/wiki/Robert%20McCarley | Robert W. McCarley, MD, (1937–2017) was Chair and Professor of Psychiatry at Harvard Medical School and the VA Boston Healthcare System. He was also Director of the Laboratory of Neuroscience located at the Brockton VA Medical Center and the McLean Hospital. McCarley was a prominent researcher in the field of sleep and dreaming as well as schizophrenia.
McCarley graduated from Harvard College in 1959 and Harvard Medical School in 1964. During his residency at Massachusetts Mental Health Center, he studied with J. Allan Hobson. In 1977, Hobson and McCarley developed the activation synthesis theory of dreaming that said that dreams do not have meanings and are the result of the brain attempting to make sense of random neuronal firing in the cortex. McCarley has extensively studied the brainstem mechanisms that control REM sleep. Additionally, he has studied the buildup of adenosine in the basal forebrain following sleep deprivation.
In the area of schizophrenia, McCarley has studied brain abnormalities in patients with schizophrenia. McCarley and Martha Shenton published a classic paper in 1992 that described a relationship in a reduction in the volume of the left superior temporal gyrus and thought disorder in patients with schizophrenia.
McCarley has been presented with many awards for his research. In 1998, he received William S. Middleton Award which is the highest honor awarded to a VA biomedical research scientist. He has also been presented awards from the Sleep Research Society, American Psychiatric Association, and American Academy of Sleep Medicine.
In 2007, McCarley was ranked as the ninth most cited author in the field of schizophrenia research over the past decade. McCarley has published around 300 research articles and several books and book chapters such as Brain Control of Wakefulness and Sleep.
References
External links
An ESSAY with Dr. Robert McCarley
Faculty Profile, Harvard Medical School
1937 births
2017 deaths
American psychiatrists
Sleep researchers
Harvard Medical School faculty
Harvard Medical School alumni
People from Mayfield, Kentucky
Harvard College alumni
McLean Hospital physicians | Robert McCarley | Biology | 403 |
3,093,672 | https://en.wikipedia.org/wiki/Common%20beta%20emitters | Various radionuclides emit beta particles, high-speed electrons or positrons, through radioactive decay of their atomic nucleus. These can be used in a range of different industrial, scientific, and medical applications. This article lists some common beta-emitting radionuclides of technological importance, and their properties.
Fission products
Strontium
Strontium-90 is a beta emitter commonly used in industrial sources. It decays to yttrium-90, which is itself a beta emitter. It is also used as a thermal power source in radioisotope thermoelectric generator (RTG) power packs. These use the heat produced by the radioactive decay of strontium-90, which can be converted to electricity using a thermocouple. Strontium-90 has a shorter half-life, produces less power, and requires more shielding than plutonium-238, but is cheaper as it is a fission product and is present in a high concentration in nuclear waste and can be relatively easily chemically extracted. Strontium-90 based RTGs have been used to power remote lighthouses. As strontium is water-soluble, the perovskite form strontium titanate is usually employed as it is not water-soluble and has a high melting point.
Strontium-89 is a short-lived beta emitter which has been used as a treatment for bone tumors, this is used in palliative care in terminal cancer cases. Both strontium-89 and strontium-90 are fission products.
Neutron activation products
Tritium
Tritium is a low-energy beta emitter commonly used as a radiotracer in research and in traser self-powered lighting. The half-life of tritium is 12.3 years. The electrons from beta emission from tritium are so low in energy (average decay energy 5.7 keV) that a Geiger counter cannot be used to detect them. An advantage of the low energy of the decay is that it is easy to shield, since the low-energy electrons penetrate only to shallow depths, reducing the safety issues in dealing with the isotope.
Tritium can also be found in metal work in the form of a tritiated rust, this can be treated by heating the steel in a furnace to drive off the tritium-containing water.
Tritium can be made by the neutron irradiation of lithium.
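As a rough illustration of what the 12.3-year half-life quoted above means in practice, the sketch below applies the standard exponential-decay law. Only the half-life figure is assumed; the code is illustrative and not part of any measurement procedure.

# Illustrative sketch: fraction of tritium remaining after a given time,
# using the standard decay law N(t) = N0 * 0.5 ** (t / T_half).
T_HALF_YEARS = 12.3  # tritium half-life quoted above

def fraction_remaining(years, t_half=T_HALF_YEARS):
    """Fraction of the original tritium atoms still undecayed after `years`."""
    return 0.5 ** (years / t_half)

if __name__ == "__main__":
    for t in (1, 5, 12.3, 25, 50):
        print(f"after {t:5.1f} years: {fraction_remaining(t):.1%} remaining")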
Carbon
Carbon-14 is also commonly used as a beta source in research, most often as a radiotracer in organic compounds. While the energy of its beta particles is higher than that of tritium, they are still quite low in energy; for instance, the walls of a glass bottle are able to absorb them. Carbon-14 is made by the np reaction of nitrogen-14 with neutrons. It is generated in the atmosphere by the action of cosmic rays on nitrogen. A large amount was also generated by the neutrons from the air bursts during nuclear weapons testing conducted in the 20th century. The specific activity of atmospheric carbon increased as a result of the nuclear testing, but due to the exchange of carbon between the air and other parts of the carbon cycle it has now returned to a very low value. For small amounts of carbon-14, one of the favoured disposal methods is to burn the waste in a medical incinerator; the idea is that by dispersing the radioactivity over a very wide area the threat to any one human is very small.
Phosphorus
Phosphorus-32 is a short-lived, high-energy beta emitter used in research as a radiotracer. It has a half-life of 14 days. It can be used in DNA research. Phosphorus-32 can be made by the neutron irradiation (np reaction) of sulfur-32 or from phosphorus-31 by neutron capture.
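Written out as nuclear reaction equations (standard notation, added here for clarity rather than taken from a cited source), the two production routes described above are:
³²S + n → ³²P + p (the n,p reaction on sulfur-32)
³¹P + n → ³²P + γ (neutron capture on phosphorus-31)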
Nickel
Nickel-63 is a radioisotope of nickel that can be used as an energy source in Radioisotope Piezoelectric Generators. It has a half-life of 100.1 years. It can be created by irradiating nickel-62 with neutrons in a nuclear reactor.
See also
Commonly used gamma-emitting isotopes
Betavoltaics
References
External links
List of Pure Beta Emitters, (U. Wisconsin Madison)
Nuclear physics
Nuclear chemistry
Radioactivity
Isotopes
Nuclear materials | Common beta emitters | Physics,Chemistry | 891 |
5,929,146 | https://en.wikipedia.org/wiki/PikeOS | PikeOS is a commercial hard real-time operating system (RTOS) which has a separation kernel-based hypervisor that supports multiple logical partition types for various operating systems (OS) and applications, each referred to as a GuestOS. PikeOS is engineered to support the creation of certifiable smart devices for the Internet of Things (IoT), ensuring compliance with industry standards for quality, safety, and security across various sectors. In instances where memory management units (MMU) are not present but memory protection units (MPU) are available on controller-based systems, PikeOS for MPU is designed for critical real-time applications and provides up-to-standard safety and security.
Overview
PikeOS was introduced in 2005 and combines a real-time operating system (RTOS) with a virtualization platform and Eclipse-based integrated development environment (IDE) for embedded system (embedded systems). It is a commercial clone of the L4 microkernel family. PikeOS has been developed for safety and security-critical applications with certification needs in the fields of aerospace, defense, automotive, transport, industrial automation, medical, network infrastructures, and consumer electronics. The PikeOS separation kernel (v5.1.3) is certified against Common Criteria at EAL5+.
One of the key features of PikeOS is its ability to safely execute applications with different safety and security levels concurrently on the same computing platform. This is done by strict spatial and temporal segregation of these applications via software partitions. A software partition can be seen as a container with pre-allocated privileges that can have access to memory, central processing unit (CPU) time, input/output (I/O), and a predefined list of OS services. With PikeOS, the term application refers to an executable linked against the PikeOS application programming interface (API) library and running as a process inside a partition. The nature of the PikeOS application programming interface (API) allows applications to range from simple control loops up to full paravirtualized guest operating systems like Linux or hardware virtualized guests.
Software partitions are also called virtual machines (VMs), because it is possible to implement a complete guest operating system inside a partition which executes independently from other partitions and thus can address use cases with mixed criticality. PikeOS can be seen as a Type-1 hypervisor.
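The strict temporal segregation described above can be illustrated with a generic, ARINC 653-style fixed-cycle schedule. The sketch below is not PikeOS code and does not use its real configuration format or API; the partition names and window lengths are invented. It only shows the idea of pre-allocated CPU time windows repeating in a fixed major frame, so that no partition can consume another partition's time budget.

# Generic illustration of fixed-cycle time partitioning (not PikeOS code or
# its real configuration format; names and durations are invented).
from dataclasses import dataclass

@dataclass
class TimeWindow:
    partition: str      # which software partition may run in this window
    duration_ms: int    # CPU budget pre-allocated inside the major frame

# One "major frame" that repeats forever; each partition only ever runs
# inside its own windows, so a misbehaving guest cannot starve the others.
MAJOR_FRAME = [
    TimeWindow("flight_control", 5),   # hypothetical safety-critical guest
    TimeWindow("linux_guest", 10),     # hypothetical non-critical Linux partition
    TimeWindow("diagnostics", 5),
]

def run_cycles(n_frames):
    t = 0
    for _ in range(n_frames):
        for win in MAJOR_FRAME:
            print(f"t={t:3d} ms: {win.partition} runs for {win.duration_ms} ms")
            t += win.duration_ms

if __name__ == "__main__":
    run_cycles(2)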
Supported toolchain, IDE CODEO
The Eclipse-based IDE CODEO supports system architects with graphical configuration tools, providing all the components that software engineers will need to develop embedded applications, as well as including comprehensive wizards to help embedded project development in a time-saving and cost-efficient way:
Guided configuration
Remote debugging (down to the hardware instruction level)
Target monitoring
Remote application software deployment
Timing analysis
Several dedicated graphical editing views help the system integrator keep an overview of important aspects of the PikeOS system configuration, showing partition types, scheduling, communication channels, shared memory, and I/O device configuration within partitions.
Projects can be easily defined with the help of reusable templates and distributed to the development groups. Users can configure predefined components for their project and can also define and add other components during the development process.
Main benefits
Real-time operating system including Type-1 hypervisor defined for flexible configuration
Supports fast or secure boot
Supporting mixed criticality via separation kernel in one system
Configuration of partitions with time and hardware resources
Kernel driver and user space drivers supported
Hardware independence between processor types and families
Easy migration processes and high portability on single- and multi-core
Developed to support certification according to multiple safety and security standards
Reduced time to market via standard development and verification tools
No export restriction: European solution
Certification standards
Safety certification standards according to:
Radio Technical Commission for Aeronautics (RTCA) – DO-178B/C
International Organization for Standardization (ISO) – 26262
International Electrotechnical Commission (IEC) – 62304, 61508
EN – 50128, 50657
Security certification standards according to:
Common Criteria
SAR (?)
Partner ecosystem
SYSGO is committed to establishing the technology and business partnerships that will help software engineers achieve their goals. SYSGO is working with about 100 partners globally.
An excerpt of partners per category is mentioned below:
Board vendors: Curtiss-Wright Controls Embedded Computing, Kontron, MEN or ABACO
Silicon vendors: NXP, Renesas, Texas Instruments (TI), Xilinx, Infineon, NVidia or Intel
Software partners: CoreAVI, wolfSSL, Aicas, AdaCore, Esterel, Apex.AI, RTI, PrismTech, Datalight, Systerel, Imagination Technologies or RAPITA
Tool partners: Lauterbach, Vector Software, Rapita, iSYSTEM
Supported architectures: ARM, PowerPC, x86, or SPARC (on request)
Supported GuestOS types
Linux or Android (ideally SYSGO Linux distribution ELinOS)
POSIX PSE51 with PSE52 extensions
ARINC 653
RTEMS
Java
AUTOSAR
Ada, including Ravenscar profile
and others
End-of-life overview
References
External links
PikeOS Official Product Site
PikeOS Product Note (PDF)
PikeOS Flyer (PDF)
Real-time operating systems
Microkernels
Virtualization software
Embedded operating systems
ARM operating systems
Microkernel-based operating systems
IA-32 operating systems
X86-64 operating systems | PikeOS | Technology | 1,112 |
37,019,012 | https://en.wikipedia.org/wiki/64%20Eridani | 64 Eridani is a single, yellow-white hued star in the constellation Eridanus having variable star designation S Eridani. It is faintly visible to the naked eye with an apparent visual magnitude of 4.77. The annual parallax shift is measured at , which equates to a distance of about . In addition to its proper motion, it is moving closer to the Sun with a radial velocity of around −9 km/s.
This is an F-type main-sequence star with a stellar classification of F0 V. It is catalogued as a low-amplitude Delta Scuti variable with a primary period of 0.273 days. It was originally classified, tentatively, as an RR Lyrae variable of type 'c'.
64 Eridani is spinning rapidly with a projected rotational velocity of 212 km/s. This rotation gives the star an oblate shape with an equatorial bulge; its equatorial radius is 8% larger than its polar radius. The star is an estimated 644 million years old with 1.5 times the mass of the Sun. It is radiating 80 times the Sun's luminosity from its photosphere at an effective temperature of roughly 7,346 K.
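As a rough consistency check (not a catalogued value), the quoted luminosity and effective temperature imply a radius via the Stefan–Boltzmann law L = 4πR²σT⁴. Taking L ≈ 80 L☉, Teff ≈ 7,346 K and T☉ ≈ 5,772 K:
R/R☉ = √(L/L☉) × (T☉/Teff)² ≈ √80 × (5772/7346)² ≈ 5.5
so the star's mean radius would be roughly five to six times that of the Sun under these assumptions.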
References
F-type main-sequence stars
Delta Scuti variables
RR Lyrae variables
Eridanus (constellation)
Eridani, S
Durchmusterung objects
Eridani, 64
032045
023231
1611 | 64 Eridani | Astronomy | 295 |
2,495,244 | https://en.wikipedia.org/wiki/Field%20recording | Field recording is the production of audio recordings outside recording studios, and the term applies to recordings of both natural and human-produced sounds. It can also include the recording of electromagnetic fields or vibrations using different microphones like a passive magnetic antenna for electromagnetic recordings or contact microphones, or underwater field recordings made with hydrophones to capture the sounds and/or movements of whales or other sea life. These recordings are often regarded as being very useful for sound designers and foley artists.
Field recording of natural sounds, also called phonography (a term chosen because of the similarity of the practice to photography), was originally developed as a documentary adjunct to research work in the field, and Foley work for film. With the introduction of high-quality, portable recording equipment, it has subsequently become an evocative artform in itself. In the 1970s, both processed and natural phonographic recordings, (pioneered by Irv Teibel's Environments series), became popular.
"Field recordings" may also refer to simple monaural or stereo recordings taken of musicians in familiar and casual surroundings, such as the ethnomusicology recordings pioneered by John Lomax, Nonesuch Records, and Vanguard Records.
Techniques
General
Field recording often involves the capture of ambient noises that are low level and complex, and, in response, the requirements from the field recordist have often pushed the technical limits of recording equipment, that is, demanding low noise and extended frequency response in a portable, battery-powered unit. For this reason, field recordists have favoured high-quality (usually professional) recorders, microphones, and microphone pre-amplifiers. The history of the equipment used in this area closely tracks the development of professional portable audio recording technology. Modern accessories used in the field include, but are not limited to: windscreens (foam, fur, hair, parabolic reflector), shock mounts, microphone cables, digital audio recorders and so on.
Field recording is typically recorded in the same channel format as the desired result, for instance, stereo recording equipment will yield a stereo product. In contrast, a multitrack remote recording captures many microphones on multiple channels, later to be creatively modified, augmented, and mixed down to a specific consumer format.
Field recording experienced a rapid increase in popularity during the early 1960s, with the introduction of high-quality, portable recording equipment, (e.g., the Uher, and Nagra portable reel-to-reel decks). The arrival of the DAT (Digital Audio Tape) in the 1980s introduced a new level of audio recording fidelity with extended frequency response and low self-noise. In addition to these technologies, other popular means for field recording have included the analog cassette (CAC), the DCC (Digital Compact Cassette), and the MiniDisc. The latest generation of recorders are completely digital-based (hard disk/Flash). It is also possible to use personal electronic devices, (e.g., a smartphone or tablet), with software, to do field recording and editing.
In addition to recording and editing, the process of field recording also involves the ability to monitor (observing the relevant signals to ensure the recording and settings are correct), control levels (keeping the correct decibel range and headroom), create neat documentation (handling, annotating, and tagging the recorded material), clean up (cutting out unwanted noises, processing, etc.), and manage files.
Basic Techniques
There are three basic techniques that involve the placement of field recording microphones which result in varying directivity. The three techniques are known as A/B, XY, and M/S.
A/B
A/B is also known as the spaced pair. It is formed by setting two separate microphones (either cardioid or omnidirectional) in parallel with one another. Intentional space is left between the two microphones in order to capture a wide stereo image of a desired sound. This technique is often utilized in indoor recordings of multi-string instrumental settings, music ensembles, and so on.
XY
XY is the most frequently used stereo recording technique. It typically involves setting a complementary pair of directional microphones in a coincident (XY) pattern, with the two capsules placed as close together as possible and angled apart. Because the capsules are effectively coincident, the stereo image comes mainly from level (intensity) differences between the two microphones rather than from arrival-time differences, and the technique produces a remarkably rich sense of ambiance while remaining mono-compatible. There is a downside, however: the image is fixed at recording time, so widening or shrinking it to control the ambiance afterwards is not possible.
M/S
Unlike XY, the M/S technique was created to allow for control over the level of ambiance. The logic behind it is that the Mid microphone functions as a center channel, and the Side microphone adds additional ambiance that can either be intensified or subtracted. This can take place either live during the recording or afterwards during editing.
Physically, the layout involves a directional microphone as the center (Mid) channel, with a bidirectional (figure-eight) microphone placed 90 degrees off-axis from the sound source so that it captures sound arriving from the sides. During the editing phase, the audio track from the Side microphone is split into two channels, one panned hard left (100% L) and one hard right (100% R).
One of the two copies (left or right) should then be processed by reversing its phase; visually, this means flipping the waveform upside down. The difference this creates between the two channels is what produces the sense of ambiance.
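The duplicate, pan, and invert routine described above is equivalent to the standard mid/side decode, in which the left channel is Mid plus Side and the right channel is Mid minus Side. The following sketch (using NumPy; the width parameter is a stand-in for the Side level fader and is not a term from the text) illustrates the idea:

```python
import numpy as np

def ms_decode(mid: np.ndarray, side: np.ndarray, width: float = 1.0) -> np.ndarray:
    """Decode mid/side signals into a stereo (left, right) array.

    `width` stands in for the Side level fader: 0.0 collapses to mono
    (Mid only); larger values widen the stereo image and the ambiance.
    """
    s = width * side
    left = mid + s       # Side copy panned hard left, in phase
    right = mid - s      # Side copy panned hard right, phase inverted
    return np.stack([left, right], axis=-1)

# Example with one second of placeholder audio at 48 kHz
sr = 48_000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
mid = np.sin(2 * np.pi * 440 * t)         # stand-in for the Mid microphone
side = 0.2 * np.random.randn(sr)          # stand-in for the Side microphone
stereo = ms_decode(mid, side, width=0.8)  # shape (48000, 2)
```

Raising the width value plays the same role as pushing up the Side fader during recording or editing: it increases the contribution of the Side signal and with it the perceived ambiance.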
New techniques
Newly developed techniques include the creative placement of microphones, (including contact microphones and hydrophones, for example), the diffusion of captured sounds, and individual approaches.
Career
A field recordist is an individual who works to produce field recordings. Typically the work involves recording sound outside of a controlled environment like a studio (field recording is the counterpart of studio recording), to be used or repurposed as sound effects that get inserted into all sorts of media, such as plays, video games, films, and television shows. A career as a professional field recordist is a tough but potentially rewarding one. A field recordist must often face ever-changing weather, remain patient, and be willing to capture sounds in potentially dangerous locations. A typical day could range from recording ambient noise in a library to recording the thundering sounds of a grand waterfall. Just as the recordings can vary, so can the amount of work. Most field recordists work as freelancers, with other side jobs to support them through the slow periods of recording.
Brief early history of field recordings
The earliest known field recording is of a shama bird, recorded in 1889 by Ludwig Koch on a wax cylinder. This was the first documented recording of a non-human subject. The distinction between whether field recordings are art or music is still ambiguous, as they serve both purposes. Early examples of this important yet little-known field include Walter Ruttmann's Weekend (a radio piece assembled from recordings of daily life in Berlin) and Ludwig Koch's 'sound-books' (which educated listeners in species identification using gramophone records of birdsong). These field recordings and many others were later pressed on vinyl and sold to enthusiasts, hobbyists, and tourists alike in the 1950s, 60s, and 70s.
Research
Ethnomusicology
Field recording was originally a way to document oral presentations and ethnomusicology projects (pioneered by Béla Bartók, Charles Seeger, and John Lomax). In the case of Bartók, his own studies helped alter the generally unfavorable view of Eastern European folk music at that time. He grew to admire numerous regional styles from both firsthand experience and recordings, eventually incorporating these styles into his own compositional works.
Bioacoustics
Field recording is an important tool in bioacoustics and biomusicology, most commonly in research on bird song. Animals in the wild can display very different vocalizations from those in captivity. Ambient noise in urban environments has also been shown to alter the vocalizations of local bird populations.
In addition to birds, whales are also frequently studied using field recordings. COVID-19 has had largely negative effects on the world as a whole, but recent field recordings have shown that whales became less stressed and generally healthier during the pandemic. This is due to a large decline in international commerce and shipping, and by extension much less noise and disturbance in the ocean's soundscape.
Art
Music
The use of field recordings in avant-garde, musique concrète, experimental, and, more recently, ambient music was evident almost from the birth of recording technology. Most noteworthy for pioneering the conceptual and theoretical framework with art music that most openly embraced the use of raw sound material and field recordings was Pierre Schaeffer, who was developing musique concrète as early as 1940. Further impetus was provided by the World Soundscape Project, initiated by Canadian composer R. Murray Schafer in the 1970s; this work involved studying the acoustic ecology of a particular location by the use of field recordings.
Field recordings are now a common source material for a range of musical results, from contemporary musique concrète compositions to film soundtracks, video game soundtracks, and effects. Chris Watson, formerly of Cabaret Voltaire, is now perhaps the world's leading exponent of this art, with his recordings used for David Attenborough's series for the BBC, programmes for BBC Radio, and many other outlets. Another notable application of field recordings in contemporary music is their inclusion in some vaporwave tracks, commonly recordings of public areas such as malls or grocery stores, used to add atmosphere.
Another example is the American musician Stuart Hyatt, who combines his field recordings with experimental music made by himself and other musicians.
Sounds recorded by any device and then transferred to a digital format are used by some musicians in performance with MIDI-interfaced instruments. A contemporary artist who has had great success with such compositions is Christian Fennesz.
An earlier innovator noted for the importance and boldness of his projects is Luigi Russolo, who in 1913, with his manifesto L'arte dei rumori (The Art of Noises), gave musical value to environmental noise. He also designed and built the Intonarumori, the first instruments for making noise. Francesco Balilla Pratella utilized the Intonarumori in his opera L'aviatore Dro, which was written in close collaboration with Filippo Tommaso Marinetti (the founder of the Futurist movement).
Radio documentary
Radio documentaries often use recordings from the field, e.g., a locomotive engine running, for evocative effect. This type of sound functions as the non-fictional counterpart to the sound effect.
Politics
During the early years of commercial recordings, the speeches of politicians sold well, since few people had radios. The HMV ("His Master's Voice") catalogue for 1914–1918 lists over a dozen such records. Probably the last time such records sold well was in 1965, when the LP, The Voice of Churchill, reached number 7 in the UK album charts. This was immediately after Churchill's death.
See also
Biomusic
Lowercase
The Freesound Project
Sound art
Soundscape
Sound map
References
External links
Early history
Phonography.org
Sound Transit
Audio engineering
Data collection in research
Field recording
Hobbies
Sound recording | Field recording | Engineering | 2,406 |
851,084 | https://en.wikipedia.org/wiki/Koru | The koru is a spiral shape evoking a newly unfurling silver fern frond. It is an integral symbol in Māori art, carving and tattooing, where it symbolises new life, growth, strength and peace.
Its shape "conveys the idea of perpetual movement," while the inner coil "suggests returning to the point of origin".
Use in traditional design
The koru is the integral motif of the symbolic and seemingly abstract kōwhaiwhai designs traditionally used to decorate wharenui (meeting houses). There are numerous semi-formal designs, representing different features of the natural world.
More recent adaptations
The logo of Air New Zealand, the national carrier, incorporates a koru design — based on the Ngaru (Ngāti Kahungunu) kōwhaiwhai pattern — as a symbol of New Zealand flora. The logo was introduced in 1973 to coincide with the arrival of the airline's first McDonnell Douglas DC-10 wide-body jet. Several other nationwide organisations also use a koru in their logos, among them the New Zealand Department of Conservation.
In 1983, Friedensreich Hundertwasser based his proposed design for a secondary New Zealand flag on the symbol. It also formed the basis for a notable series of artworks by Gordon Walters. Koru swirls are also reminiscent of the Tomoe symbol in Japan.
The New Zealand national korfball team is nicknamed The Korus.
References
Māori art
Māori words and phrases
Visual motifs
National symbols of New Zealand
Spirals | Koru | Mathematics | 312 |
4,567,165 | https://en.wikipedia.org/wiki/Transfer%20matrix | In applied mathematics, the transfer matrix is a formulation in terms of a block-Toeplitz matrix of the two-scale equation, which characterizes refinable functions. Refinable functions play an important role in wavelet theory and finite element theory.
For the mask , which is a vector with component indexes from to ,
the transfer matrix of , we call it here, is defined as
More verbosely
The effect of can be expressed in terms of the downsampling operator "":
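The inline formulas in this section are missing from this copy of the text. For orientation, the definition as it is usually written for refinable-function masks is sketched below; treat it as a reconstruction from the surrounding wording rather than the article's exact notation:

```latex
% Assumed standard definition (a reconstruction, not the article's own text):
% for a mask h with components h_a, ..., h_b, the transfer matrix T_h has entries
\[
  (T_h)_{j,k} = h_{2j-k} .
\]
% Its action is often written with the downsampling operator \downarrow:
\[
  T_h\, x = (h * x) \downarrow 2 ,
\]
% i.e. convolve the signal with the mask, then keep every second sample.
```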
Properties
See also
Hurwitz determinant
References
(contains proofs of the above properties)
Wavelets
Numerical analysis | Transfer matrix | Mathematics | 126 |
51,179,097 | https://en.wikipedia.org/wiki/Dangyangyu%20Kiln | The Dangyangyu Kiln (当阳峪窑 Dangyangyu yao) was a private kiln in operation during the Northern Song dynasty, producing Cizhou ware. It is located in Xiuwu in Henan province, China, and is also known as the Xiuwu Kiln (修武窑 Xiuwu yao). Dangyangyu is sometimes presented as Tangyangyu, Tangyang Yu or Tang Yang Yu.
The site of the Dangyangyu kiln (当阳峪窑址 Dangyangyu yaozhi) was placed on a list of protected sites of Henan in 2006. According to Li Huibing (李辉柄) of the China Research Centre for Ancient Ceramics (中国古陶瓷研究会 Zhongguo gu taoci yanjiuhui), it ranks alongside the famous kilns Ru yao, Jun yao and Ding yao, which produced Ru ware, Jun ware and Ding ware.
Ceramics from the Dangyangyu Kiln
Ceramic ware from the Dangyangyu Kiln is present in collections of ceramics in China, and outside of China. For example, at the British Museum, and the Harvard Art Museums.
Further reading
刘涛 : 《当阳峪窑剔划花瓷器》 [Engraved ceramics from the Dangyangyu Kiln], Zhongyuan Wenwu 2000.01 (Beijing, 2000).
Zhongguo da baike quanshu: Kaoguxue (Archaeology). Beijing: Zhongguo da baike quanshu chubanshe, 1986 (online text)
X-ray emission fluorescence (XRF) analysis of origin of raw materials of light dark reddish brown porcelain and porcelain with black flower on a white background of Dangyangyu kiln (in English)
Jiaozuo Dangyangyu ciyao yizhi jianjie (in Chinese)
Li Huibing: Porcelain Exportation and Production in China (in English)
Notes
Chinese pottery kiln sites | Dangyangyu Kiln | Chemistry,Engineering | 417 |
1,404,572 | https://en.wikipedia.org/wiki/John%20Ramsbottom%20%28engineer%29 | John Ramsbottom (11 September 1814 – 20 May 1897) was an English mechanical engineer, born in Todmorden, then on the county border of Yorkshire and Lancashire. He was the Chief Mechanical Engineer of the London and North Western Railway for 14 years. He created many inventions for railways, but his main legacy is the split metal piston ring, which virtually all reciprocating engines continue to use today.
Early life
John Ramsbottom was born on 11 September 1814 in Todmorden. He was the third child of six and the youngest son. His parents were Henry and Sarah Ramsbottom, Henry was a cotton spinner. His grandfather was John Ramsbottom, a tailor.
He had little in the way of schooling having "a short time spent in a dame's school he went to four schoolmasters in succession, then to a baptist minister, then to a colleague of the latter, who taught him as far as simple equations".
Todmorden
His grandfather and father leased land in the Salford area of Todmorden, and in 1804-1805 established the "Steam Factory", the first steam powered cotton spinning mill in Todmorden.
His father gave him a lathe which he used to construct models of various steam engines for his own education and to entertain friends. He used these skills in the workplace rebuilding and modifying the mill engine. He was interested in other technologies, installing the 'new' coal gas illumination in the mill and manufacturing a machine to produce cut nails. In partnership with his uncle, Richard Holt, he took out a patent (No. 6644) in 1834 for an improvement to a power loom, described in Pennie (2007) as a "vertical loom and weft-fork". In 1836 he took out another patent (No. 6975) for "roving, spinning and doubling of fibres".
Manchester
In 1839 he went as a journeyman to Sharp, Roberts & Co. in Manchester. The company manufactured textile machinery and machine tools at their Atlas Works in Manchester. By the time Ramsbottom joined them they were also a successful manufacturer of steam locomotives and here he "gained practical knowledge of the design and construction of steam locomotives".
Three years later, in May 1842, on the recommendation of Charles Beyer (later to become co-founder of Beyer, Peacock and Company in 1854) Ramsbottom was appointed locomotive superintendent of the newly opened Manchester and Birmingham Railway (M&BR) at their new works at Longsight, Manchester. In November 1843 he was promoted to take charge of the newly formed locomotive and rolling stock department when the M&BR separated their civil and mechanical engineering departments, at a salary of £170.
In 1846 the M&BR merged and became part of the London and North Western Railway (L&NWR), and Ramsbottom became District Superintendent North Eastern Division, remaining at Longsight at an increased salary of £300.
Crewe
In 1857 the North and North Eastern divisions of the L&NWR were amalgamated into the Northern Division (lines north of Rugby), Ramsbottom became Northern locomotive superintendent, based at Crewe Works.
In 1862 the L&NWR Northern and Southern divisions were amalgamated with Ramsbottom becoming Chief Mechanical Engineer (CME) for the whole system. Shortly after a Bessemer Steel Works was authorised for installation at Crewe so that the L&NWR could produce its own steel for locomotive construction and rails.
During his time as CME he oversaw the enlargement and modernisation of Crewe works, he was responsible for the bulk production of cheap but capable locomotives, introduced ancillary works including the steel plant and a brickworks, and installed an internal narrow-gauge railway to facilitate the movement of material.
Post L&NWR
Ramsbottom retired in 1871 ostensibly because of ill health but likely because his request for a salary increase was turned down by the L&NWR Board of Directors. The L&NWR continued to pay Ramsbottom an annual stipend of £1,000 for several years after he left, valuing his expertise as a consultant if not as an employee.
However, in 1883 Ramsbottom became a consulting engineer and a director of the Lancashire and Yorkshire Railway (L&YR), where his major legacy was the establishment of Horwich Works on a greenfield site near Bolton.
Locomotive designs
During his time at Crewe he was responsible for the design and production of the following locomotives;
L&NWR 2-2-2 Cornwall, rebuilt to his design in 1858
L&NWR DX Goods class 0-6-0
L&NWR Lady of the Lake Class 2-2-2
L&NWR 4ft Shunter 0-4-0ST
L&NWR Samson Class 2-4-0
L&NWR Newton Class 2-4-0
L&NWR Special Tank 0-6-0ST
He laid the foundation for the L&NWR engine design by adopting inside frames, his safety valves, screw reversing gear and left-hand drive.
Innovations
Ramsbottom applied for 24 patents between 1834 and 1880 of which 23 were approved, they are listed below:
1834 - In partnership with his uncle, Richard Holt, he took out patent No. 6644 for an improvement to a power loom, described in Pennie (2007) as a "Vertical loom and weft-fork".
1836 - he took out patent No. 6975 for "Roving, spinning and doubling of fibres".
1848 - In partnership with William Baker patent No. 12384 "Railway wheels, and turntables with thrust races."
1852 - He invented the split piston ring, which provided a tight seal of the piston against the cylinder with low friction, described in patent No. 767 as "Metallic piston and piston rings, and hydraulic throttle valve".
1854 - Patent No. 309 for a "Hydraulic hoist for rolling stock."
1854 - Patent No. 408 for "Improvements in welding."
1855 - Patent No. 322 for "Piston ring improvements."
1855 - The Ramsbottom tamper-proof safety valve was introduced; patent No. 1299 was for "Safety valves, and feed water cistern".
1857 - Patent No. 1047 for "Wrought iron rail chair."
1860 - He patented a design for a water trough and water pick-up apparatus (patent No. 1527 "Water trough and scoop"), introducing the first one on 23 June 1860 at Mochdre, Conwy, on the London & North Western Railway's (L&NWR) North Wales Coast Line, midway between and . Finding that the system was at its most efficient at a particular speed, he also invented a speed indicator.
1860 - He introduced the "Displacement lubricator", patent No. 2460.
1863 - Patent No. 924 saw improvements in the design of steam hammers with the invention of a "Duplex steam hammer & cogging mill" which negated the requirement to provide an anvil ten times the weight of the hammer.
1864 - Ramsbottom patented, No. 48, a system for the "Manufacture of hoops and tyres".
1864 - Patent No. 3073 saw "Bessemer Converter improvements".
1865 - Patent No. 89 brought "Steam hammer improvements"; No. 375 introduced "Hammering and rolling machinery"; No. 736 brought improvements to the 1863 patent No. 924; No. 1425 brought improvements to the 1864 patent No. 48; and No. 1975 saw "Improvement processes for hoops and tyres".
1867 - Patent No. 342 for "Supporting ingots for steam hammer" and No. 386 for "Traverser for rolling stock".
1869 - Patent No. 820 was for "Ventilating tunnels", which introduced a mechanical ventilation system that was used in the tunnel between and , Ramsbottom presented a paper to the Institution of Mechanical Engineers on this subject.
1880 - Patent No. 1060 for "Trip gear for steam and gas engines".
The patent application that did not get past the provisional stage of the process was applied for in 1868 for "Communication cord".
Publications
Ramsbottom presented twelve papers to the Institution of Mechanical Engineers which were published in the Institution's Journal as follows:
1847 - On an improved locomotive boiler.
1853 - Description of an improved coking crane for supplying locomotives.
1854 - On an improved piston for steam engines.
1855 - On the construction of packing rings for pistons.
1856 - On an improved safety valve.
1857 - Description of a safety escape pipe for steam boilers.
1861 - Description of a method of supplying water to locomotive tenders whilst running.
1864 - On the improved traversing cranes at Crewe Locomotive Works.
1866 - Description of an improved reversing rolling mill.
1866 - On an improved mode of manufacture of steel tyres.
1867 - Description of a 30-ton horizontal steam hammer.
1871 - On the mechanical ventilation of the Liverpool Passenger Tunnel on the London and North Western Railway.
Appointments
He was president of the Crewe Mechanics Institute from 1857 to 1871. He was a founder member of the Institution of Mechanical Engineers in 1847 and became its president in 1870-71. Ramsbottom became a member of the Institution of Civil Engineers in 1866.
Ramsbottom was appointed a life governor of Owens College in Manchester (now the University of Manchester) where he assisted in expanding the mechanical engineering department. His interest in the College was such that he established a scholarship, tenable for two years to be competed for by young men employed in the locomotive department of the L&NWR.
He was a director of Beyer-Peacock from 1885 where his two sons John and George held positions. He took up a directorship in the L&YR in 1885 after working as a consultant for them on their new locomotive works.
In 1890 he received an honorary degree of Master of Engineering from Dublin University.
Family
He married Mary Peckett, the eldest daughter of William Peckett, a Quaker linen manufacturer of Barnsley, on 29 April 1851. They lived near Longsight depot, where Ramsbottom was based at the time; their son William Henry was born there on 28 February 1852.
Shortly after he moved to Crewe in 1857 his wife, who was still at Longsight, died of "venous congestion of the lungs".
He married again on 12 April 1859 to Mary Anne Goodfellow and they had a son, John Goodfellow, in 1860, daughters Margaret Holt in 1861, Jane in 1863, Mary Edith in 1865, a son George Holt in 1868, daughters Eliza in 1874 and Hannah Mary in 1878. Margaret and Eliza died as children.
References
Notes
Citations
Bibliography
External links
1814 births
1897 deaths
Locomotive builders and designers
English railway mechanical engineers
London and North Western Railway people
English inventors
Tribologists
People from Todmorden | John Ramsbottom (engineer) | Materials_science | 2,199 |
64,372,846 | https://en.wikipedia.org/wiki/88%20Leonis | 88 Leonis is a wide binary star system in the equatorial constellation of Leo, the lion. The system is near the lower limit of visibility to the naked eye with an apparent visual magnitude of 6.27. It is located at a distance of 77 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −4.8 km/s. It has a relatively high proper motion, traversing the celestial sphere at the rate of 0.379 arc seconds per annum.
The primary member of the system, component A, is a yellow-white hued F-type main-sequence star with a stellar classification of F9.5V. It is an estimated 5.7 billion years old and is spinning with a rotation period of 14.3 days. The star has a short magnetic activity cycle that averages around 3.5 years. A second cycle appears to vary over time, lasting 13.7 years at the start of observations then decreasing to 8.6 years over a span of 34 years of measurement. The star has 1.06 times the mass of the Sun and 1.10 times the Sun's radius. It is radiating 1.47 times the luminosity of the Sun from its photosphere at an effective temperature of 6,060 K.
The secondary, component B, is a magnitude 9.22 star at an angular separation of from the primary along a position angle of 326°. It has a class of G5 and 74% of the Sun's mass. The pair share a common proper motion through space with a projected separation of .
References
F-type main-sequence stars
Binary stars
Leo (constellation)
Durchmusterung objects
Leonis, 88
100180
056242
4437 | 88 Leonis | Astronomy | 362 |
75,620,851 | https://en.wikipedia.org/wiki/Gyrotrema%20papillatum | Gyrotrema papillatum is a little-known species of corticolous (bark-dwelling), crustose lichen in the family Harpidiaceae. It is known from a single collection in a lowland rainforest region of Costa Rica.
Taxonomy
Gyrotrema papillatum was described as new to science in 2011 by the German lichenologist Robert Lücking. The type specimen of this lichen was collected by the author in Costa Rica, in the Los Patos section of Corcovado National Park (Puntarenas Province). This location is part of the Osa Conservation Area on the Osa Peninsula, situated approximately southeast of San José and west-southwest of Golfito; there, in a lowland rainforest zone, at an elevation between , it was found growing on the bark of a partially shaded lower tree trunk. At the time of its original publication, the lichen was only known from the type locality. Its species epithet alludes to the nature of its thallus (i.e., covered with papillae, which are small, conically rounded growths).
Description
Gyrotrema papillatum has a grey-green to olive-green thallus adorned with numerous white . The cortex is –made of a dense, tightly interwoven layer of fungal hyphae. The and the medulla beneath often contain clusters of calcium oxalate crystals. The apothecia of G. papillatum are prominent, with a rounded to shape, measuring 1–1.5 mm in diameter. The exposed of the apothecia is a cinnabar-red colour. Around this disc, the margin is to and fused, sharing the same cinnabar-red hue on the inside. This species lacks a , but instead has concentric rings of tissue that separate rings of old hymenia. The youngest hymenium ring is situated closest to the margin. The excipulum is prosoplectenchymatous and , and lacks . The hymenium stands 80–100 μm high, and the are unbranched.
Each ascus contains eight ascospores that have between 5 and 9 septa (internal partitions), measuring 25–30 by 6–8 μm. These ellipsoid spores have thick septa and lens-shaped , are colourless, and have a violet-blue reaction when treated with iodine (amyloid reaction). The apothecial disc contains an unidentified type of anthraquinone substance.
References
Graphidaceae
Lichen species
Lichens described in 2011
Lichens of Central America
Taxa named by Robert Lücking
Species known from a single specimen | Gyrotrema papillatum | Biology | 554 |
1,960,186 | https://en.wikipedia.org/wiki/Error%20catastrophe | Error catastrophe refers to the cumulative loss of genetic information in a lineage of organisms due to high mutation rates. The mutation rate above which error catastrophe occurs is called the error threshold. Both terms were coined by Manfred Eigen in his mathematical evolutionary theory of the quasispecies.
The term is most widely used to refer to mutation accumulation to the point of inviability of the organism or virus, where it cannot produce enough viable offspring to maintain a population. This use of Eigen's term was adopted by Lawrence Loeb and colleagues to describe the strategy of lethal mutagenesis to cure HIV by using mutagenic ribonucleoside analogs.
There was an earlier use of the term introduced in 1963 by Leslie Orgel in a theory for cellular aging, in which errors in the translation of proteins involved in protein translation would amplify the errors until the cell was inviable. This theory has not received empirical support.
Error catastrophe is predicted in certain mathematical models of evolution and has also been observed empirically.
Like every organism, viruses "make mistakes" (or mutate) during replication. The resulting mutations increase biodiversity among the population and can confer advantages such as helping to subvert the ability of a host's immune system to recognise it in a subsequent infection. The more mutations the virus makes during replication, the more likely it is to avoid recognition by the immune system and the more diverse its population will be (see the article on biodiversity for an explanation of the selective advantages of this). However, mutations are not, as a general rule, beneficial, and if a virus accumulates too many harmful mutations, it may lose some of the biological features which have evolved to its advantage, including its ability to reproduce at all.
The question arises: how many mutations can occur during each replication before the population of viruses begins to lose the ability to survive?
Basic mathematical model
Consider a virus which has a genetic identity modeled by a string of ones and zeros (e.g. 11010001011101....). Suppose that the string has fixed length L and that during replication the virus copies each digit one by one, making a mistake with probability q independently of all other digits.
Due to the mutations resulting from erroneous replication, there exist up to 2^L distinct strains derived from the parent virus. Let xi denote the concentration of strain i; let ai denote the rate at which strain i reproduces; and let Qij denote the probability of a virus of strain i mutating to strain j.
Then the rate of change of concentration xj is given by
At this point, we make a mathematical idealisation: we pick the fittest strain (the one with the greatest reproduction rate aj) and assume that it is unique (i.e. that the chosen aj satisfies aj > ai for all i ≠ j); and we then group the remaining strains into a single group. Let the concentrations of the two groups be x , y with reproduction rates a>b, respectively; let Q be the probability of a virus in the first group (x) mutating to a member of the second group (y) and let R be the probability of a member of the second group returning to the first (via an unlikely and very specific mutation). The equations governing the development of the populations are:
We are particularly interested in the case where L is very large, so we may safely neglect R and instead consider:
Then setting z = x/y we have
.
Assuming z achieves a steady concentration over time, z settles down to satisfy
(which is deduced by setting the derivative of z with respect to time to zero).
So the important question is under what parameter values does the original population persist (continue to exist)? The population persists if and only if the steady state value of z is strictly positive. i.e. if and only if:
This result is more popularly expressed in terms of the ratio of a:b and the error rate q of individual digits: set b/a = (1-s), then the condition becomes
Taking a logarithm on both sides and approximating for small q and s one gets
reducing the condition to:
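The display equations in this derivation are missing from this copy of the text. As a guide, the following block sketches how the corresponding steps are commonly written for this standard two-group model; the symbols follow the surrounding prose, but the exact forms are a reconstruction rather than the article's own equations:

```latex
% Probability that a length-L string is copied with no error at all,
% and probability Q of at least one error:
\[
  (1-q)^L , \qquad Q = 1-(1-q)^L .
\]
% Two-group model, neglecting the back-mutation probability R:
\[
  \dot{x} = a\,(1-Q)\,x , \qquad \dot{y} = a\,Q\,x + b\,y .
\]
% With z = x/y, the steady state z = [a(1-Q)-b]/(aQ) is strictly positive iff
\[
  a\,(1-Q) > b
  \quad\Longleftrightarrow\quad
  (1-q)^L > \frac{b}{a} = 1-s .
\]
% Taking logarithms and using ln(1-q) ~ -q, ln(1-s) ~ -s for small q and s:
\[
  L\,q < s , \qquad \text{i.e.} \qquad L < \frac{s}{q} .
\]
```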
RNA viruses which replicate close to the error threshold have a genome size of order 10^4 (10,000) base pairs. Human DNA is about 3.3 billion (3.3×10^9) base units long. This means that the replication mechanism for human DNA must be orders of magnitude more accurate than for the RNA of RNA viruses.
Information-theory based presentation
To avoid error catastrophe, the amount of information lost through mutation must be less than the amount gained through natural selection. This fact can be used to arrive at essentially the same equations as the more common differential presentation.
The information lost can be quantified as the genome length L times the replication error rate q. The probability of survival, S, determines the amount of information contributed by natural selection— and information is the negative log of probability. Therefore, a genome can only survive unchanged when
For example, the very simple genome where L = 1 and q = 1 is a genome with one bit which always mutates. Since Lq is then 1, it follows that S has to be ½ or less. This corresponds to half the offspring surviving; namely the half with the correct genome.
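The survival condition referred to above ("a genome can only survive unchanged when ...") is another missing display formula. Assuming, as the one-bit example suggests, that information is measured in bits, the balance can be written as follows (an assumed reconstruction, not the article's own formula):

```latex
% Information lost to copying errors must not exceed the information
% supplied by selection (S = survival probability, logarithms in bits):
\[
  L\,q \;\le\; -\log_2 S
  \qquad\Longleftrightarrow\qquad
  S \;\le\; 2^{-L q} .
\]
% Worked example from the text: L = 1, q = 1 gives Lq = 1 bit,
% hence S <= 2^{-1} = 1/2, i.e. at most half of the offspring survive.
```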
Applications
Some viruses such as polio or hepatitis C operate very close to the critical mutation rate (i.e. the largest q that L will allow). Drugs have been created to increase the mutation rate of the viruses in order to push them over the critical boundary so that they lose self-identity. However, given the criticism of the basic assumption of the mathematical model, this approach is problematic.
The result introduces a Catch-22 mystery for biologists, Eigen's paradox: in general, large genomes are required for accurate replication (high replication accuracy is achieved with the help of enzymes, which must themselves be encoded), but a large genome requires a high per-digit accuracy (a low q) to persist. Which comes first and how does it happen? An illustration of the difficulty involved: if the per-digit copying accuracy (1 − q) is 0.99, then L can only be of order 100 - a very small string length in terms of genes.
See also
Error threshold
Extinction vortex
Mutational meltdown
References
External links
Error catastrophe and antiviral strategy
Examining the theory of error catastrophe
Pathology
Population genetics | Error catastrophe | Biology | 1,284 |
19,050,295 | https://en.wikipedia.org/wiki/PCA3 | Prostate cancer antigen 3 (PCA3, also referred to as DD3) is a gene that expresses a non-coding RNA. PCA3 is only expressed in human prostate tissue, and the gene is highly overexpressed in prostate cancer. Because of its restricted expression profile, the PCA3 RNA is useful as a tumor marker.
Use as biomarker
The most frequently used biomarker for prostate cancer today is the serum level of prostate-specific antigen (PSA), or derived measurements. However, since PSA is prostate-specific but not cancer-specific, it is an imperfect biomarker. For example, PSA can increase in older men with benign prostatic hyperplasia. Several new biomarkers are being investigated to improve the diagnosis of prostate cancer. Some of these can be measured in urine samples, and it is possible that a combination of several urinary biomarkers will replace PSA in the future.
Compared to serum PSA, PCA3 has a lower sensitivity but a higher specificity and a better positive and negative predictive value. It is independent of prostate volume, whereas PSA is not. It should be measured in the first portion of urine after prostate massage with digital rectal examination.
PCA3 has been shown to be useful to predict the presence of malignancy in men undergoing repeat prostate biopsy. This means that it could be useful clinically for a patient for whom digital rectal examination and PSA suggest possible prostate cancer, but the first prostate biopsy returns a normal result. This occurs in approximately 60% of cases, and on repeat testing, 20-40% have an abnormal biopsy result.
Other uses that are being studied for PCA3 include its correlation with adverse tumor features such as tumor volume, grading (Gleason score) or extracapsular extension. These studies have so far produced conflicting results.
Society and culture
A commercial kit called the Progensa PCA3 test is marketed by the Californian company Gen-Probe. Gen-Probe acquired rights to the PCA3 test from Diagnocure in 2003. In April 2012, Hologic bought Gen-Probe for $3.75 billion in cash.
Discovery
PCA3 was discovered to be highly expressed by prostate cancer cells in 1999.
References
Further reading
External links
Non-coding RNA
RNA
Tumor markers | PCA3 | Chemistry,Biology | 479 |
34,754,337 | https://en.wikipedia.org/wiki/Zyron | Zyron is a registered trademark for specialty gases marketed to the global electronics industry by DuPont.
History
Freon was the original brand name for electronic gases produced and marketed by DuPont. With the depletion of the ozone layer and the subsequent phase-out of chlorofluorocarbon (CFC) compounds, the company rebranded this product line to differentiate it from the refrigerant gases that had been sold under the same Freon brand name.
The name was developed in October 1991 by DuPont employees Paul Bechly and Dr. Nicholas Nazarenko, was first used in commerce on June 12, 1992, and became a registered trademark of DuPont in 1993.
Naming System
The Zyron product naming system was also developed by Bechly and Nazarenko in October 1991. The system is based upon the historical industry numbering system for fluorinated alkanes to identify the chemical compound, followed by an (N) suffix component that specified product purity. The naming system was intentionally not trademarked to allow for industrial adoption.
As an example, for Zyron 116 N5: the 116 represents the compound hexafluoroethane, and N5 represents "5 nines" or 99.999% purity.
As an example, for Zyron 23 N3: the 23 represents the compound trifluoromethane, and N3 represents "3 nines" or 99.9% purity.
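As a compact illustration of the naming convention just described, the sketch below (in Python; the helper function and the small compound table are illustrative only, not part of any DuPont documentation) converts the N-suffix into its "number of nines" purity percentage:

```python
# Illustrative only: maps the "N" purity suffix described above to a percentage.
def purity_from_suffix(suffix: str) -> float:
    """'N5' -> 99.999 (five nines), 'N3' -> 99.9 (three nines), etc."""
    nines = int(suffix.lstrip("Nn"))
    return 100.0 - 10.0 ** (2 - nines)

# Hypothetical lookup table following the historical fluorocarbon numbering:
compounds = {
    "116": "hexafluoroethane",
    "23": "trifluoromethane",
    "318": "octafluorocyclobutane",
}

def describe(product: str) -> str:
    # e.g. "Zyron 116 N5" -> "hexafluoroethane, 99.999% purity"
    _, number, suffix = product.split()
    return f"{compounds[number]}, {purity_from_suffix(suffix)}% purity"

print(describe("Zyron 116 N5"))   # hexafluoroethane, 99.999% purity
print(describe("Zyron 23 N3"))    # trifluoromethane, 99.9% purity
```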
Products
Zyron 116 N5 (hexafluoroethane)
Zyron 318 N4 (octafluorocyclobutane)
Zyron 23 N5 (trifluoromethane)
Zyron 23 N3 (trifluoromethane)
Historical products included: Zyron 14 (tetrafluoromethane), Zyron 32 (difluoromethane), Zyron 125 (pentafluoroethane), and Zyron NF3 (nitrogen trifluoride).
Applications
The primary applications of these gases for the electronics industry are for etching of silicon wafer, and cleaning of chemical vapor deposition chamber tools. These are all plasma-based product applications where the product is essentially destroyed in use.
Alternative products to the Zyron line for various application in electronics include the chemical compounds HCl, BCl3, CF4, ClF3, CH2F2, GeH4, C4F6, NF3, C5F8, PH3, C3H6, SiH4, and WF6.
In the news
Zyron expansion announcement (1999)
Zyron 2nd expansion announcement (2001)
Zyron NF3 announcement (2002)
Zyron team award announcement (2007)
Semicon Taiwan announcement (2010)
See also
Semiconductor device fabrication
Fluorocarbons
Perfluorinated compound
Greenhouse gas
Global warming
References
External links
DuPont Zyron website
Semiconductor Industry Association
United Nations Framework Convention on Climate Change
Paul L Bechly papers (Accession 2723), Hagley Museum and Library
Brand name materials
Organofluorides
Semiconductor device fabrication | Zyron | Materials_science | 635 |
3,358,541 | https://en.wikipedia.org/wiki/European%20Conference%20on%20Artificial%20Intelligence | The biennial European Conference on Artificial Intelligence (ECAI) is the leading conference in the field of Artificial Intelligence in Europe, and is commonly listed together with IJCAI and AAAI as one of the three major general AI conferences worldwide. The conference series has been held without interruption since 1974, originally under the name AISB.
The conferences are held under the auspices of the European Coordinating Committee for Artificial Intelligence (ECCAI) and organized by one of the member societies. The journal AI Communications, sponsored by the same society, regularly publishes special issues in which conference attendees report on the conference.
Publication of a paper in ECAI is considered by some journals to be archival: the paper should be considered equivalent to a journal publication and that the contents of ECAI papers cannot be reformulated as separate journal submissions unless a significant amount of new material is added.
List of ECAI conferences
ECAI-1992 took place in Vienna, Austria.
ECAI-1996 took place in Budapest, Hungary.
ECAI-1998 took place in Brighton, United Kingdom.
ECAI-2000 took place in Berlin, Germany.
ECAI-2004 took place in Valencia, Spain.
ECAI-2006 took place in Riva del Garda, Italy.
ECAI-2008 took place in Patras, Greece.
ECAI-2010 took place in Lisbon, Portugal.
ECAI-2012 took place in Montpellier, France.
ECAI-2014 took place in Prague, Czech Republic.
ECAI-2016 took place in The Hague, Netherlands.
ECAI-2018 took place in Stockholm, Sweden.
ECAI-2020 took place in Santiago de Compostela, Spain.
ECAI-2022 took place in Vienna, Austria.
ECAI-2023 took place in Kraków, Poland.
ECAI-2024 took place in Santiago de Compostela, Spain.
References
External links
Artificial intelligence conferences
Information technology organizations based in Europe | European Conference on Artificial Intelligence | Technology | 385 |
68,097,758 | https://en.wikipedia.org/wiki/HD%20176693 | HD 176693, also known as Kepler-408, is a star with a close orbiting exoplanet in the northern constellation of Draco. It is located at a distance of 291 light years from the Sun based on parallax measurements, but it is drifting closer with a radial velocity of −55 km/s. The star is predicted to come as close as in 1.6 million years. It has an apparent visual magnitude of 8.83, which is too faint to be viewed with the naked eye.
The spectrum of HD 176693 matches an F-type main-sequence star with a stellar classification of F8V. The star is older than the Sun, at . It is slightly and uniformly depleted in heavy elements compared to the Sun, having about 75% of the solar abundance of iron and other heavy elements. HD 176693 is a chromospherically inactive star, although there is weak evidence for tidal spin-up due to star-planet interaction.
HD 176693 is 5% more massive than the Sun and has a 25% larger radius. It is radiating 1.9 times the luminosity of the Sun from its photosphere at an effective temperature of 6,080 K. The star is spinning with a rotation period of 12.89 days. As of 2016, multiplicity surveys have not detected any stellar companions to HD 176693.
Planetary system
In 2014, a transiting sub-Earth planet b was detected on a tight 2.5-day orbit. Initially reported with a relatively low confidence of 97.9%, it was confirmed in 2016.
The planetary orbit is inclined to the equatorial plane of the star by 41.7°. Such strong spin-orbit misalignment is unique for a sub-Earth transiting planet, and needs either additional giant planets in the system or a history of close stellar encounters to explain it. The planet may also be a captured body originating from elsewhere.
References
F-type main-sequence stars
Planetary systems with one confirmed planet
Planetary transit variables
Draco (constellation)
BD+48 2806
1612
J18590868+4825236
176693 | HD 176693 | Astronomy | 443 |
5,102,797 | https://en.wikipedia.org/wiki/Drinker%20paradox | The drinker paradox (also known as the drinker's theorem, the drinker's principle, or the drinking principle) is a theorem of classical predicate logic that can be stated as "There is someone in the pub such that, if he or she is drinking, then everyone in the pub is drinking." It was popularised by the mathematical logician Raymond Smullyan, who called it the "drinking principle" in his 1978 book What Is the Name of this Book?
The apparently paradoxical nature of the statement comes from the way it is usually stated in natural language. It seems counterintuitive both that there could be a person who is causing the others to drink, or that there could be a person such that all through the night that one person were always the last to drink. The first objection comes from confusing formal "if then" statements with causation (see Correlation does not imply causation or Relevance logic for logics that demand relevant relationships between premise and consequent, unlike classical logic assumed here). The formal statement of the theorem is timeless, eliminating the second objection because the person the statement holds true for at one instant is not necessarily the same person it holds true for at any other instant.
The formal statement of the theorem is
∃x ∈ P. [D(x) → ∀y ∈ P. D(y)],
where D is an arbitrary predicate and P is an arbitrary nonempty set.
Proofs
The proof begins by recognizing it is true that either everyone in the pub is drinking, or at least one person in the pub is not drinking. Consequently, there are two cases to consider:
Suppose everyone is drinking. For any particular person, it cannot be wrong to say that if that particular person is drinking, then everyone in the pub is drinking, because everyone is drinking. Since everyone is drinking, that particular person is drinking too, so the implication holds with that person as its witness.
Otherwise at least one person is not drinking. For any nondrinking person, the statement if that particular person is drinking, then everyone in the pub is drinking is formally true: its antecedent ("that particular person is drinking") is false, therefore the statement is true due to the nature of material implication in formal logic, which states that "If P, then Q" is always true if P is false. (These kinds of statements are said to be vacuously true.)
A slightly more formal way of expressing the above is to say that, if everybody drinks, then anyone can be the witness for the validity of the theorem. And if someone does not drink, then that particular non-drinking individual can be the witness to the theorem's validity.
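For readers who prefer symbols, the two cases of the argument above can be written compactly as follows (a straightforward formalisation of the prose proof, not notation taken from the source):

```latex
% Case 1: everyone drinks -- any x in P serves as the witness.
\[
  \forall y \in P.\, D(y)
  \;\Longrightarrow\;
  \exists x \in P.\, \bigl( D(x) \rightarrow \forall y \in P.\, D(y) \bigr)
\]
% Case 2: some w in P does not drink -- D(w) is false, so the implication
% D(w) -> (forall y in P, D(y)) holds vacuously, and w is a witness.
\[
  \neg D(w)
  \;\Longrightarrow\;
  \bigl( D(w) \rightarrow \forall y \in P.\, D(y) \bigr)
\]
```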
Explanation of paradoxicality
The paradox is ultimately based on the principle of formal logic that the statement A → B is true whenever A is false, i.e., any statement follows from a false statement (ex falso quodlibet).
What is important to the paradox is that the conditional in classical (and intuitionistic) logic is the material conditional. It has the property that A → B is true only if B is true or if A is false (in classical logic, but not intuitionistic logic, this is also a sufficient condition).
So as it was applied here, the statement "if they are drinking, everyone is drinking" was taken to be correct in one case, if everyone was drinking, and in the other case, if they were not drinking—even though their drinking may not have had anything to do with anyone else's drinking.
History and variations
Smullyan in his 1978 book attributes the naming of "The Drinking Principle" to his graduate students. He also discusses variants (obtained by replacing D with other, more dramatic predicates):
"there is a woman on earth such that if she becomes sterile, the whole human race will die out." Smullyan writes that this formulation emerged from a conversation he had with philosopher John Bacon.
A "dual" version of the Principle: "there is at least one person such that if anybody drinks, then he does."
As "Smullyan's ‘Drinkers’ principle" or just "Drinkers' principle" it appears in H.P. Barendregt's "The quest for correctness" (1996), accompanied by some machine proofs. Since then it has made regular appearance as an example in publications about automated reasoning; it is sometimes used to contrast the expressiveness of proof assistants.
Non-empty domain
In the setting with empty domains allowed, the drinker paradox must be formulated as follows:
A set P satisfies
∃x ∈ P. [D(x) → ∀y ∈ P. D(y)]
if and only if it is non-empty.
Or in words:
If and only if there is someone in the pub, there is someone in the pub such that, if they are drinking, then everyone in the pub is drinking.
See also
List of paradoxes
Reification (linguistics)
Temporal logic
Relevance logic
References
Predicate logic
Logical paradoxes | Drinker paradox | Mathematics | 996 |
3,187,506 | https://en.wikipedia.org/wiki/List%20of%20podcast%20clients | A podcast client, podcatcher, or podcast app, is a computer program or mobile app used to stream or download podcasts, via an RSS or XML feed.
While podcast clients are best known for streaming and downloading audio podcasts, many can also download video podcasts, newsfeeds, text, and pictures. Some of these podcast clients can also automate the transfer of received audio or video files to a portable media player. Although most include a searchable directory of podcasts (usually populated by either Apple Podcasts or Podcast Index), they also allow users to manually subscribe directly to a podcast RSS feed by providing the URL.
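Subscribing directly to a feed URL, as described above, essentially means fetching the RSS/XML document and reading the audio "enclosures" it lists. A minimal sketch using the third-party Python feedparser library (the feed URL below is a placeholder, not a real podcast) might look like this:

```python
import feedparser  # third-party RSS/Atom parser

def list_episodes(feed_url: str):
    """Fetch a podcast RSS feed and yield (title, audio URL) pairs."""
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        # Podcast episodes carry their audio file as an "enclosure".
        for enclosure in entry.get("enclosures", []):
            yield entry.get("title", "untitled"), enclosure.get("href")

# Placeholder URL, not a real feed:
for title, url in list_episodes("https://example.com/podcast.rss"):
    print(title, "->", url)
```

A full podcast client would add the directory search, periodic re-fetching, and automatic download or transfer steps described in this section on top of this basic feed-reading loop.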
The core concepts were developing since 2000, and the first commercial podcast client software was developed in 2001.
Podcasts were made popular when Apple added podcasts to its iTunes 4.9 software and iPod portable media player in June 2005. Apple Podcasts is currently included in all Apple devices, such as iPhone, iPad and Mac computers.
Podcast clients
See also
Comparison of audio player software
Notes
References
Podcasting software
Podcast clients
Clients | List of podcast clients | Technology | 220 |
15,333,552 | https://en.wikipedia.org/wiki/C-minimal%20theory | In model theory, a branch of mathematical logic, a C-minimal theory is a theory that is "minimal" with respect to a ternary relation C with certain properties. Algebraically closed fields with a (Krull) valuation are perhaps the most important example.
This notion was defined in analogy to the o-minimal theories, which are "minimal" (in the same sense) with respect to a linear order.
Definition
A C-relation is a ternary relation that satisfies the following axioms.
A C-minimal structure is a structure M, in a signature containing the symbol C, such that C satisfies the above axioms and every set of elements of M that is definable with parameters in M is a Boolean combination of instances of C, i.e. of formulas of the form , where b and c are elements of M.
A theory is called C-minimal if all of its models are C-minimal. A structure is called strongly C-minimal if its theory is C-minimal. One can construct C-minimal structures which are not strongly C-minimal.
Example
For a prime number p and a p-adic number a, let |a|p denote its p-adic absolute value. Then the relation defined by is a C-relation, and the theory of Qp with addition and this relation is C-minimal. The theory of Qp as a field, however, is not C-minimal.
References
Model theory | C-minimal theory | Mathematics | 300 |
26,420,043 | https://en.wikipedia.org/wiki/Peritoneal%20washing | Peritoneal washing is a procedure used to look for malignant cells, i.e. cancer, in the peritoneum.
Peritoneal washes are routinely done to stage abdominal and pelvic tumours, e.g. ovarian cancer.
See also
Peritoneal lavage
Additional images
References
External links
Peritoneal washing
Pathology | Peritoneal washing | Biology | 79 |
10,886,610 | https://en.wikipedia.org/wiki/Trivialism | Trivialism is the logical theory that all statements (also known as propositions) are true and, consequently, that all contradictions of the form "p and not p" (e.g. the ball is red and not red) are true. In accordance with this, a trivialist is a person who believes everything is true.
In classical logic, trivialism is in direct violation of Aristotle's law of noncontradiction. In philosophy, trivialism is considered by some to be the complete opposite of skepticism. Paraconsistent logics may use "the law of non-triviality" to abstain from trivialism in logical practices that involve true contradictions.
Theoretical arguments and anecdotes have been offered for trivialism to contrast it with theories such as modal realism, dialetheism and paraconsistent logics.
Overview
Etymology
Trivialism, as a term, is derived from the Latin word trivialis, meaning commonplace, in turn derived from the trivium, the three introductory educational topics (grammar, logic, and rhetoric) expected to be learned by all freemen. In logic, from this meaning, a "trivial" theory is something regarded as defective in the face of a complex phenomenon that needs to be completely represented. Thus, literally, the trivialist theory is something expressed in the simplest possible way.
Theory
In symbolic logic, trivialism may be expressed as the following:
∀p Tp
The above would be read as "given any proposition, it is a true proposition" through universal quantification (∀).
A claim of trivialism may always apply its fundamental truth, otherwise known as a truth predicate:
p ↔ Tp
The above would be read as a "proposition if and only if a true proposition", meaning that all propositions are believed to be inherently proven as true. Without consistent use of this concept, a claim of advocating trivialism may not be seen as genuine and complete trivialism; to claim a proposition is true but deny that it is probably true may be considered inconsistent with the assumed theory.
Taxonomy of trivialisms
Luis Estrada-González in "Models of Possibilism and Trivialism" lists four types of trivialism through the concept of possible worlds, with a "world" being a possibility and "the actual world" being reality. It is theorized a trivialist simply designates a value to all propositions in equivalence to seeing all propositions and their negations as true. This taxonomy is used to demonstrate the different strengths and plausibility of trivialism in this context:
(T0) Minimal trivialism: At some world, all propositions have a designated value.
(T1) Pluralist trivialism: In some worlds, all propositions have a designated value.
(T2) Actualist trivialism: In the actual world, all propositions have a designated value.
(T3) Absolute trivialism: In all worlds, all propositions have a designated value.
Arguments against trivialism
The consensus among the majority of philosophers is descriptively a denial of trivialism, termed non-trivialism or anti-trivialism. This is due to trivialism being unable to produce a sound argument through the principle of explosion, and to its being considered an absurdity (reductio ad absurdum).
Aristotle
Aristotle's law of noncontradiction and other arguments are considered to be against trivialism. Luis Estrada-González in "Models of Possiblism and Trivialism" has interpreted Aristotle's Metaphysics Book IV as such: "A family of arguments between 1008a26 and 1007b12 of the form 'If trivialism is right, then X is the case, but if X is the case then all things are one. But it is impossible that all things are one, so trivialism is impossible.' ... these Aristotelian considerations are the seeds of virtually all subsequent suspicions against trivialism: Trivialism has to be rejected because it identifies what should not be identified, and is undesirable from a logical point of view because it identifies what is not identical, namely, truth and falsehood."
Priest
Graham Priest considers trivialism untenable: "a substantial case can be made for dialetheism; belief in [trivialism], though, would appear to be grounds for certifiable insanity".
He formulated the "law of non-triviality" as a replacement for the law of non-contradiction in paraconsistent logic and dialetheism.
Arguments for trivialism
There are theoretical arguments for trivialism argued from the position of a devil's advocate:
Argument from possibilism
Paul Kabay has argued for trivialism in "On the Plenitude of Truth" from the following:
Above, possibilism (modal realism; related to possible worlds) is the oft-debated theory that every proposition is possible. With this assumed to be true, trivialism can be assumed to be true as well according to Kabay.
Paradoxes
The liar's paradox, Curry's paradox, and the principle of explosion all can be asserted as valid and not required to be resolved and used to defend trivialism.
Philosophical implications
Comparison to skepticism
In "On the Plenitude of Truth", Paul Kabay compares trivialism to schools of philosophical skepticism, such as Pyrrhonism, that seek to attain a form of ataraxia, or state of imperturbability; it is purported that the figurative trivialist inherently attains this state. This is claimed to be justified by the figurative trivialist seeing every state of affairs as true, even in a state of anxiety. Once every state of affairs is accepted as true, the trivialist is free from any further anxieties regarding whether any state of affairs is true.
Kabay compares the Pyrrhonian skeptic to the figurative trivialist and claims that as the skeptic reportedly attains a state of imperturbability through a suspension of belief, the trivialist may attain such a state through an abundance of belief.
In this case—and according to independent claims by Graham Priest—trivialism is considered the complete opposite of skepticism. However, insofar as the trivialist affirms all states of affairs as universally true, the Pyrrhonist neither affirms nor denies the truth (or falsity) of such affairs.
Impossibility of action
It is asserted by both Priest and Kabay that it is impossible for a trivialist to truly choose and thus act. Priest argues this by the following in Doubt Truth to Be a Liar: "One cannot intend to act in such a way as to bring about some state of affairs, s, if one believes s already to hold. Conversely, if one acts with the purpose of bringing s about, one cannot believe that s already obtains." Due to their suspension of determination upon striking equipollence between claims, the Pyrrhonist has also remained subject to apraxia charges.
Advocates
Paul Kabay, an Australian philosopher, in his book A Defense of Trivialism has argued that various philosophers in history have held views resembling trivialism, although he stops short of calling them trivialists. He mentions various pre-Socratic Greek philosophers as philosophers holding views resembling trivialism. He mentions that Aristotle in his book Metaphysics appears to suggest that Heraclitus and Anaxagoras advocated trivialism. He quotes Anaxagoras as saying that all things are one. Kabay also suggests Heraclitus' ideas are similar to trivialism because Heraclitus believed in a union of opposites, shown in such quotes as "the way up and down is the same". Kabay also mentions a fifteenth century Roman Catholic cardinal, Nicholas of Cusa, stating that what Cusa wrote in De Docta Ignorantia is interpreted as stating that God contained every fact, which Kabay argues would result in trivialism, but Kabay admits that mainstream Cusa scholars would not agree with interpreting Cusa as a trivialist. Kabay also mentions Spinoza as a philosopher whose views resemble trivialism. Kabay argues Spinoza was a trivialist because Spinoza believed everything was made of one substance which had infinite attributes. Kabay also mentions Hegel as a philosopher whose views resemble trivialism, quoting Hegel as stating in The Science of Logic "everything is inherently contradictory."
Azzouni
Jody Azzouni is a purported advocate of trivialism in his article The Strengthened Liar by claiming that natural language is trivial and inconsistent through the existence of the liar paradox ("This sentence is false"), and claiming that natural language has developed without central direction. Azzouni implies that every sentence in any natural language is true. "According to Azzouni, natural language is trivial, that is to say, every sentence in natural language is true...And, of course, trivialism follows straightforwardly from the triviality of natural language: after all, 'trivialism is true' is a sentence in natural language."
Anaxagoras
The Greek philosopher Anaxagoras is suggested as a possible trivialist by Graham Priest in his 2005 book Doubt Truth to Be a Liar. Priest writes, "He held that, at least at one time, everything was all mixed up so that no predicate applied to any one thing more than a contrary predicate."
Anti-trivialism
Luis Estrada-González in "Models of Possibilism and Trivialism" lists eight types of anti-trivialism (or non-trivialism) through the use of possible worlds; a schematic formalization follows the list:
(AT0) Actualist minimal anti-trivialism: In the actual world, some propositions do not have a value of true or false.
(AT1) Actualist absolute anti-trivialism: In the actual world, all propositions do not have a value of true or false.
(AT2) Minimal anti-trivialism: In some worlds, some propositions do not have a value of true or false.
(AT3) Pointed anti-trivialism (or minimal logical nihilism): In some worlds, every proposition does not have a value of true or false.
(AT4) Distributed anti-trivialism: In every world, some propositions do not have a value of true or false.
(AT5) Strong anti-trivialism: Some propositions do not have a value of true or false in every world.
(AT6) Super anti-trivialism (or moderate logical nihilism): All propositions do not have a value of true or false at some world.
(AT7) Absolute anti-trivialism (or maximal logical nihilism): All propositions do not have a value of true or false in every world.
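One way to make the differences between these grades explicit is to read them as quantified claims over worlds and propositions. A minimal sketch, using notation introduced here purely for illustration (a set of worlds W with actual world @, a set of propositions P, and V_w(p) for the truth value of p at world w), not Estrada-González's own symbols:

\begin{align*}
\textbf{(AT0)} &\quad \exists p \in P:\ V_{@}(p) \notin \{T, F\} \\
\textbf{(AT1)} &\quad \forall p \in P:\ V_{@}(p) \notin \{T, F\} \\
\textbf{(AT2)} &\quad \exists w \in W\ \exists p \in P:\ V_{w}(p) \notin \{T, F\} \\
\textbf{(AT3)} &\quad \exists w \in W\ \forall p \in P:\ V_{w}(p) \notin \{T, F\} \\
\textbf{(AT4)} &\quad \forall w \in W\ \exists p \in P:\ V_{w}(p) \notin \{T, F\} \\
\textbf{(AT5)} &\quad \exists p \in P\ \forall w \in W:\ V_{w}(p) \notin \{T, F\} \\
\textbf{(AT6)} &\quad \forall p \in P\ \exists w \in W:\ V_{w}(p) \notin \{T, F\} \\
\textbf{(AT7)} &\quad \forall w \in W\ \forall p \in P:\ V_{w}(p) \notin \{T, F\}
\end{align*}

On this reading the grades differ only in which quantifiers are used and how they are nested; for example, (AT4) and (AT5) both mix "every world" with "some proposition", but (AT5) requires a single proposition that is valueless in every world, whereas (AT4) allows the valueless proposition to vary from world to world.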
See also
Discordianism
Doublethink
Factual relativism
Fatalism
Anekantavada
Syādvāda
Law of excluded middle
Laws of thought
Monism
Moral relativism
Principle of bivalence
References
Further reading
Concepts in logic
Non-classical logic
Philosophical schools and traditions
Theories of deduction
Theories of truth | Trivialism | Mathematics | 2,265 |
29,648 | https://en.wikipedia.org/wiki/Single-lens%20reflex%20camera | A single-lens reflex camera (SLR) is a camera that typically uses a mirror and prism system (hence "reflex" from the mirror's reflection) that permits the photographer to view through the lens and see exactly what will be captured. With twin lens reflex and rangefinder cameras, the viewed image could be significantly different from the final image. When the shutter button is pressed on most SLRs, the mirror flips out of the light path, allowing light to pass through to the light receptor and the image to be captured.
History
Until the development of the SLR, all cameras with viewfinders had two optical light paths: one through the lens to the film, and another positioned above (TLR or twin-lens reflex) or to the side (rangefinder). Because the viewfinder and the film lens cannot share the same optical path, the viewing lens is aimed to intersect with the film lens at a fixed point somewhere in front of the camera. This is not problematic for pictures taken at a middle or longer distance, but parallax causes framing errors in close-up shots. Moreover, with these cameras it is not easy to focus a fast lens accurately when it is opened to wider apertures (such as in low light or while using low-speed film).
Most SLR cameras permit upright and laterally correct viewing through use of a roof pentaprism situated in the optical path between the reflex mirror and viewfinder. Light, which comes both horizontally and vertically inverted after passing through the lens, is reflected upwards by the reflex mirror, into the pentaprism where it is reflected twice to correct the inversions caused by the lens, and align the image with the viewfinder. When the shutter is released, the mirror moves out of the light path, and the light shines directly onto the film (or in the case of a DSLR, the CCD or CMOS imaging sensor). Exceptions to the moving mirror system include the Canon Pellix and Sony SLT cameras, along with several special-purpose high-speed cameras (such as the Canon EOS-1N RS), whose mirror was a fixed beamsplitting pellicle.
Focus can be adjusted manually by the photographer or automatically by an autofocus system. The viewfinder can include a matte focusing screen located just above the mirror system to diffuse the light. This permits accurate viewing, composing and focusing, especially useful with interchangeable lenses.
Up until the 1990s, SLR was the most advanced photographic preview system available, but the recent development and refinement of digital imaging technology with an on-camera live LCD preview screen has overshadowed SLR's popularity. Nearly all inexpensive compact digital cameras now include an LCD preview screen allowing the photographer to see what the CCD is capturing. However, SLR is still popular in high-end and professional cameras because they are system cameras with interchangeable parts, allowing customization. They also have far less shutter lag, allowing photographs to be timed more precisely. Also the pixel resolution, contrast ratio, refresh rate, and color gamut of an LCD preview screen cannot compete with the clarity and shadow detail of a direct-viewed optical SLR viewfinder.
Large format SLR cameras were probably first marketed with the introduction of C.R. Smith's Monocular Duplex (U.S., 1884). SLRs for smaller exposure formats were launched in the 1920s by several camera makers. The first 35 mm SLR available to the mass market, Leica's PLOOT reflex housing paired with a 200 mm f/4.5 lens and a 35 mm rangefinder camera body, debuted in 1935. The Soviet Спорт ("Sport"), also using a 24 mm × 36 mm image size, was prototyped in 1934 and went to market in 1937. K. Nüchterlein's Kine Exakta (Germany, 1936) was the first integrated 35 mm SLR to enter the market. Additional Exakta models, all with waist-level finders, were produced up to and during World War II. Another ancestor of the modern SLR camera was the Swiss-made Alpa, which was innovative and influenced the later Japanese cameras. The first eye-level SLR viewfinder was patented in Hungary on August 23, 1943, by Jenő Dulovits, who then designed the first 35 mm camera with one, the Duflex, which used a system of mirrors to provide a laterally correct, upright image in the eye-level viewfinder. The Duflex, which went into serial production in 1948, was also the world's first SLR with an instant-return (a.k.a. autoreturn) mirror.
The first commercially produced SLR to employ a roof pentaprism was the Italian Rectaflex A.1000, shown in full working condition at the Milan fair in April 1948 and produced from September of that year, thus reaching the market one year before the East German Zeiss Ikon VEB Contax S, which was announced on May 20, 1949 and produced from September 1949.
The Japanese adopted and further developed the SLR. In 1952, Asahi developed the Asahiflex and in 1954, the Asahiflex IIB. In 1957, the Asahi Pentax combined the fixed pentaprism and the right-hand thumb wind lever. Nikon, Canon and Yashica introduced their first SLRs in 1959 (the F, Canonflex, and Pentamatic, respectively).
Digital SLRs
Canon, Nikon and Pentax have all developed digital SLR cameras (DSLRs) using the same lens mounts as their respective film SLR cameras. Konica Minolta did the same, and after buying Konica Minolta's camera division in 2006, Sony continues to use the Minolta AF lens mount in its DSLRs, including cameras built around a semi-transparent fixed mirror. Samsung builds DSLRs based on the Pentax lens mount. Olympus, on the other hand, chose to create a new digital-only Four Thirds System SLR standard, later adopted by Panasonic and Leica.
Contax came out with a DSLR model, the Contax N-Digital. This model was too late and too expensive to be competitive with other camera manufacturers. The Contax N-digital was the last Contax to use that maker's lens system, and the camera, while having impressive features such as a full-frame sensor, was expensive and lacked sufficient write-speed to the memory card for it to be seriously considered by some professional photographers.
The digital single-lens reflex camera largely replaced the film SLR design in convenience, sales, and popularity at the start of the 21st century.
Optical components
A cross-section (or 'side-view') of the optical components of a typical SLR camera shows how the light passes through the lens assembly, is reflected by the mirror placed at a 45-degree angle, and is projected on the matte focusing screen. Via a condensing lens and internal reflections in the roof pentaprism the image appears in the eyepiece. When an image is taken, the mirror moves upwards from its resting position in the direction of the arrow, the focal plane shutter opens, and the image is projected onto the film or sensor in exactly the same manner as on the focusing screen.
This feature distinguishes SLRs from other cameras as the photographer sees the image composed exactly as it will be captured on the film or sensor.
Most 35 mm SLRs use a roof pentaprism or penta-mirror to direct the light to the eyepiece, first used on the 1948 Duflex constructed by Jenő Dulovits and patented August 1943 (Hungary). With this camera also appeared the first instant-return mirror.
The first Japanese pentaprism SLR was the 1955 Miranda T, followed by the Asahi Pentax, Minolta SR-2, Zunow, Nikon F and the Yashica Pentamatic. Some SLRs offered removable pentaprisms with optional viewfinder capabilities, such as the waist-level finder, the interchangeable sports finders used on the Canon F1 and F1n; the Nikon F, F2, F3, F4 and F5; and the Pentax LX.
Another prism design was the porro prism system used in the Olympus Pen F, Pen FT, and Pen FV half-frame 35 mm SLR cameras. This was later used on the Olympus EVOLT E-3x0 series, the Leica Digilux 3 and the Panasonic DMC-L1.
A right-angle finder is available that slips onto the eyepiece of most SLRs and D-SLRs and allows viewing through a waist-level viewfinder. There is also a finder that provides EVF remote capability.
Shutter mechanisms
Almost all contemporary SLRs use a focal-plane shutter located in front of the film plane, which prevents the light from reaching the film even if the lens is removed, except when the shutter is actually released during the exposure. There are various designs for focal plane shutters. Early focal-plane shutters designed from the 1930s onwards usually consisted of two curtains that travelled horizontally across the film gate: an opening shutter curtain followed by a closing shutter curtain. During fast shutter speeds, the focal-plane shutter would form a 'slit' whereby the second shutter curtain was closely following the first opening shutter curtain to produce a narrow, vertical opening, with the shutter slit moving horizontally. The slit would get narrower as shutter speeds were increased. Initially these shutters were made from a cloth material (which was in later years often rubberised), but some manufacturers used other materials instead. Nippon Kōgaku (now Nikon Corporation), for example, used titanium foil shutters for several of their flagship SLR cameras, including the Nikon F, F2, and F3.
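To make the moving-slit principle concrete, the effective exposure at any point on the film equals the slit width divided by the curtain velocity, so faster speeds simply mean a narrower slit. A minimal sketch in Python, with the 36 mm gate width taken from the text and the curtain transit time chosen purely for illustration:

# Illustrative model of a horizontally travelling focal-plane shutter.
# Assumption (hypothetical): the curtains take 1/60 s to cross the 36 mm gate.
TRAVEL_MM = 36.0
CURTAIN_TRANSIT_S = 1 / 60

def slit_width_mm(shutter_speed_s):
    """Slit width that exposes each point of the film for the requested time."""
    curtain_velocity_mm_per_s = TRAVEL_MM / CURTAIN_TRANSIT_S
    return curtain_velocity_mm_per_s * shutter_speed_s

for speed in (1 / 60, 1 / 250, 1 / 1000):
    print(f"1/{round(1 / speed)} s -> slit ~ {slit_width_mm(speed):.1f} mm")
# Prints roughly 36.0 mm, 8.6 mm and 2.2 mm. Once the slit is narrower than the
# full gate, a flash burst would expose only a strip of the frame, which is why
# electronic flash must sync at or below the speed at which the gate is fully open.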
Other focal-plane shutter designs, such as the Copal Square, travelled vertically; the shorter travelling distance of 24 millimetres (as opposed to 36 mm horizontally) meant that minimum exposure and flash synchronisation times could be reduced. These shutters are usually manufactured from metal and use the same moving-slit principle as horizontally travelling shutters. They differ, though, in usually being formed of several slats or blades rather than single curtains as with horizontal designs, as there is rarely enough room above and below the frame for a one-piece shutter. Vertical shutters became very common in the 1980s (though Konica, Mamiya, and Copal first pioneered their use in the 1950s and 1960s) and are used almost exclusively in new cameras today. Nikon used Copal-made vertical-plane shutters in their Nikomat/Nikkormat range, enabling faster x-sync speeds than the horizontal focal-plane shutters of the time allowed. Later, Nikon again pioneered the use of titanium for vertical shutters, using a special honeycomb pattern on the blades to reduce their weight and achieve record maximum shutter and flash-sync speeds in 1982. Nowadays most such shutters are manufactured from cheaper aluminium (though some high-end cameras use materials such as carbon-fibre and Kevlar).
Another shutter system is the leaf shutter, whereby the shutter is constructed of diaphragm-like blades and can be situated either within the lens or behind the lens. If the shutter is part of a lens assembly, some other mechanism is required to ensure that no light reaches the film between exposures. Examples of behind-the-lens leaf shutters are found in the 35 mm SLRs produced by Kodak, with their Retina Reflex camera line; Topcon, with their Auto 100; and Kowa, with their SE-R and SET-R reflexes. A primary example of a medium-format SLR with a between-the-lens leaf shutter system is the Hasselblad line, with the 500C, 500C/M, 500EL/M (a motorized Hasselblad) and other models (producing a 6 cm square negative). Hasselblads use an auxiliary shutter blind situated behind the lens mount and the mirror system to prevent the fogging of film. Other medium-format SLRs also using leaf shutters include the now-discontinued Zenza Bronica camera systems such as the Bronica ETRS and ETRSi (both producing a 6 × 4.5 cm image), the SQ and SQ-Ai (producing a 6 × 6 cm image like the Hasselblad), and the Zenza Bronica GS system (6 × 7 cm). Certain Mamiya medium-format SLRs, discontinued camera systems such as the Kowa 6, and a few other camera models also used between-the-lens leaf shutters in their lens systems. Thus, any time a photographer purchased one of these lenses, that lens included a leaf shutter in its mount. Because leaf shutters synchronize electronic flash at all shutter speeds, including the fastest, cameras using leaf shutters were more desirable to studio photographers who used sophisticated studio electronic flash systems. Some manufacturers of medium-format 120 film SLR cameras also made leaf-shutter lenses for their focal-plane-shutter models. Rollei made at least two such lenses for their Rolleiflex SL-66, a medium-format focal-plane-shutter SLR. Rollei later switched to a camera system of leaf-shutter design (e.g., the 6006 and 6008 reflexes), and their current medium-format SLRs are all of the between-the-lens shutter design.
Further developments
Since the technology became widespread in the 1970s, SLRs have become the main photographic instrument used by dedicated amateur photographers and professionals. Some photographers of static subjects (such as architecture, landscape, and some commercial subjects), however, prefer view cameras because of the capability to control perspective. With a triple-extension bellows 4" × 5" camera such as the Linhof SuperTechnika V, the photographer can correct certain distortions such as "keystoning", where the image 'lines' converge (i.e., photographing a building by pointing a typical camera upward to include the top of the building). Perspective correction lenses are available in the 35 mm and medium formats to correct this distortion with film cameras, and it can also be corrected after the fact with photo software when using digital cameras. The photographer can also extend the bellows to its full length, tilt the front standard and perform photomacrography (commonly known as 'macro photography'), producing a sharp image with depth-of-field without stopping down the lens diaphragm.
Film formats
Early SLRs were built for large format photography, but this film format has largely lost favor among professional photographers. SLR cameras have been produced for most film formats as well as for digital formats. Most film-based SLRs use the 35 mm format, as this film format offers a variety of emulsions and film speeds, usable image quality, and a good market cost. 35 mm film comes in a variety of exposure lengths: 20-, 24- and 36-exposure rolls. Medium format SLRs provide a higher-quality image with a negative that can be more easily retouched than the smaller 35 mm negative, when this capability is required.
A small number of SLRs were built for APS such as the Canon IX series and the Nikon Pronea cameras. SLRs were also introduced for film formats as small as Kodak's 110, such as the Pentax Auto 110, which had interchangeable lenses.
The Narciss (Russian: Нарцисс) is an all-metal 16 mm subminiature single-lens reflex camera made by the Soviet optics firm Krasnogorsky Mekhanichesky Zavod (KMZ) between 1961 and 1965.
Common features
Other features found on many SLR cameras include through-the-lens (TTL) metering and sophisticated flash control referred to as "dedicated electronic flash". In a dedicated system, once the dedicated electronic flash is inserted into the camera's hot shoe and turned on, there is then communication between camera and flash. The camera's synchronization speed is set, along with the aperture. Many camera models measure the light that reflects off of the film plane, which controls the flash duration of the electronic flash. This is denoted TTL flash metering.
Some electronic flash units can send out several short bursts of light to aid the autofocus system or for wireless communication with off-camera flash units. A pre-flash is often used to determine the amount of light that is reflected from the subject, which sets the duration of the main flash at time of exposure. Some cameras also employ automatic fill-flash, where the flash light and the available light are balanced. While these capabilities are not unique to the SLR, manufacturers included them early on in the top models, whereas the best rangefinder cameras adopted such features later.
Design considerations
Many of the advantages of SLR cameras derive from viewing and focusing the image through the attached lens. Most other types of cameras do not have this function; subjects are seen through a viewfinder that is near the lens, making the photographer's view different from that of the lens. SLR cameras provide photographers with precision; they provide a viewing image that will be exposed onto the negative exactly as it is seen through the lens. There is no parallax error, and exact focus can be confirmed by eye—especially in macro photography and when photographing using long focus lenses. The depth of field may be seen by stopping down to the attached lens aperture, which is possible on most SLR cameras except for the least expensive models. Because of the SLR's versatility, most manufacturers have a vast range of lenses and accessories available for them.
Compared to most fixed-lens compact cameras, the most commonly used and inexpensive SLR lenses offer a wider aperture range and a larger maximum aperture, particularly in the case of a typical 50 mm normal lens. This allows photographs to be taken in lower light conditions without flash, and allows a narrower depth of field, which is useful for blurring the background behind the subject, making the subject more prominent. "Fast" lenses are commonly used in theater photography, portrait photography, surveillance photography, and all other photography requiring a large maximum aperture.
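As a rough worked example of what a larger maximum aperture buys (the two f-numbers below are illustrative choices, since the article does not fix specific values), the amount of light reaching the film or sensor scales with the inverse square of the f-number:

E_{f/1.8} / E_{f/4} = \left(\frac{4}{1.8}\right)^{2} \approx 4.9

So a lens opened to f/1.8 passes roughly five times as much light as one limited to f/4, permitting correspondingly faster shutter speeds in dim light, at the cost of a shallower depth of field.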
The variety of lenses also allows for the camera to be used and adapted in many different situations. This provides the photographer with considerably more control (i.e., how the image is viewed and framed) than would be the case with a view camera. In addition, some SLR lenses are manufactured with extremely long focal lengths, allowing a photographer to be a considerable distance away from the subject and yet still expose a sharp, focused image. This is particularly useful if the subject includes dangerous animals (e.g., wildlife); the subject prefers anonymity to being photographed; or else, the photographer's presence is unwanted (e.g., celebrity photography or surveillance photography). Practically all SLR and DSLR camera bodies can also be attached to telescopes and microscopes via an adapter tube to further enhance their imaging capabilities.
In most cases, single-lens reflex cameras cannot be made as small or as light as other camera designs—such as rangefinder cameras, autofocus compact cameras and digital cameras with electronic viewfinders (EVF)—owing to the mirror box and pentaprism/pentamirror. The mirror box also prevents lenses with deeply recessed rear elements from being mounted close to the film or sensor unless the camera has a mirror lockup feature; this means that simple designs for wide angle lenses cannot be used. Instead, larger and more complex retrofocus designs are required.
The SLR mirror 'blacks out' the viewfinder image during the exposure. In addition, the movement of the reflex mirror takes time, limiting the maximum shooting speed. The mirror system can also cause noise and vibration. Partially reflective (pellicle) fixed mirrors avoid these problems and have been used in a very few designs, including the Canon Pellix and the Canon EOS-1N RS, but these designs introduce their own problems. These pellicle mirrors reduce the amount of light travelling to the film plane or sensor and can also distort the light passing through them, resulting in a less sharp image. To avoid the noise and vibration, many professional cameras offer a mirror lock-up feature; however, this feature totally disables the SLR's automatic focusing ability. Electronic viewfinders have the potential to give the 'viewing experience' of a DSLR (through-the-lens viewing) without many of the disadvantages. More recently, Sony has resurrected the pellicle mirror concept in their "single-lens translucent" (SLT) range of cameras.
SLRs vary widely in their construction and typically have bodies made of plastic or magnesium. Most manufacturers do not cite durability specifications, but some report shutter life expectancies for professional models. For instance, the Canon EOS 1Ds MkII is rated for 200,000 shutter cycles and the Nikon D3 is rated for 300,000 with its exotic carbon fiber/kevlar shutter. Because many SLRs have interchangeable lenses, there is a tendency for dust, sand and dirt to get into the main body of the camera through the mirror box when the lens is removed, thus dirtying or even jamming the mirror movement mechanism or the shutter curtain mechanism itself. In addition, these particles can also jam or otherwise hinder the focusing feature of a lens if they enter into the focusing helicoid. The problem of sensor cleaning has been somewhat reduced in DSLRs as some cameras have a built-in sensor cleaning unit.
The price of SLRs in general also tends to be somewhat higher than that of other types of cameras, owing to their internal complexity. This is compounded by the expense of additional components, such as flashes or lenses. The initial investment in equipment can be prohibitive enough to keep some casual photographers away from SLRs, although the market for used SLRs has become larger particularly as photographers migrate to digital systems.
Future
The digital single-lens reflex camera largely replaced the film SLR in convenience, sales, and popularity at the start of the 21st century. DSLRs were the marketing favorite among advanced amateur and professional photographers through the first two decades of the 2000s. Around 2010, the mirrorless technology used in point-and-shoot cameras made its way to interchangeable-lens cameras and slowly replaced DSLR technology.
As of 2022, all the major camera brands except Pentax had ceased development and production of DSLRs and moved on to mirrorless systems. These systems offer multiple advantages to the photographer with regard to autofocus, as well as the ability to update lens designs, owing to the reduced distance between the back of the lens and the sensor that results from the removal of the mirror.
Film-based SLRs are still used by a niche market of enthusiasts and format lovers.
See also
Asahi Pentax
Fujifilm
Lenses for SLR and DSLR cameras
Scheimpflug principle
Zeiss Ikon
References
Further reading
Spira, S. F. The History of Photography as Seen through the Spira Collection. New York: Aperture, 2001.
Antonetto, Marco: "Rectaflex – The Magic Reflex". Nassa Watch Gallery, 2002.
External links
Photography in Malaysia's Contax History, Part II.
'Innovative Cameras' by Massimo Bertacchi
Rolleiflex SL 66 (Rolleiflex SL 66 Medium Format Single Lens Reflex camera).
Cameras by type | Single-lens reflex camera | Technology | 4,891 |
17,127,473 | https://en.wikipedia.org/wiki/6N14P | The 6N14P (Russian: 6Н14П) is a miniature Russian-made medium gain dual triode vacuum tube, intended for service as a low-noise cascode amplifier at HF through VHF frequencies. It is a direct equivalent of ECC84 and 6CW7 vacuum tubes. The construction of the tube is asymmetrical, with the control grid of the first triode section (pin no. 2) being internally connected to the internal RF shielding plate, thereby making the first section more suitable for common grid operation.
Basic data
Uf = 6.3 V (filament voltage), If = 350 mA (filament current)
μ = 25 (amplification factor)
Ia = 10.5 mA (anode current)
S = 6.8 mA/V (transconductance)
Pa = 1.5 W (anode dissipation)
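These figures also determine the triode's internal anode resistance, which is not stated above; it follows from the standard small-signal triode relationship between amplification factor and transconductance (the relationship is standard, but the derived value is not a datasheet figure):

r_a = \frac{\mu}{S} = \frac{25}{6.8\ \text{mA/V}} \approx 3.7\ \text{k}\Omega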
History of use
In the Soviet Union, the 6N14P found application in the tuner sections of commercial TV sets as the cascode RF front-end amplifier stage, as well as in special equipment and, to a lesser extent, in more expensive FM receivers and television sets of the 1960s. However, in the majority of medium- and low-grade Soviet-produced FM radios a more general-purpose 6N3P (2C51) vacuum tube was used, as the lower-frequency OIRT FM band used in the USSR (65.8 to 74 MHz) placed less stringent noise-figure requirements on the receiver design and its components than the higher-frequency Western FM broadcast band (87.5–108 MHz) used in most of the rest of the world.
Currently, like many other vacuum tubes, the 6N14P has found some use among DIY audio enthusiasts.
See also
6N3P
6DJ8
6N24P
External links
6N14P tube datasheet
ECC84 tube datasheet
ECC84 at the National Valve Museum
Vacuum tubes | 6N14P | Physics | 369 |
25,078,285 | https://en.wikipedia.org/wiki/Anilingus | Anilingus (also spelled analingus) is an oral and anal sex act (anal–oral contact or anal–oral sex) in which one person stimulates the anus of another by using their tongue or lips.
The anus has a relatively high concentration of nerve endings and can be an erogenous zone, and so the recipient may receive pleasure from external anal stimulation, whereas pleasure for the giver is usually based more on the principle of the act. People may engage in anilingus for its own sake, before anal penetration, or as part of foreplay. People of all sexual orientations may participate in the act. Studies confirm anilingus to be one of the sexual practices between women, though it is practiced only by a minority.
Safer sex practices generally revolve around hygiene so as to prevent fecal–oral route transmission of diseases. Extra precautions include STI testing, dental dams, or enemas.
Slang terms
Anilingus is also known in slang terminology as rimming (or rim job), eating ass, or tossing the salad. The origin of "tossing a salad" is not entirely known, but it is used in prison slang in the United States prison system, where performing anilingus on another inmate is one way of paying dues or gaining favor.
Etymology
The term anilingus comes from the Latin words anus and -lingus, from lingere, meaning "to lick" and is based on the pattern of cunnilingus. It entered English through the 1899 F. J. Rebman translation of Edition 10 of sexologist Richard von Krafft-Ebing's 1886 book Psychopathia sexualis.
Practice
Anilingus can involve a variety of techniques to stimulate the anus, including use of the lips or licking; it may also involve the tongue moving around the edge of the anus or up and down the insides of the cheeks of the buttocks. Insertion of the tongue into the rectum is another possible technique.
Health risks and prevention
Health risk
Anilingus has potential health risks arising from the oral contact with feces. Diseases which may be transmitted by contact with feces include: bacterial diseases such as shigellosis (bacillary dysentery); viral systemic diseases including hepatitis A, hepatitis B, hepatitis C, poliomyelitis, human papillomavirus (HPV) and herpes simplex virus; parasites, including intestinal parasites; and infections and inflammations such as chlamydia infection, gastroenteritis, conjunctivitis, gonorrhea, lymphogranuloma venereum and other sexually transmitted infections.
Applying the mouth to the genitals immediately after applying it to the anus can introduce the bacterium Escherichia coli ("E. coli") into the urethra, leading to a urinary tract infection. HIV/AIDS is not believed to be easily transmitted through anilingus.
Anilingus with a number of casual partners increases the health risks associated with the practice. Generally, people carrying infections that may be passed on during anilingus appear healthy. Parasites may be in the feces if undercooked meat was consumed. The feces contain traces of hepatitis A only if the infected person has eaten contaminated food.
Prevention
Safe sex practices may include thorough washing of the anal region before anilingus to wash away most external fecal particles and reduce the risk of contraction of fecal-sourced infection. An enema can also reduce the risk of direct fecal contact. A dental dam may also be used, and another safe sex practice is to avoid unprotected sex which involves fellatio after anal intercourse.
If the receiving partner has wounds or open sores on the genitals, or if the giving partner has wounds or open sores on or in the mouth, or bleeding gums, this poses an increased risk of sexually transmitted infections. Brushing the teeth, flossing, undergoing dental work, and eating crunchy foods (such as potato chips) relatively soon before or after performing anilingus also increases the risk of transmission, because all of these activities can cause small scratches on the inside of the lips, cheeks, and palate. These wounds, even when they are microscopic, increase the chances of contracting sexually transmitted infections that can be transmitted orally under these conditions.
See also
Anal eroticism
Anal fingering
Facesitting
Fecal bacteriotherapy
Felching
References
External links
Columbia University article on health risks of anilingus
Anal eroticism
Anal sex
Oral eroticism
Pornography terminology
Sexual acts | Anilingus | Biology | 935 |
515,683 | https://en.wikipedia.org/wiki/Carbanion | In organic chemistry, a carbanion is an anion in which carbon is negatively charged.
Formally, a carbanion is the conjugate base of a carbon acid:
R3C–H + B− ⇌ R3C− + B–H
where B stands for the base.
Carbanions have a concentration of electron density at the negatively charged carbon, which, in most cases, reacts efficiently with a variety of electrophiles of varying strengths, including carbonyl groups, imines/iminium salts, halogenating reagents (e.g., N-bromosuccinimide and diiodine), and proton donors. A carbanion is one of several reactive intermediates in organic chemistry. In organic synthesis, organolithium reagents and Grignard reagents are commonly treated and referred to as "carbanions." This is a convenient approximation, although these species are generally clusters or complexes containing highly polar, but still covalent, metal–carbon bonds (Mδ+–Cδ−) rather than true carbanions.
Geometry
Absent π delocalization, the negative charge of a carbanion is localized in an spx hybridized orbital on carbon as a lone pair. As a consequence, localized alkyl, alkenyl/aryl, and alkynyl carbanions assume trigonal pyramidal, bent, and linear geometries, respectively. By Bent's rule, placement of the carbanionic lone pair electrons in an orbital with significant s character is favorable, accounting for the pyramidalized and bent geometries of alkyl and alkenyl carbanions, respectively. Valence shell electron pair repulsion (VSEPR) theory makes similar predictions. This contrasts with carbocations, which have a preference for unoccupied nonbonding orbitals of pure atomic p character, leading to planar and linear geometries, respectively, for alkyl and alkenyl carbocations.
However, delocalized carbanions may deviate from these geometries. Instead of residing in a hybrid orbital, the carbanionic lone pair may instead occupy a p orbital (or an orbital of high p character). A p orbital has a more suitable shape and orientation to overlap with the neighboring π system, resulting in more effective charge delocalization. As a consequence, alkyl carbanions with neighboring conjugating groups (e.g., allylic anions, enolates, nitronates, etc.) are generally planar rather than pyramidalized. Likewise, delocalized alkenyl carbanions sometimes favor a linear instead of bent geometry. More often, a bent geometry is still preferred for substituted alkenyl anions, though the linear geometry is only slightly less stable, resulting in facile equilibration between the (E) and (Z) isomers of the (bent) anion through a linear transition state. For instance, calculations indicate that the parent vinyl anion (ethylenide) has a substantially higher inversion barrier than the allenyl anion (allenide), whose negative charge is stabilized by delocalization; the lower barrier reflects stabilization of the linear transition state by better π delocalization.
Trends and occurrence
Carbanions are typically nucleophilic and basic. The basicity and nucleophilicity of carbanions are determined by the substituents on carbon. These include
the inductive effect. Electronegative atoms adjacent to the charge will stabilize the charge;
the extent of conjugation of the anion. Resonance effects can stabilize the anion. This is especially true when the anion is stabilized as a result of aromaticity.
Geometry also affects the orbital hybridization of the charge-bearing carbanion. The greater the s-character of the charge-bearing atom, the more stable the anion.
Carbanions, especially ones derived from weak carbon acids that do not benefit sufficiently from the two stabilizing factors listed above, are generally oxygen- and water-sensitive to varying degrees. While some merely degrade and decompose over several weeks or months upon exposure to air, others may react vigorously and exothermically with air almost immediately to spontaneously ignite (pyrophoricity). Among commonly encountered carbanionic reagents in the laboratory, ionic salts of hydrogen cyanide (cyanides) are unusual in being indefinitely stable under dry air and hydrolyzing only very slowly in the presence of moisture.
Organometallic reagents like butyllithium (a hexameric cluster in hydrocarbon solvents) or methylmagnesium bromide (an ether complex) are often referred to as "carbanions," at least in a retrosynthetic sense. However, they are really clusters or complexes containing a polar covalent bond, though with electron density heavily polarized toward the carbon atom. The more electropositive the attached metal atom, the closer the behavior of the reagent is to that of a true carbanion.
In fact, true carbanions (i.e., species not attached to a stabilizing covalently bound metal) without electron-withdrawing and/or conjugating substituents are not available in the condensed phase, and these species must be studied in the gas phase. For some time, it was not known whether simple alkyl anions could exist as free species; many theoretical studies predicted that even the methanide anion should be an unbound species (i.e., that the electron affinity of the methyl radical was negative). Such a species would decompose immediately by spontaneous ejection of an electron and would therefore be too fleeting to observe directly by mass spectrometry. However, in 1978, the methanide anion was unambiguously synthesized by subjecting ketene to an electric discharge, and the electron affinity (EA) of the methyl radical was determined by photoelectron spectroscopy to be +1.8 kcal/mol, making methanide a bound species, but just barely so. The structure of the methanide anion was found to be pyramidal (C3v) with an H−C−H angle of 108° and an inversion barrier of 1.3 kcal/mol, while the neutral methyl radical was determined to be planar (D3h point group).
Simple primary, secondary and tertiary sp3 carbanions (e.g., ethanide, isopropanide, and t-butanide) were subsequently determined to be unbound species (the EAs of the corresponding radicals are −6, −7.4, and −3.6 kcal/mol, respectively), indicating that α substitution is destabilizing. However, relatively modest stabilizing effects can render them bound. For example, cyclopropyl and cubyl anions are bound due to increased s character of the lone pair orbital, while neopentyl and phenethyl anions are also bound, as a result of negative hyperconjugation of the lone pair with the β-substituent (nC → σ*C–C). The same holds true for anions with benzylic and allylic stabilization. Gas-phase carbanions that are sp2 and sp hybridized are much more strongly stabilized and are often prepared directly by gas-phase deprotonation.
In the condensed phase only carbanions that are sufficiently stabilized by delocalization have been isolated as truly ionic species. In 1984, Olmstead and Power presented the lithium crown ether salt of the triphenylmethanide carbanion from triphenylmethane, n-butyllithium and 12-crown-4 (which forms a stable complex with lithium cations) at low temperatures:
Adding n-butyllithium to triphenylmethane (pKa in DMSO = 30.6) in THF at low temperatures, followed by 12-crown-4, results in a red solution, and the salt complex of the triphenylmethanide anion with [Li(12-crown-4)]+ precipitates at −20 °C. The central C–C bond lengths are 145 pm, with the phenyl rings propellered at an average angle of 31.2°. This propeller shape is less pronounced with a tetramethylammonium counterion. A crystal structure for the analogous diphenylmethanide anion, prepared from diphenylmethane (pKa in DMSO = 32.3) and isolated as its [Li(12-crown-4)]+ salt, was also obtained. However, the attempted isolation of a complex of the benzyl anion from toluene (pKa in DMSO ≈ 43) was unsuccessful, due to rapid reaction of the formed anion with the THF solvent. The free benzyl anion has also been generated in the solution phase by pulse radiolysis of dibenzylmercury.
Earlier, in 1904 and 1917, Schlenk prepared two red-colored tetramethylammonium salts of carbanions by metathesis of the corresponding organosodium reagents with tetramethylammonium chloride. Since tetramethylammonium cations cannot form a chemical bond to the carbanionic center, these species are believed to contain free carbanions. While the structure of the former was verified by X-ray crystallography almost a century later, the instability of the latter has so far precluded structural verification. The reaction of the putative benzylide salt with water was reported to liberate toluene and tetramethylammonium hydroxide and provides indirect evidence for the claimed formulation.
One tool for the detection of carbanions in solution is proton NMR. A spectrum of cyclopentadiene in DMSO shows four vinylic protons at 6.5 ppm and two methylene bridge protons at 3 ppm, whereas the cyclopentadienyl anion has a single resonance at 5.50 ppm. NMR of other nuclei, such as lithium and carbon, has also provided structural and reactivity data for a variety of organolithium species.
Carbon acids
Any compound containing hydrogen can, in principle, undergo deprotonation to form its conjugate base. A compound is a carbon acid if deprotonation results in loss of a proton from a carbon atom. Compared to compounds typically considered to be acids (e.g., mineral acids like nitric acid, or carboxylic acids like acetic acid), carbon acids are typically many orders of magnitude weaker, although exceptions exist (see below). For example, benzene is not an acid in the classical Arrhenius sense, since its aqueous solutions are neutral. Nevertheless, it is a very weak Brønsted acid, with an estimated pKa of 49, which may undergo deprotonation in the presence of a superbase like the Lochmann–Schlosser base (n-butyllithium and potassium t-butoxide). Because a carbon acid and its carbanion form a conjugate acid–base pair, the factors that determine the relative stability of carbanions also determine the ordering of the pKa values of the corresponding carbon acids. Furthermore, pKa values allow the prediction of whether a proton transfer process will be thermodynamically favorable: for the deprotonation of an acidic species HA with a base B− to be thermodynamically favorable (K > 1), the relationship pKa(BH) > pKa(AH) must hold.
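As a worked illustration, using two entries from the table below and the standard relationship between an equilibrium constant and a pKa difference, deprotonating acetone (pKa 26.5 in DMSO) with the conjugate base of DMSO (pKa 35.1) is favorable by nearly nine orders of magnitude:

K = 10^{\,\mathrm{p}K_{\mathrm{a}}(\mathrm{BH}) - \mathrm{p}K_{\mathrm{a}}(\mathrm{AH})} = 10^{\,35.1 - 26.5} \approx 10^{8.6}

Conversely, a base whose conjugate acid has a lower pKa than the carbon acid gives K < 1, leaving the carbanion as only a minor equilibrium species.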
The values below are pKa values determined in dimethyl sulfoxide (DMSO), which has a broader useful range (~0 to ~35) than water (~0 to ~14) and better reflects the basicity of the carbanions in typical organic solvents. Values less than 0 or greater than 35 are indirectly estimated; hence, the numerical accuracy of these values is limited. Aqueous pKa values are also commonly encountered in the literature, particularly in the context of biochemistry and enzymology. Moreover, aqueous values are often given in introductory organic chemistry textbooks for pedagogical reasons, although the issue of solvent dependence is often glossed over. In general, pKa values in water and in organic solvent diverge significantly when the anion is capable of hydrogen bonding. For instance, in the case of water, the values differ dramatically: the pKa in water of water is 14.0, while the pKa in DMSO of water is 31.4, reflecting the differing ability of water and DMSO to stabilize the hydroxide anion. On the other hand, for cyclopentadiene, the numerical values are comparable: the pKa in water is 15, while the pKa in DMSO is 18.
{|align="center" class="wikitable collapsible" style="background: #ffffff; text-align: center;"
|+Carbon acid acidities by pKa in DMSO.These values may differ significantly from aqueous pKa values.
|-
!Name
!Formula
!Structural formula
!pKa in DMSO
|-
|Cyclohexane
|
|
|~60
|-
|Methane
|
|
|~56
|-
|Benzene
|
|
|~49
|-
|Propene
|
|
|~44
|-
|Toluene
|
|
|~43
|- style="background: lightgray;"
|Ammonia (N–H)
|
|
|~41
|-
|Dithiane
|
|
|~39
|-
|Dimethyl sulfoxide
|
|
|35.1
|-
|Diphenylmethane
|
|
|32.3
|-
|Acetonitrile
|
|
|31.3
|- style="background: lightgray;"
|Aniline (N–H)
|
|
|30.6
|-
|Triphenylmethane
|
|
|30.6
|-
|Fluoroform
|
|
|30.5
|-
|Xanthene
|
|
|30.0
|- style="background: lightgray;"
|Ethanol (O–H)
|
|
|29.8
|-
|Phenylacetylene
|
|
|28.8
|-
|Thioxanthene
|
|
|28.6
|-
|Acetone
|
|
|26.5
|-
|Chloroform
|
|
|24.4
|-
|Benzoxazole
|
|
|24.4
|-
|Fluorene
|
|
|22.6
|-
|Indene
|
|
|20.1
|-
|Cyclopentadiene
|
|
|18.0
|-
|Nitromethane
|
|
|17.2
|-
|Diethyl malonate
|
|
|16.4
|-
|Acetylacetone
|
|
|13.3
|-
|Hydrogen cyanide
|HCN
|
|12.9
|- style="background: lightgray;"
|Acetic acid (O–H)
|
|
|12.6
|-
|Malononitrile
|
|
|11.1
|-
|Dimedone
|
|
|10.3
|-
|Meldrum's acid
|
|
|7.3
|-
|Hexafluoroacetylacetone
|
|
|2.3
|- style="background: lightgray;"
|Hydrogen chloride (Cl–H)
|HCl
|HCl (g)
|−2.0
|-
|Triflidic acid
|
|
|~ −16
|-
|}
Note that acetic acid, ammonia, aniline, ethanol, and hydrogen chloride are not carbon acids, but are common acids shown for comparison.
As indicated by the examples above, acidity increases (pKa decreases) when the negative charge is delocalized. This effect occurs when the substituents on the carbanion are unsaturated and/or electronegative. Although carbon acids are generally thought of as acids that are much weaker than "classical" Brønsted acids like acetic acid or phenol, the cumulative (additive) effect of several electron-accepting substituents can lead to acids that are as strong as or stronger than the inorganic mineral acids. For example, trinitromethane, tricyanomethane, pentacyanocyclopentadiene, and fulminic acid (HCNO) are all strong acids with aqueous pKa values that indicate complete or nearly complete proton transfer to water. Triflidic acid, with three strongly electron-withdrawing triflyl groups, has an estimated pKa well below −10. On the other end of the scale, hydrocarbons bearing only alkyl groups are thought to have pKa values in the range of 55 to 65. The range of acid dissociation constants for carbon acids thus spans over 70 orders of magnitude.
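That span can be checked directly from the estimated endpoints quoted above. Taking pKa ≈ −16 for triflidic acid and pKa ≈ 60 for a simple alkane (both values are rough estimates, as noted), the ratio of acid dissociation constants is

\frac{K_{\mathrm{a}}(\text{triflidic acid})}{K_{\mathrm{a}}(\text{alkane})} \approx \frac{10^{16}}{10^{-60}} = 10^{76},

that is, more than 70 orders of magnitude.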
The acidity of the α-hydrogen in carbonyl compounds enables these compounds to participate in synthetically important C–C bond-forming reactions including the aldol reaction and Michael addition.
Chiral carbanions
Since the molecular geometry of a carbanion is described as a trigonal pyramid, the question arises whether carbanions can display chirality: if the activation barrier for inversion of this geometry is too low, any attempt at introducing chirality will end in racemization, similar to nitrogen inversion. However, solid evidence exists that carbanions can indeed be chiral, for example in research carried out with certain organolithium compounds.
The first evidence for the existence of chiral organolithium compounds was obtained in 1950. Reaction of chiral 2-iodooctane with s-butyllithium in petroleum ether at −70 °C, followed by reaction with dry ice, yielded mostly racemic 2-methylbutyric acid but also an amount of optically active 2-methyloctanoic acid, which could only have formed from likewise optically active 2-octyllithium, with the carbon atom linked to lithium being the carbanionic center:
On heating the reaction to 0 °C the optical activity is lost. More evidence followed in the 1960s. A reaction of the cis isomer of 2-methylcyclopropyl bromide with s-butyllithium, again followed by carboxylation with dry ice, yielded cis-2-methylcyclopropylcarboxylic acid. Formation of the trans isomer would have indicated that the intermediate carbanion was configurationally unstable.
In the same manner, the reaction of (+)-(S)-1-bromo-1-methyl-2,2-diphenylcyclopropane with n-butyllithium, followed by quenching with methanol, resulted in a product with retention of configuration:
More recently, chiral methyllithium compounds have been prepared:
The phosphate 1 contains a chiral group with a hydrogen and a deuterium substituent. The stannyl group is replaced by lithium to give intermediate 2, which undergoes a phosphate–phosphorane rearrangement to phosphorane 3, which on reaction with acetic acid gives alcohol 4. Once again, the chirality is preserved in this reaction sequence in the range of −78 °C to 0 °C. (Enantioselectivity was determined by NMR spectroscopy after derivatization with Mosher's acid.)
History
A carbanionic structure first made an appearance in the reaction mechanism for the benzoin condensation, as correctly proposed by Clarke and Arthur Lapworth in 1907. In 1904, Wilhelm Schlenk prepared a tetramethylammonium salt of a carbanion (from tetramethylammonium chloride and an organosodium reagent), and in 1914 he demonstrated how triarylmethyl radicals could be reduced to carbanions by alkali metals. The phrase carbanion was introduced by Wallis and Adams in 1933 as the negatively charged counterpart of the carbonium ion.
See also
Carbocation
Enolates
Nitrile anion
References
External links
Large database of Bordwell pKa values at www.chem.wisc.edu
Large database of Bordwell pKa values at daecr1.harvard.edu
Anions
Reactive intermediates | Carbanion | Physics,Chemistry | 4,298 |
41,036,105 | https://en.wikipedia.org/wiki/Great%20Annihilator | 1E1740.7-2942, or the Great Annihilator, is a Milky Way microquasar, located near the Galactic Center on the sky. It likely consists of a black hole and a companion star. It is one of the brightest X-ray sources in the region around the Galactic Center.
The object was first detected in soft X-rays by the Einstein Observatory, and later detected in hard X-rays by the Soviet Granat space observatory. Follow-up observations by the SIGMA detector on board Granat showed that the object was a variable emitter of massive amounts of photon pairs at 511 keV, which usually indicates the annihilation of electron-positron pairs. This led to the nickname "Great Annihilator." Early observations also showed a spectrum similar to that of Cygnus X-1, a black hole with a stellar companion, which suggested that the Great Annihilator was also a stellar-mass black hole.
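The significance of the 511 keV line follows from the electron rest energy: when an electron and a positron annihilate at rest, each of the two resulting photons carries one electron rest mass worth of energy (the constants below are standard textbook values, not quantities derived from the Granat data):

E_{\gamma} = m_{\mathrm{e}}c^{2} = (9.11\times10^{-31}\ \mathrm{kg})\,(3.00\times10^{8}\ \mathrm{m/s})^{2} \approx 8.2\times10^{-14}\ \mathrm{J} \approx 511\ \mathrm{keV}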
The object also has a radio source counterpart that emits jets approximately 1.5 pc (5 ly) long. These jets are probably synchrotron emission from positron-electron pairs streaming out at high velocities from the source of antimatter. Modeling of the observed precession of these jets gives an object distance of approximately 5 kpc (or 16,000 ly). This means that while the object is likely located along our line of sight towards the center of the Milky Way, it may be closer to us than Sagittarius A*, the black hole at the center of our galaxy.
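For reference, both of those length figures follow from the standard conversion 1 pc ≈ 3.26 ly:

1.5\ \mathrm{pc} \times 3.26\ \mathrm{ly/pc} \approx 4.9\ \mathrm{ly}, \qquad 5\ \mathrm{kpc} \times 3.26\ \mathrm{ly/pc} \approx 1.6\times10^{4}\ \mathrm{ly}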
References
Ophiuchus
Stellar black holes
Microquasars | Great Annihilator | Physics,Astronomy | 344 |
21,151,506 | https://en.wikipedia.org/wiki/Membrane%20bound%20polyribosome | In cell biology, membrane bound polyribosomes are attached to a cell's endoplasmic reticulum. When certain proteins are synthesized by a ribosome they can become "membrane-bound". The newly produced polypeptide chains are inserted directly into the endoplasmic reticulum by the ribosome and are then transported to their destinations. Bound ribosomes usually produce proteins that are used within the cell membrane or are expelled from the cell via exocytosis.
Background
A membrane-bound polyribosome, as the name suggests, is composed of multiple ribosomes associated with a membrane. Proteins are synthesized from messenger ribonucleic acid (mRNA) that leaves the nucleus and is translated either in the cytoplasm or at the rough endoplasmic reticulum. The rough endoplasmic reticulum branches off the nuclear envelope and has multiple cisternae, or layered folds, whose internal space receives extruded proteins. Ribosomes are located either in the cytosol (the cellular fluid) or on the rough endoplasmic reticulum, and they assemble on the mRNA by separation and re-association of their subunits around the message. In eukaryotic cells, the small subunit (40S) binds the messenger ribonucleic acid and reads it codon by codon, while the large subunit (60S) joins the amino acids coded for into a growing polypeptide. A polysome forms when multiple ribosomes attach to the same strand of messenger ribonucleic acid. The polypeptides that ribosomes produce go on to become structural proteins, enzymes, and many other products. Ribosomes can also sometimes be associated with chloroplasts and mitochondria, but these are not membrane bound.
Origin
Free-floating ribosomes can become membrane bound through a process called translocation. Through translocation, ribosomes that are producing proteins in the cytosol are moved to and attached to the membrane. This process is responsible for development of the rough endoplasmic reticulum. First, ribosomes begin protein synthesis at the N-terminus. The first part of the polypeptide may be a signal sequence that tells the ribosome that the protein must be extruded into the rough endoplasmic reticulum. The signal sequence triggers translocation by binding a signal recognition particle (SRP), also located in the cytosol. The signal recognition particle allows recognition and binding via a signal recognition particle receptor on the target membrane's surface. The signal recognition particle receptor and signal recognition particle both associate with a translocon and bind guanosine triphosphate (GTP). The guanosine triphosphate is hydrolyzed to provide energy, the translocon opens, the ribosome attaches via its 60S subunit, and the signal sequence enters the lumen, or internal space, of the rough endoplasmic reticulum. The signal recognition particle and signal recognition particle receptor detach and can be recycled. The signal sequence is cleaved inside the lumen of the rough endoplasmic reticulum, and the ribosome continues to extrude the growing protein into the endoplasmic reticulum, where it is folded. Upon completion of protein synthesis, the translocon closes and the ribosome detaches. During translocation, translation briefly stops until binding with the membrane is finished. Ribosomes can associate with and dissociate from the endoplasmic reticulum as needed for protein synthesis. After synthesis into the rough endoplasmic reticulum, proteins may travel to the end of the rough endoplasmic reticulum, where they are packaged into small vesicles formed by cleavage of the rough endoplasmic reticulum membrane. These vesicles are sent to the Golgi apparatus for sorting and release as needed by the cell. Some proteins are released immediately because the cell is in constant need of them, while others are stored for release upon a signal.
The idea that translation and translocation occur simultaneously (except in some yeasts) was confirmed using microsomes. Microsomes are small vesicles of rough endoplasmic reticulum membrane formed after disruption of the organelle by homogenization, the physical disruption of cells. Microsomes form after homogenization because of the membrane nature of the endoplasmic reticulum: in a lipid bilayer, the hydrophobic tails must come together and the hydrophilic heads must face the external aqueous environment. In one experiment, proteins were synthesized by ribosomes either with microsomes added simultaneously or with microsomes added after synthesis. In the group where microsomes were added simultaneously, the proteins were synthesized into the microsome with the signal sequence cleaved. In the group where microsomes were added after protein synthesis, the proteins were located outside the microsome and retained their signal sequence. Therefore, it is possible to tell whether a protein has been extruded into a microsome by its length (lack of an N-terminal signal sequence if extruded), resistance to proteases, loss of that resistance in the presence of detergents, and glycosylation. SDS-PAGE of proteins synthesized with and without microsomes confirmed that non-extruded proteins are longer. Protease resistance is due to the protection offered by the surrounding endoplasmic reticulum membrane, and glycosylation occurs via glycosyltransferases that help with the folding and stabilization of proteins.
Significance
The cleavage of the signal sequence, the resistance to proteases, and the glycosylation that the endoplasmic reticulum provides to membrane-bound polyribosomes allow for more effective protein production. Presence of the signal sequence makes a protein bulkier, differently shaped, and harder to store until the unneeded signal sequence can be cleaved. The protection offered by the endoplasmic reticulum prevents the protein from being degraded by proteases as it is formed. Extrusion into the endoplasmic reticulum also ensures that the protein folds correctly. Resident endoplasmic reticulum proteins such as binding protein (BiP), protein disulfide isomerase (PDI), and glycosyltransferases (GTs) are all responsible for ensuring correct protein folding and stabilization as the protein is assembled. Binding protein can actively help fold proteins or prevent their premature folding, while protein disulfide isomerase promotes the formation of disulfide bridges. Glycosyltransferases promote glycosylation, the incorporation of a carbohydrate, to improve the rigidity or structure of a protein. Failure of proteins to fold correctly may trigger the unfolded protein response. Unfolded proteins cause swelling of the endoplasmic reticulum as more unfolded proteins continue to be produced. The unfolded protein response can result in endoplasmic reticulum stress, phosphorylation of PERK, phosphorylation of eIF2α, down-regulation of protein production, and possibly apoptosis. Apoptosis of affected cells may result in a disease like amyotrophic lateral sclerosis (ALS). In amyotrophic lateral sclerosis, motor neurons undergo endoplasmic reticulum stress because of misfolded SOD1 proteins and apoptose, resulting in a lack of nerve transmission and a loss of muscle control. Eventually, those with amyotrophic lateral sclerosis die because nerve impulses no longer signal breathing or heart ventricle contraction.
References
Cell biology | Membrane bound polyribosome | Biology | 1,656 |
43,423,462 | https://en.wikipedia.org/wiki/Dark%20silicon | In the electronics industry, dark silicon is the amount of circuitry of an integrated circuit that cannot be powered-on at the nominal operating voltage for a given thermal design power (TDP) constraint.
Dennard scaling posited that as transistors got smaller their power density stayed constant, so that shrinking transistors became more power-efficient in proportion to the increase in their number for a given area. This scaling has broken down in recent years: the efficiency gains of smaller transistors are no longer proportionate to the increase in their number. The breakdown has led to sharp increases in power density that hamper powering on all transistors simultaneously while keeping temperatures in a safe operating range.
As of 2011, researchers from different groups have projected that, at 8 nm technology nodes, the amount of dark silicon may reach up to 50–80% depending upon the processor architecture, cooling technology, and application workloads. Dark silicon may be unavoidable even in server workloads with an abundance of inherent client request-level parallelism.
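The arithmetic behind such projections can be illustrated with a simple power-budget estimate. The Python sketch below is a back-of-the-envelope calculation; the core count, per-core power, and TDP figures are invented placeholders, not data from the cited studies.

def dark_fraction(tdp_watts, num_blocks, watts_per_block):
    """Fraction of identical blocks that cannot be powered at the same time
    without exceeding the thermal design power."""
    powered = min(num_blocks, tdp_watts / watts_per_block)
    return 1.0 - powered / num_blocks

# e.g. 128 identical cores at 1.5 W each under a 100 W budget
print(f"{dark_fraction(100, 128, 1.5):.0%} of the cores must stay dark")   # ~48%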
Challenges and opportunities
The emergence of dark silicon introduces several challenges for the architecture, electronic design automation (EDA), and hardware-software co-design communities. These include the question of how best to utilize the plethora of transistors (with potentially many dark ones) when designing and managing energy-efficient on-chip many-core processors under peak power and thermal constraints. Architects have initiated several efforts to leverage dark silicon in designing application-specific and accelerator-rich architectures.
Recently, researchers have explored how dark silicon exposes new challenges and opportunities for the EDA community. In particular, they have demonstrated thermal, reliability (soft error and aging), and process variation concerns for dark silicon many-core processors.
References
Electronic design | Dark silicon | Engineering | 360 |
52,252,137 | https://en.wikipedia.org/wiki/Ectopic%20decidua | Ectopic decidua are decidual cells found outside the inner lining of the uterus. This condition was first described in 1971 by Walker, and the name 'ectopic decidua' was coined by Tausig. While ectopic decidua is most commonly seen during pregnancy, it rarely occurs in non-pregnant people, where it is accompanied by bleeding and pain.
Generally, ectopic decidua has no clinical symptoms, but it sometimes manifests as abdominal pain in pregnancy. Ectopic decidua most commonly occurs in the ovary, cervix, and serosal lining of the uterus. It also rarely occurs in the peritoneum. In the peritoneum, ectopic decidua is formed by metaplasia of subserosal stromal cells under the influence of progesterone. It regresses within 4–6 weeks after childbirth. Therefore, no treatment is needed for this condition. However, it is necessary to differentiate deciduosis from metastatic cancers and mesothelioma.
References
Histopathology
Human pregnancy | Ectopic decidua | Chemistry | 227 |
3,988,213 | https://en.wikipedia.org/wiki/SecurityFocus | SecurityFocus was an online computer security news portal and purveyor of information security services. Home to the well-known Bugtraq mailing list, SecurityFocus columnists and writers included former Department of Justice cybercrime prosecutor Mark Rasch, and hacker-turned-journalist Kevin Poulsen.
References
External links
(no longer active)
Internet properties disestablished in 2002
Computer security organizations
Gen Digital acquisitions | SecurityFocus | Technology | 87 |
22,607,189 | https://en.wikipedia.org/wiki/Hacker%20News | Hacker News (HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as "anything that gratifies one's intellectual curiosity."
The word hacker in "Hacker News" is used in its original meaning and refers to the hacker culture which consists of people who enjoy tinkering with technology.
History
The site was created by Paul Graham in February 2007. Initially called Startup News or occasionally News.YC, it became known by its current name on August 14, 2007. It developed as a project of Graham's company Y Combinator, functioning as a real-world application of the Arc programming language which Graham co-developed.
At the end of March 2014, Graham stepped away from his leadership role at Y Combinator, leaving Hacker News administration in the hands of other staff members. The site is currently moderated by Daniel Gackle who posts under the username dang.
Gackle co-moderated Hacker News with Scott Bell (username sctb) until 2019 when Bell stopped working on the site.
Vision and practices
The intention was to recreate a community similar to the early days of Reddit. However, unlike Reddit where new users can immediately both upvote and downvote content, Hacker News does not allow users to downvote content until they have accumulated 501 "karma" points. Karma points are calculated as the number of upvotes a given user's content has received minus the number of downvotes. "Flagging" comments, likewise, is not permitted until a user has 30 karma points.
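As a simple illustration, the privilege rules described above can be expressed in a few lines of Python. The threshold values (501 and 30) come from the paragraph above, while the function names and the example numbers are made up for this sketch.

def karma(upvotes_received, downvotes_received):
    return upvotes_received - downvotes_received

def can_downvote(user_karma):
    return user_karma >= 501

def can_flag(user_karma):
    return user_karma >= 30

k = karma(640, 110)                    # 530 karma
print(can_downvote(k), can_flag(k))    # True True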
Graham stated he hopes to avoid the Eternal September that results in the general decline of intelligent discourse within a community. The site has a proactive attitude in moderating content, including automated flame and spam detectors and active human moderation. It also practices stealth banning in which user posts stop appearing for others to see, unbeknownst to the user. Additional software is used to detect "voting rings to purposefully vote up stories".
Criticism
According to a 2013 TechCrunch article: "Graham says that Hacker News gets a lot of complaints that it has a bias toward featuring stories about Y Combinator startups, but he says there is no such bias. [...] Graham adds that he gets a lot of vitriol from users personally with accusations of bias or censoring."
See also
Slashdot
Reddit
References
External links
American news websites
Social bookmarking
News aggregators
Computing websites
Internet properties established in 2007
Y Combinator companies | Hacker News | Technology | 539 |
1,164,549 | https://en.wikipedia.org/wiki/Proton%20therapy | In medicine, proton therapy, or proton radiotherapy, is a type of particle therapy that uses a beam of protons to irradiate diseased tissue, most often to treat cancer. The chief advantage of proton therapy over other types of external beam radiotherapy is that the dose of protons is deposited over a narrow range of depth, resulting in minimal entry, exit, or scattered radiation dose to healthy nearby tissues.
When evaluating whether to treat a tumor with photon or proton therapy, physicians may choose proton therapy if it is important to deliver a higher radiation dose to targeted tissues while significantly decreasing radiation to nearby organs at risk. The American Society for Radiation Oncology Model Policy for Proton Beam therapy says proton therapy is considered reasonable if sparing the surrounding normal tissue "cannot be adequately achieved with photon-based radiotherapy" and can benefit the patient. Like photon radiation therapy, proton therapy is often used in conjunction with surgery and/or chemotherapy to most effectively treat cancer.
Description
Proton therapy is a type of external beam radiotherapy that uses ionizing radiation. In proton therapy, medical personnel use a particle accelerator to target a tumor with a beam of protons. These charged particles damage the DNA of cells, ultimately killing them by stopping their reproduction and thus eliminating the tumor. Cancerous cells are particularly vulnerable to attacks on DNA because of their high rate of division and their limited ability to repair DNA damage. Some cancers with specific defects in DNA repair may be more sensitive to proton radiation.
Proton therapy lets physicians deliver a highly conformal beam, i.e. delivering radiation that conforms to the shape and depth of the tumor and sparing much of the surrounding, normal tissue. For example, when comparing proton therapy to the most advanced types of photon therapy—intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)—proton therapy can give similar or higher radiation doses to the tumor with a 50%-60% lower total body radiation dose.
Protons can focus energy delivery to fit the tumor shape, delivering only low-dose radiation to surrounding tissue. As a result, the patient has fewer side effects. All protons of a given energy have a certain penetration range; very few protons penetrate beyond that distance. Also, the dose delivered to tissue is maximized only over the last few millimeters of the particle's range; this maximum is called the Bragg peak. Several Bragg peaks at staggered depths are combined to cover a thicker tumor, producing the spread out Bragg peak, often called the SOBP (see visual).
To treat tumors at greater depth, one needs a beam with higher energy, typically given in MeV (mega electron volts). Accelerators used for proton therapy typically produce protons with energies of 70 to 250 MeV. Adjusting proton energy during the treatment maximizes the cell damage within the tumor. Tissue closer to the surface of the body than the tumor gets less radiation, and thus less damage. Tissues deeper in the body get very few protons, so the dose becomes immeasurably small.
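The relationship between beam energy and treatment depth can be illustrated with the empirical Bragg–Kleeman rule for protons in water, R ≈ αE^p. The rule and the coefficient values used below (α ≈ 0.0022 cm, p ≈ 1.77) are commonly quoted approximations and are an assumption of this sketch, not figures taken from the article.

def proton_range_cm(energy_mev, alpha=0.0022, p=1.77):
    """Approximate depth in water (cm) at which the Bragg peak occurs."""
    return alpha * energy_mev ** p

for e in (70, 150, 250):
    print(f"{e:>3} MeV -> about {proton_range_cm(e):.0f} cm in water")
# roughly 4 cm, 16 cm and 39 cm: deeper tumors require higher beam energies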
In most treatments, protons of different energies with Bragg peaks at different depths are applied to treat the entire tumor. These Bragg peaks are shown as thin blue lines in the figure in this section. While tissues behind (or deeper than) the tumor get almost no radiation, the tissues in front of (shallower than) the tumor get radiation dosage based on the SOBP.
Equipment
Most installed proton therapy systems use isochronous cyclotrons. Cyclotrons are considered simple to operate, reliable and can be made compact, especially with use of superconducting magnets. Synchrotrons can also be used, with the advantage of easier production at varying energies. Linear accelerators, as used for photon radiation therapy, are becoming commercially available as limitations of size and cost are resolved. Modern proton systems incorporate high-quality imaging for daily assessment of tumor contours, treatment planning software illustrating 3D dose distributions, and various system configurations, e.g. multiple treatment rooms connected to one accelerator. Partly because of these advances in technology, and partly because of the continually increasing amount of proton clinical data, the number of hospitals offering proton therapy continues to grow.
FLASH therapy
FLASH radiotherapy is a technique under development for photon and proton treatments, using very high dose rates (necessitating large beam currents). If applied clinically, it could shorten treatment time to just one to three 1-second sessions and further reduce side effects.
History
The first suggestion that energetic protons could be an effective treatment was made by Robert R. Wilson in a paper published in 1946 while he was involved in the design of the Harvard Cyclotron Laboratory (HCL). The first treatments were performed with particle accelerators built for physics research, notably at the Berkeley Radiation Laboratory in 1954 and at Uppsala in Sweden in 1957. In 1961, a collaboration began between HCL and Massachusetts General Hospital (MGH) to pursue proton therapy. Over the next 41 years, this program refined and expanded these techniques while treating 9,116 patients before the cyclotron was shut down in 2002. In the USSR a therapeutic proton beam with energies up to 200 MeV was obtained at the synchrocyclotron of JINR in Dubna in 1967. The ITEP center in Moscow, Russia, which began treating patients in 1969, is the oldest proton center still in operation. The Paul Scherrer Institute in Switzerland was the world's first proton center to treat eye tumors beginning in 1984. In addition, they invented pencil beam scanning in 1996, which became the state-of-the-art form of proton therapy.
The world's first hospital-based proton therapy center was a low energy cyclotron centre for eye tumors at Clatterbridge Centre for Oncology in the UK, opened in 1989, followed in 1990 at the Loma Linda University Medical Center (LLUMC) in Loma Linda, California. Later, the Northeast Proton Therapy Center at Massachusetts General Hospital was brought online, and the HCL treatment program was transferred to it in 2001 and 2002. At the beginning of 2023, there were 41 proton therapy centers in the United States, and a total of 89 worldwide. As of 2020, six manufacturers make proton therapy systems: Hitachi, Ion Beam Applications, Mevion Medical Systems, ProNova Solutions, ProTom International and Varian Medical Systems.
Types
The newest form of proton therapy, pencil beam scanning, gives therapy by sweeping a proton beam laterally over the target so that it gives the required dose while closely conforming to shape of the targeted tumor. Before the use of pencil beam scanning, oncologists used a scattering method to direct a wide beam toward the tumor.
Passive scattering beam delivery
The first commercially available proton delivery systems used a scattering process, or passive scattering, to deliver the therapy. With scattering proton therapy the proton beam is spread out by scattering devices, and the beam is then shaped by putting items such as collimators and compensators in the path of the protons. The collimators were custom made for the patient with milling machines. Passive scattering delivers a homogeneous dose along the target volume but gives more limited control over dose distributions proximal to the target. Over time many scattering therapy systems have been upgraded to deliver pencil beam scanning. Because scattering therapy was the first type of proton therapy available, most clinical data available on proton therapy—especially long-term data as of 2020—were acquired via scattering technology.
Pencil beam scanning beam delivery
A newer and more flexible delivery method is pencil beam scanning, using a beam that sweeps laterally over the target so that it delivers the needed dose while closely conforming to the tumor's shape. This conformal delivery is achieved by shaping the dose through magnetic scanning of thin beamlets of protons without needing apertures and compensators. Multiple beams are delivered from different directions, and magnets in the treatment nozzle steer the proton beam to conform to the target volume layer as the dose is painted layer by layer. This type of scanning delivery provides greater flexibility and control, letting the proton dose conform more precisely to the shape of the tumor.
Delivery of protons via pencil beam scanning, in use since 1996 at the Paul Scherrer Institute, allows for the most precise type of proton delivery: intensity-modulated proton therapy (IMPT). IMPT is to proton therapy what IMRT is to conventional photon therapy—treatment that more closely conforms to the tumor while avoiding surrounding structures. Virtually all new proton systems provide pencil beam scanning exclusively. A study led by Memorial Sloan Kettering Cancer Center suggests that IMPT can improve local control when compared to passive scattering for patients with nasal cavity and paranasal sinus malignancies.
Application
It was estimated that by the end of 2019, a total of ≈200,000 patients had been treated with proton therapy. Physicians use protons to treat conditions in two broad categories:
Disease sites that respond well to higher doses of radiation, i.e., dose escalation. Dose escalation has sometimes shown a higher probability of "cure" (i.e. local control) than conventional radiotherapy. These include, among others, uveal melanoma (ocular tumor), skull base and paraspinal tumor (chondrosarcoma and chordoma), and unresectable sarcoma. In all these cases proton therapy gives significant improvement in the probability of local control, over conventional radiotherapy. For eye tumors, proton therapy also has high rates of maintaining the natural eye.
Treatment where proton therapy's increased precision reduces unwanted side effects by lessening the dose to normal tissue. In these cases, the tumor dose is the same as in conventional therapy, so there is no expectation of increased probability of curing the disease. Instead, emphasis is on reducing the dose to normal tissue, thus reducing unwanted effects.
Two prominent examples are pediatric neoplasms (such as medulloblastoma) and prostate cancer.
Pediatric
Irreversible long-term side effects of conventional radiation therapy for pediatric cancers are well documented and include growth disorders, neurocognitive toxicity, ototoxicity with subsequent effects on learning and language development, and renal, endocrine and gonadal dysfunctions. Radiation-induced secondary malignancy is another very serious adverse effect that has been reported. As there is minimal exit dose when using proton radiation therapy, dose to surrounding normal tissues can be significantly limited, reducing the acute toxicity which positively impacts the risk for these long-term side effects. Cancers requiring craniospinal irradiation, for example, benefit from the absence of exit dose with proton therapy: dose to the heart, mediastinum, bowel, bladder and other tissues anterior to the vertebrae is eliminated, hence a reduction of acute thoracic, gastrointestinal and bladder side effects.
Eye tumor
Proton therapy for eye tumors is a special case since this treatment requires only relatively low energy protons (≈70 MeV). Owing to this low energy, some particle therapy centers only treat eye tumors. Proton, or more generally, hadron therapy of tissue close to the eye affords sophisticated methods to assess the alignment of the eye that can vary significantly from other patient position verification approaches in image guided particle therapy. Position verification and correction must ensure that the radiation spares sensitive tissue like the optic nerve to preserve the patient's vision.
For ocular tumors, selecting the type of radiotherapy depends on tumor location and extent, tumor radioresistance (calculating the dose needed to eliminate the tumor), and the therapy's potential toxic side effects on nearby critical structures. For example, proton therapy is an option for retinoblastoma and intraocular melanoma. The advantage of a proton beam is that it has the potential to effectively treat the tumor while sparing sensitive structures of the eye. Given its effectiveness, proton therapy has been described as the "gold standard" treatment for ocular melanoma. The implementation of momentum cooling technique in proton therapy for eye treatment can significantly enhance its effectiveness. This technique aids in reducing the radiation dose administered to healthy organs while ensuring that the treatment is completed within a few seconds. Consequently, patients experience improved comfort during the procedure.
Base of skull cancer
When receiving radiation for skull base tumors, side effects of the radiation can include pituitary hormone dysfunction and visual field deficit—after radiation for pituitary tumors—as well as cranial neuropathy (nerve damage), radiation-induced osteosarcoma (bone cancer), and osteoradionecrosis, which occurs when radiation causes part of the bone in the jaw or skull base to die. Proton therapy has been very effective for people with base of skull tumors. Unlike conventional photon radiation, protons do not penetrate beyond the tumor. Proton therapy lowers the risk of treatment-related side effects that arise when healthy tissue receives radiation. Clinical studies have found proton therapy to be effective for skull base tumors.
Head and neck tumor
Proton particles do not deposit exit dose, so proton therapy can spare normal tissues far from the tumor. This is particularly useful for head and neck tumors because of the anatomic constraints found in nearly all cancers in this region. The dosimetric advantage unique to proton therapy translates into toxicity reduction. For recurrent head and neck cancer requiring reirradiation, proton therapy is able to maximize a focused dose of radiation to the tumor while minimizing dose to surrounding tissues, hence a minimal acute toxicity profile, even in patients who got multiple prior courses of radiotherapy.
Left-side breast cancer
When breast cancer — especially in the left breast — is treated with conventional radiation, the lung and heart, which are near the left breast, are particularly susceptible to photon radiation damage. Such damage can eventually cause lung problems (e.g. lung cancer) or various heart problems. Depending on location of the tumor, damage can also occur to the esophagus, or to the chest wall (which can potentially lead to leukemia). One recent study showed that proton therapy has low toxicity to nearby healthy tissues and similar rates of disease control compared with conventional radiation. Other researchers found that proton pencil beam scanning techniques can reduce both the mean heart dose and the internal mammary node dose to essentially zero.
Small studies have found that, compared to conventional photon radiation, proton therapy delivers minimal toxic dose to healthy tissues and specifically decreased dose to the heart and lung. Large-scale trials are underway to examine other potential benefits of proton therapy to treat breast cancer.
Lymphoma
Though chemotherapy is the main treatment for lymphoma, consolidative radiation is often used in Hodgkin lymphoma and aggressive non-Hodgkin lymphoma, while definitive treatment with radiation alone is used in a small fraction of lymphoma patients. Unfortunately, treatment-related toxicities caused by chemotherapy agents and radiation exposure to healthy tissues are major concerns for lymphoma survivors. Advanced radiation therapy technologies such as proton therapy may offer significant and clinically relevant advantages such as sparing important organs at risk and decreasing the risk for late normal tissue damage while still achieving the primary goal of disease control. This is especially important for lymphoma patients who are being treated with curative intent and have long life expectancy following therapy.
Prostate cancer
In prostate cancer cases, the issue is less clear. Some published studies found a reduction in long term rectal and genito-urinary damage when treating with protons rather than photons (meaning X-ray or gamma ray therapy). Others showed a small difference, limited to cases where the prostate is particularly close to certain anatomical structures. The relatively small improvement found may be the result of inconsistent patient set-up and internal organ movement during treatment, which offsets most of the advantage of increased precision. One source suggests that dose errors around 20% can result from motion errors of just . and another that prostate motion is between .
The number of cases of prostate cancer diagnosed each year far exceeds those of the other diseases referred to above, and this has led some, but not all, facilities to devote most of their treatment slots to prostate treatments. For example, two hospital facilities devote ≈65% and 50% of their proton treatment capacity to prostate cancer, while a third devotes only 7.1%.
Worldwide numbers are hard to compile, but one example says that in 2003 ≈26% of proton therapy treatments worldwide were for prostate cancer.
Gastrointestinal malignancy
A growing amount of data shows that proton therapy has great potential to increase therapeutic tolerance for patients with GI malignancy. The possibility of decreasing radiation dose to organs at risk may also help facilitate chemotherapy dose escalation or allow new chemotherapy combinations. Proton therapy will play a decisive role for ongoing intensified combined modality treatments for GI cancers. The following review presents the benefits of proton therapy in treating hepatocellular carcinoma, pancreatic cancer and esophageal cancer.
Hepatocellular carcinoma
Post-treatment liver decompensation, and subsequent liver failure, is a risk with radiotherapy for hepatocellular carcinoma, the most common type of primary liver cancer. Research shows that proton therapy gives favorable results related to local tumor control, progression-free survival, and overall survival. Other studies, which examine proton therapy compared with conventional photon therapy, show that proton therapy gives improved survival and/or fewer side effects; hence proton therapy could significantly improve clinical outcomes for some patients with liver cancer.
Reirradiation for recurrent cancer
For patients who get local or regional recurrences after their initial radiation therapy, physicians are limited in their treatment options due to their reluctance to give additional photon radiation therapy to tissues that have already been irradiated. Re-irradiation is a potentially curative treatment option for patients with locally recurrent head and neck cancer. In particular, pencil beam scanning may be ideally suited for reirradiation. Research shows the feasibility of using proton therapy with acceptable side effects, even in patients who have had multiple prior courses of photon radiation.
Comparison with other treatments
A large study on comparative effectiveness of proton therapy was published by teams of the University of Pennsylvania and Washington University in St. Louis in JAMA Oncology, assessing if proton therapy in the setting of concurrent chemoradiotherapy is associated with fewer 90-day unplanned hospitalizations and overall survival compared with concurrent photon therapy and chemoradiotherapy. The study included 1483 adult patients with nonmetastatic, locally advanced cancer treated with concurrent chemoradiotherapy with curative intent and concluded, "proton chemoradiotherapy was associated with significantly reduced acute adverse events that caused unplanned hospitalizations, with similar disease-free and overall survival". A significant number of randomized controlled trials is recruiting, but only a limited number have been completed as of August 2020. A phase III randomized controlled trial of proton beam therapy versus radiofrequency ablation (RFA) for recurrent hepatocellular carcinoma organized by the National Cancer Center in Korea showed better 2-year local progression-free survival for the proton arm and concluded that proton beam therapy (PBT) is "not inferior to RFA in terms of local progression-free survival and safety, denoting that either RFA or PBT can be applied to recurrent small HCC patients". A phase IIB randomized controlled trial of proton beam therapy versus IMRT for locally advanced esophageal cancer organized by University of Texas MD Anderson Cancer Center concluded that proton beam therapy reduced the risk and severity of adverse events compared with IMRT while maintaining similar progression free survival. Another Phase II Randomized Controlled Trial comparing photons versus protons for Glioblastoma concluded that patients at risk of severe lymphopenia could benefit from proton therapy. A team from Stanford University assessed the risk of secondary cancer after primary cancer treatment with external beam radiation using data from the National Cancer Database for 9 tumor types: head and neck, gastrointestinal, gynecologic, lymphoma, lung, prostate, breast, bone/soft tissue, and brain/central nervous system. The study included a total of 450,373 patients and concluded that proton therapy was associated with a lower risk of second cancer.
The issue of when, whether, and how best to apply this technology is still under discussion by physicians and researchers. One recently introduced method, 'model-based selection', uses comparative treatment plans for IMRT and IMPT in combination with normal tissue complication probability (NTCP) models to identify patients who may benefit most from proton therapy.
Clinical trials are underway to examine the comparative efficacy of proton therapy (vs photon radiation) for the following:
Pediatric cancers—by St. Jude Children's Research Hospital, Samsung Medical Center
Base of skull cancer—by Heidelberg University
Head and neck cancer—by MD Anderson, Memorial Sloan Kettering and other centers
Brain and spinal cord cancer—by Massachusetts General Hospital, Uppsala University and other centers, NRG Oncology
Hepatocellular carcinoma (liver)—by NRG Oncology, Chang Gung Memorial Hospital, Loma Linda University
Lung cancer—by Radiation Therapy Oncology Group (RTOG), Proton Collaborative Group (PCG), Mayo Clinic
Esophageal cancer—by NRG Oncology, Abramson Cancer Center, University of Pennsylvania
Breast cancer—by University of Pennsylvania, Proton Collaborative Group (PCG)
Pancreatic cancer—by University of Maryland, Proton Collaborative Group (PCG)
X-ray radiotherapy
The figure at the right of the page shows how beams of X-rays (IMRT; left frame) and beams of protons (right frame), of different energies, penetrate human tissue. A tumor with a sizable thickness is covered by the IMRT spread out Bragg peak (SOBP) shown as the red lined distribution in the figure. The SOBP is an overlap of several pristine Bragg peaks (blue lines) at staggered depths.
Megavoltage X-ray therapy has less "skin-scarring potential" than proton therapy: X-ray radiation dose at the skin, and at very small depths, is lower than for proton therapy. One study estimates that passively scattered proton fields have a slightly higher entrance dose at the skin (≈75%) compared to therapeutic megavoltage (MeV) photon beams (≈60%). X-ray radiation dose falls off gradually, needlessly harming tissue deeper in the body and damaging the skin and surface tissue opposite the beam entrance. The differences between the two methods depend on:
Width of the SOBP
Depth of the tumor
Number of beams that treat the tumor
The X-ray advantage of less harm to skin at the entrance is partially counteracted by harm to skin at the exit point.
Since X-ray treatments are usually done with multiple exposures from opposite sides, each section of skin is exposed to both entering and exiting X-rays. In proton therapy, skin exposure at the entrance point is higher, but tissues on the opposite side of the body to the tumor get no radiation. Thus, X-ray therapy causes slightly less damage to skin and surface tissues, and proton therapy causes less damage to deeper tissues in front of and beyond the target.
An important consideration in comparing these treatments is whether the equipment delivers protons via the scattering method (historically, the most common) or a spot scanning method. Spot scanning can adjust the width of the SOBP on a spot-by-spot basis, which reduces the volume of normal (healthy) tissue inside the high dose region. Also, spot scanning allows for intensity modulated proton therapy (IMPT), which determines individual spot intensities using an optimization algorithm that lets the user balance the competing goals of irradiating tumors while sparing normal tissue. Spot scanning availability depends on the machine and the institution. Spot scanning is more commonly known as pencil-beam scanning and is available on IBA, Hitachi, Mevion (known as HYPERSCAN which became US FDA approved in 2017) and Varian.
Surgery
Physicians base the decision to use surgery or proton therapy (or any radiation therapy) on tumor type, stage, and location. Sometimes surgery is superior (such as cutaneous melanoma), sometimes radiation is superior (such as skull base chondrosarcoma), and sometimes they are comparable (for example, prostate cancer). Sometimes, they are used together (e.g., rectal cancer or early stage breast cancer).
The benefit of external beam proton radiation is in the dosimetric difference from external beam X-ray radiation and brachytherapy in cases where use of radiation therapy is already indicated, rather than as a direct competition with surgery. In prostate cancer, the most common indication for proton beam therapy, no clinical study directly comparing proton therapy to surgery, brachytherapy, or other treatments has shown any clinical benefit for proton beam therapy. Indeed, the largest study to date showed that IMRT compared with proton therapy was associated with less gastrointestinal morbidity.
Side effects and risks
Proton therapy is a type of external beam radiotherapy, and shares risks and side effects of other forms of radiation therapy. The dose outside of the treatment region can be significantly less for deep-tissue tumors than X-ray therapy, because proton therapy takes full advantage of the Bragg peak. Proton therapy has been in use for over 40 years, and is a mature technology. As with all medical knowledge, understanding of the interaction of radiations with tumor and normal tissue is still imperfect.
Costs
Historically, proton therapy has been expensive. An analysis published in 2003 found that the cost of proton therapy is ≈2.4 times that of X-ray therapies. Newer, less expensive proton treatment centers, now numbering in the dozens, are driving costs down and offer more accurate three-dimensional targeting. Delivering a higher proton dose over fewer treatment sessions (a third fewer sessions or less) is also driving costs down. Thus the cost is expected to fall as better proton technology becomes more widely available. An analysis published in 2005 determined that the cost of proton therapy is not unrealistic and should not be the reason for denying patients access to the technology. In some clinical situations, proton beam therapy is clearly superior to the alternatives.
A study in 2007 expressed concerns about the effectiveness of proton therapy for prostate cancer, but with the advent of new developments in the technology, such as improved scanning techniques and more precise dose delivery ('pencil beam scanning'), this situation may change considerably. Amitabh Chandra, a health economist at Harvard University, said, "Proton-beam therapy is like the Death Star of American medical technology... It's a metaphor for all the problems we have in American medicine." Proton therapy is cost-effective for some types of cancer, but not all. In particular, some other treatments offer better overall value for treatment of prostate cancer.
As of 2018, the cost of a single-room particle therapy system is US$40 million, with multi-room systems costing up to US$200 million.
Treatment centers
As of August 2020, there are over 89 particle therapy facilities worldwide, with at least 41 others under construction. As of August 2020, there are 34 operational proton therapy centers in the United States. As of the end of 2015 more than 154,203 patients had been treated worldwide.
One hindrance to universal use of the proton in cancer treatment is the size and cost of the cyclotron or synchrotron equipment necessary. Several industrial teams are working on development of comparatively small accelerator systems to deliver the proton therapy to patients. Among the technologies being investigated are superconducting synchrocyclotrons (also known as FM Cyclotrons), ultra-compact synchrotrons, dielectric wall accelerators, and linear particle accelerators.
United States
Proton treatment centers in the United States (in chronological order of first treatment date) include:
The Indiana University Health Proton Therapy Center in Bloomington, Indiana opened in 2004 and ceased operations in 2014.
Outside the US
Australia
In July 2020, construction began for "SAHMRI 2", the second building for the South Australian Health and Medical Research Institute. The building will house the Australian Bragg Centre for Proton Therapy & Research, an addition to the largest health and biomedical precinct in the Southern Hemisphere, Adelaide's BioMed City. The proton therapy unit is being supplied by ProTom International, which will install its Radiance 330 proton therapy system, the same system used at Massachusetts General Hospital. When in full operation, it will have the ability to treat approximately 600-700 patients per year, with around half of these expected to be children and young adults. The facility is expected to be completed in late 2023, with its first patients treated in 2025. In 2024 the South Australian government expressed concerns about the delivery of the project.
India
Apollo Proton Cancer Centre (APCC) in Chennai, Tamil Nadu, a unit under Apollo Hospitals, is a Cancer specialty hospital. APCC is the only cancer hospital in India with Joint Commission International accreditation.
Israel
In January 2020, it was announced that a proton therapy center would be built in Ichilov Hospital, at the Tel Aviv Sourasky Medical Center. The project's construction was fully funded by donations. It will have two treatment rooms. According to a newspaper report in 2023, it should be ready in three to four years. The report also mentions that "Proton therapy for cancer treatment has arrived in Israel and the Middle East with a clinical trial underway that sees Hadassah Medical Center partnering with P-Cure, an Israeli company that has developed a unique system designed to fit into existing hospital settings".
Spain
In October 2021, the Amancio Ortega Foundation arranged with the Spanish government and several autonomous communities to donate 280 million euros to install ten proton accelerators in the public health system.
United Kingdom
In 2013 the British government announced that £250 million had been budgeted to establish two centers for advanced radiotherapy: The Christie NHS Foundation Trust (the Christie Hospital) in Manchester, which opened in 2018; and University College London Hospitals NHS Foundation Trust, which opened in 2021. These offer high-energy proton therapy, and other types of advanced radiotherapy, including intensity-modulated radiotherapy (IMRT) and image-guided radiotherapy (IGRT). In 2014, only low-energy proton therapy was available in the UK, at Clatterbridge Cancer Centre NHS Foundation Trust in Merseyside. But NHS England has paid to have suitable cases treated abroad, mostly in the US. Such cases rose from 18 in 2008 to 122 in 2013, 99 of whom were children. The cost to the National Health Service averaged ~£100,000 per case.
See also
Particle therapy
Charged particle therapy
Hadron
Microbeam
Fast neutron therapy
Boron neutron capture therapy
Linear energy transfer
Electromagnetic radiation and health
Dosimetry
Ionizing radiation
List of oncology-related terms
References
Further reading
External links
The Intrepid Proton-Man , educational comic books by Steve Englehart and Michael Jaszewski for pediatric patients
2019 BBC Horizon documentary
2019 Jove video by the University of Maryland School of Medicine explaining the treatment process: Proton Therapy Delivery and Its Clinical Application in Select Solid Tumor Malignancies
2019 The NHS Proton Beam Therapy Programme
Proton Therapy Collaborative Group PTCOG
Alliance for Proton Therapy
CARES Cancer Network
National Association for Proton Therapy
American Society for Radiation Oncology Model Policy – Proton Beam Therapy
Proton therapy – MedlinePlus Medical Encyclopedia
Proton Therapy
What is Proton Therapy
Medical physics
Radiation therapy procedures
Proton | Proton therapy | Physics | 6,418 |
44,849,849 | https://en.wikipedia.org/wiki/Tylopilus%20louisii | Tylopilus louisii is a bolete fungus in the family Boletaceae. Described as new to science in 1964 by Belgian mycologist Paul Heinemann, it is found in the Republic of the Congo. It has a gray cap and stipe, and spores measuring 10–11.7 by 3.5–3.8 μm.
See also
List of North American boletes
References
External links
louisii
Fungi described in 1964
Fungi of Africa
Fungus species | Tylopilus louisii | Biology | 98 |
1,028,345 | https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Zaremba%20%28mathematician%29 | Stanisław Zaremba (3 October 1863 – 23 November 1942) was a Polish mathematician and engineer. His research in partial differential equations, applied mathematics and classical analysis, particularly on harmonic functions, gained him wide recognition. He was one of the mathematicians who contributed to the success of the Polish School of Mathematics through his teaching and organizational skills as well as through his research. Apart from his research works, Zaremba wrote many university textbooks and monographs.
He was a professor of the Jagiellonian University (since 1900), member of Academy of Learning (since 1903), co-founder and president of the Polish Mathematical Society (1919), and the first editor of the Annales de la Société Polonaise de Mathématique.
He should not be confused with his son Stanisław Krystyn Zaremba, also a mathematician.
Biography
Zaremba was born on 3 October 1863 in Romanówka, present-day Ukraine. The son of an engineer, he was educated at a grammar school in Saint Petersburg and studied at the Institute of Technology of the same city obtaining his diploma in engineering in 1886. The same year he left Saint Petersburg and went to Paris to study mathematics: he received his degree from the Sorbonne in 1889. He stayed in France until 1900, when he joined the faculty at the Jagiellonian University in Kraków. His years in France enabled him to establish a strong bridge between Polish mathematicians and those in France.
He died on 23 November 1942 in Kraków, during the German occupation of Poland.
Work
Research activity
Selected publication
See also
Kraków School of Mathematics
Mixed boundary condition
Notes
References
External links
19th-century Polish mathematicians
20th-century Polish mathematicians
Corresponding Members of the Russian Academy of Sciences (1917–1925)
Corresponding Members of the USSR Academy of Sciences
Members of the Lwów Scientific Society
Polish engineers
Mathematical analysts
University of Paris alumni
Academic staff of Jagiellonian University
1863 births
1942 deaths
Mathematicians from Austria-Hungary | Stanisław Zaremba (mathematician) | Mathematics | 402 |
25,967,148 | https://en.wikipedia.org/wiki/Brickellia%20veronicifolia | Brickellia veronicifolia is a North American species of plants in the family Asteraceae. It is widespread across much of Mexico, from Chihuahua to Oaxaca. In the United States, it is very rare, found only in the Chisos Mountains inside Big Bend National Park in Texas, and also in Otero County in New Mexico.
Brickellia veronicifolia is a shrub up to 90 cm (3 feet) tall. It produces large numbers of small, pale yellow or cream-colored flower heads.
Brickellia veronicifolia contains high amounts of essential oils, Germacrene D, a natural insecticide and the two flavonoids brickellin and eupatolitin.
References
External links
veronicifolia
Flora of Mexico
Plant toxin insecticides
Plants described in 1818
Flora of the South-Central United States
Flora without expected TNC conservation status | Brickellia veronicifolia | Chemistry | 180 |
30,011,620 | https://en.wikipedia.org/wiki/Video%20copy%20detection | Video copy detection is the process of detecting illegally copied videos by analyzing them and comparing them to original content.
The goal of this process is to protect a video creator's intellectual property.
History
Indyk et al. produced a video copy detection theory based on the length of the film; however, it worked only for whole films without modifications. When applied to short clips of a video, Indyk et al.'s technique does not detect that the clip is a copy.
Later, Oostveen et al. introduced the concept of a fingerprint, or hash function, that creates a unique signature of the video based on its contents. This fingerprint is based on the length of the video and the brightness, as determined by splitting it into a grid. The fingerprint cannot be used to recreate the original video because it describes only certain features of its respective video.
Subsequently, B. Coskun et al. presented two robust algorithms based on the discrete cosine transform.
Hampapur and Balle created an algorithm that builds a global description of a piece of video based on its motion, color, space, and length.
Attention then turned to the color levels of the image: Li et al. created an algorithm that examines the colors of a clip by building a binary signature derived from the histogram of every frame. This algorithm, however, returns inconsistent results in cases in which a logo is added to the video, because the insertion of the logo's color elements adds false information that can confuse the system.
Techniques
Watermarks
Watermarks are used to introduce an invisible signal into a video to ease the detection of illegal copies. This technique is widely used by photographers. Placing a watermark on a video such that it is easily seen by an audience allows the content creator to detect easily whether the image has been copied.
The limitation of watermarks is that if the original image is not watermarked, then it is not possible to know whether other images are copies.
Content-based signature
In this technique, a unique signature is created for the video on the basis of the video's content. Various video copy detection algorithms exist that use features of the video's content to assign the video a unique videohash. The fingerprint can be compared with other videohashes in a database.
This type of algorithm has a significant problem: if various aspects of the videos' contents are similar, it is difficult for an algorithm to determine whether the video in question is a copy of the original or merely similar to it. In such a case (e.g., two distinct news broadcasts), the algorithm can wrongly report the video in question as a copy, since news broadcasts often use similar banners and their presenters often sit in similar positions. Videos with very little frame-to-frame change over time are more vulnerable to hash collisions.
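A minimal sketch of the comparison step is shown below in Python: two content-based signatures, represented here as bit strings, are compared by normalized Hamming distance. The signature format and the 0.15 decision threshold are illustrative assumptions, not parameters from any published system.

def hamming_distance(sig_a, sig_b):
    assert len(sig_a) == len(sig_b)
    return sum(a != b for a, b in zip(sig_a, sig_b))

def looks_like_copy(sig_a, sig_b, threshold=0.15):
    """Flag a probable copy when the normalized distance is small."""
    return hamming_distance(sig_a, sig_b) / len(sig_a) <= threshold

original = "1011001110100101"
suspect = "1011001010100101"     # one bit differs, e.g. a re-encoded copy
print(looks_like_copy(original, suspect))    # True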
Algorithms
The following are some algorithms and techniques proposed for video copy detection.
Global Descriptors
Global temporal descriptor
In this algorithm, a global intensity is defined as the sum of all intensities of all pixels weighted along all the video. Thus, an identity for a video sample can be constructed on the basis of the length of the video and the pixel intensities throughout.
The global intensity a(t) is defined as a weighted sum of pixel intensities, where k is the weighting of the image, I is the image, and N is the number of pixels in the image.
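Since the exact formula is not reproduced here, the following Python sketch shows one plausible reading of the descriptor: for each frame, a weighted average of its pixel intensities, normalized by the number of pixels N. The per-frame weighting k is left as a caller-supplied array.

import numpy as np

def global_intensity(frames, weights):
    """frames: (T, H, W) grayscale video; weights: (T,) per-frame weighting k."""
    n_pixels = frames.shape[1] * frames.shape[2]
    return weights * frames.reshape(frames.shape[0], -1).sum(axis=1) / n_pixels

video = np.random.randint(0, 256, size=(30, 120, 160)).astype(float)
a_t = global_intensity(video, np.ones(30))
print(a_t.shape)    # (30,): one value per frame, forming the temporal signature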
Global ordinal measurement descriptor
In this algorithm, the video is divided into N blocks, sorted by gray level. It is then possible to create a vector describing the average gray level of each block. With these average levels, a new vector S(t), the video's signature, is created.
To compare two videos, the algorithm defines a D(t) representing the similarity between both.
The value returned by D(t) helps determine whether the video in question is a copy.
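The Python sketch below illustrates the ordinal idea: each frame is split into a small grid of blocks, the blocks' mean gray levels are replaced by their rank order, and two signatures are compared with a mean absolute difference. The 3x3 grid and the exact form of the comparison are assumptions of this sketch, not the published definitions of S(t) and D(t).

import numpy as np

def ordinal_signature(frame, grid=3):
    """Rank order of the mean gray level of each block in a grid x grid split."""
    h, w = frame.shape
    means = [frame[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))

def dissimilarity(sig_a, sig_b):
    return float(np.abs(sig_a - sig_b).mean())

frame = np.random.randint(0, 256, size=(90, 120)).astype(float)
copy = frame * 1.1 + 5        # a brightness-shifted copy keeps the same ranks
print(dissimilarity(ordinal_signature(frame), ordinal_signature(copy)))   # 0.0

Because the signature depends only on the ordering of block averages, uniform brightness changes leave it unchanged, which is what makes ordinal measures robust to re-encoding.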
Ordinal and Temporal Descriptors
This technique was proposed by L. Chen and F. Stentiford. A measurement of dissimilarity is made by combining the two aforementioned algorithms, the global temporal descriptor and the global ordinal measurement descriptor, in time and space.
TMK+PDQF
In 2019, Facebook open sourced TMK+PDQF, part of a suite of tools used at Facebook to detect harmful content. It generates a signature of a whole video, and can easily handle changes in format or added watermarks, but is less tolerant of cropping or clipping.
Local Descriptors
AJ
Described by A. Joly et al., this algorithm is an improvement of the Harris interest point detector. This technique suggests that in many videos a significant number of frames are almost identical, so it is more efficient to test not every frame but just those depicting a significant amount of motion.
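A rough Python illustration of that idea is given below, using OpenCV's Harris corner detector on frames that differ noticeably from their predecessor. The motion threshold and Harris parameters are arbitrary illustrative values, and this sketch is not the actual algorithm of Joly et al.

import cv2
import numpy as np

def keyframe_interest_points(frames, motion_threshold=15.0):
    """frames: iterable of same-sized grayscale uint8 images."""
    prev, results = None, []
    for idx, frame in enumerate(frames):
        current = frame.astype(float)
        if prev is not None and np.abs(current - prev).mean() > motion_threshold:
            # Harris response with blockSize=2, ksize=3, k=0.04
            response = cv2.cornerHarris(np.float32(frame), 2, 3, 0.04)
            points = np.argwhere(response > 0.01 * response.max())
            results.append((idx, points))   # keep interest points of "moving" frames only
        prev = current
    return results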
ViCopT
ViCopT uses the interest points from each image to define a signature of the whole video. In every image, the algorithm identifies and defines two parts: the background, a set of static elements along a temporal sequence, and the motion, persistent points changing positions throughout the video.
Space Time Interest Points (STIP)
This algorithm was developed by I. Laptev and T. Lindeberg. It uses interest points in both space and time to define the video signature, and creates a 34-dimensional vector that stores this signature.
Algorithm showcasing
Algorithms for video copy detection are in use today. In 2007, an evaluation showcase known as Multimedia Understanding Through Semantics, Computation and Learning (MUSCLE) tested video copy detection algorithms on video samples ranging from home video recordings to TV show segments, from one minute to one hour in length.
References
MUSCLE (Multimedia Understanding through Semantics, Computation and Learning)
IBM - Exploring Computer vision group
Multimedia
Video | Video copy detection | Technology | 1,196 |
68,649,892 | https://en.wikipedia.org/wiki/Cantharellus%20enelensis | Cantharellus enelensis is one of several species of chanterelle native to North America, discovered in 2017 as a new member of the C. cibarius complex. It forms mycorrhizal relationships and is an edible mushroom.
Taxonomy
Cantharellus enelensis was discovered in 2017 as a new member of the C. cibarius complex. It was temporarily categorized as having a conservation status of 'least concern'.
Etymology
The name enelensis is in honour of the Canadian province of Newfoundland and Labrador where the mushroom was first discovered.
Description
Cantharellus enelensis has decurrent ridges that are forked, a cap that is from in diameter and can be infundibuliform in older specimens. The flesh is firm and white to pale yellow on the inside and can smell fruity, often described as apricot smelling. The foot of the mushroom gets wider closer to the cap.
Similar species
Members of the C. cibarius complex in eastern North America are difficult to distinguish from one another without special techniques such as DNA sequencing and microscopic examinations.
Cantharellus enelensis can be distinguished from C. cibarius by its hymenophore, which is more orange in C. enelensis.
Distribution and habitat
C. enelensis is one of 40 varieties of Cantharellus that grow in North America, to which it is native.
Chanterelles identified with DNA sequencing as C. enelensis have been found in Newfoundland, Quebec, Michigan and Illinois but there is evidence to suggest it is widespread in North American conifer forests. It is the most commonly found chanterelle in Newfoundland.
Ecology
Cantharellus enelensis forms mycorrhizal relationships and grows in conifer forests with well drained, moist, sandy soil.
The mushrooms bear fruiting bodies between July and September, with the peak in August.
Uses
Cantharellus enelensis is an edible mushroom.
References
Fungi of North America
Fungi described in 2017
Edible fungi
enelensis
Fungus species | Cantharellus enelensis | Biology | 418 |
7,389,796 | https://en.wikipedia.org/wiki/HMG-CoA | β-Hydroxy β-methylglutaryl-CoA (HMG-CoA), also known as 3-hydroxy-3-methylglutaryl coenzyme A, is an intermediate in the mevalonate and ketogenesis pathways. It is formed from acetyl CoA and acetoacetyl CoA by HMG-CoA synthase. The research of Minor J. Coon and Bimal Kumar Bachhawat in the 1950s at University of Illinois led to its discovery.
HMG-CoA is a metabolic intermediate in the metabolism of the branched-chain amino acids, which include leucine, isoleucine, and valine. Its immediate precursors are β-methylglutaconyl-CoA (MG-CoA) and β-hydroxy β-methylbutyryl-CoA (HMB-CoA).
HMG-CoA reductase catalyzes the conversion of HMG-CoA to mevalonic acid, a necessary step in the biosynthesis of cholesterol.
Biosynthesis
Mevalonate pathway
Mevalonate synthesis begins with the beta-ketothiolase-catalyzed Claisen condensation of two molecules of acetyl-CoA to produce acetoacetyl CoA. The following reaction involves the joining of acetyl-CoA and acetoacetyl-CoA to form HMG-CoA, a process catalyzed by HMG-CoA synthase.
In the final step of mevalonate biosynthesis, HMG-CoA reductase, an NADPH-dependent oxidoreductase, catalyzes the conversion of HMG-CoA into mevalonate, which is the primary regulatory point in this pathway. Mevalonate serves as the precursor to isoprenoid groups that are incorporated into a wide variety of end-products, including cholesterol in humans.
Ketogenesis pathway
HMG-CoA lyase breaks HMG-CoA into acetyl-CoA and acetoacetate.
See also
Steroidogenic enzyme
References
Thioesters of coenzyme A | HMG-CoA | Chemistry,Biology | 433 |
360,033 | https://en.wikipedia.org/wiki/Ordnance%20Survey%20National%20Grid | The Ordnance Survey National Grid reference system (OSGB), also known as British National Grid (BNG), is a system of geographic grid references, distinct from latitude and longitude, whereby any location in Great Britain can be described in terms of its distance from the origin (0, 0), which lies to the west of the Isles of Scilly.
The Ordnance Survey (OS) devised the national grid reference system, and it is heavily used in its survey data, and in maps based on those surveys, whether published by the Ordnance Survey or by commercial map producers. Grid references are also commonly quoted in other publications and data sources, such as guide books and government planning documents.
A number of different systems exist that can provide grid references for locations within the British Isles: this article describes the system created solely for Great Britain and its outlying islands (including the Isle of Man). The Irish grid reference system is a similar system created by the Ordnance Survey of Ireland and the Ordnance Survey of Northern Ireland for the island of Ireland. The Irish Transverse Mercator (ITM) coordinate reference system was adopted in 2001 and is now the preferred coordinate reference system across Ireland. ITM is based on the Universal Transverse Mercator coordinate system (UTM), used to provide grid references for worldwide locations, and this is the system commonly used for the Channel Islands. European-wide agencies also use UTM when mapping locations, or may use the Military Grid Reference System (MGRS), or variants of it.
Grid letters
The first letter of the British National Grid is derived from a larger set of 25 squares of size 500 km by 500 km, labelled A to Z, omitting one letter (I) (refer to the diagram below), previously used as a military grid. Four of these largest squares contain significant land area within Great Britain: S, T, N and H. The O square contains a tiny area of North Yorkshire, Beast Cliff at , almost all of which lies below mean high tide.
For the second letter, each 500 km square is subdivided into 25 squares of size 100 km by 100 km, each with a letter code from A to Z (again omitting I) starting with A in the north-west corner to Z in the south-east corner. These squares are outlined in light grey on the "100km squares" map, with those containing land lettered. The central (2° W) meridian is shown in red.
Grid digits
Within each square, eastings and northings from the south-west corner of the square are given numerically. For example, NH0325 means a 1 km square whose south-west corner is 3 km east and 25 km north from the south-west corner of square NH. A location can be indicated to varying resolutions numerically, usually from two digits in each coordinate (for a 1 km square) through to five (for a 1 m square); in each case the first half of the digits is for the first coordinate and the second half for the other. The most common usage is the six figure grid reference, employing three digits in each coordinate to determine a 100 m square. For example, the grid reference of the 100 m square containing the summit of Ben Nevis is NN 166 712. (Grid references may be written with or without spaces; e.g., also NN166712.) NN has an easting of 200 km and northing of 700 km, so the OSGB36 National Grid location for Ben Nevis is at 216600, 771200.
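A short Python sketch of the conversion described above is shown below; it maps a letter-pair grid reference to full numeric eastings and northings in metres. The function name is made up for this sketch, but the result can be checked against the Ben Nevis example in the text.

def osgb_gridref_to_en(gridref):
    """Convert e.g. 'NN166712' to (easting, northing) in metres.

    The two letters select a 100 km square (nested 5x5 grids of 500 km and
    100 km squares, letter I omitted); the remaining digits split evenly
    into easting and northing within that square.
    """
    gridref = gridref.replace(" ", "").upper()
    l1, l2 = (ord(c) - ord("A") for c in gridref[:2])
    if l1 > 7:
        l1 -= 1          # skip the unused letter I
    if l2 > 7:
        l2 -= 1
    e100k = ((l1 - 2) % 5) * 5 + (l2 % 5)        # 100 km easting index
    n100k = (19 - (l1 // 5) * 5) - (l2 // 5)     # 100 km northing index
    digits = gridref[2:]
    half = len(digits) // 2
    scale = 10 ** (5 - half)                     # pad to metre resolution
    easting = e100k * 100_000 + int(digits[:half]) * scale
    northing = n100k * 100_000 + int(digits[half:]) * scale
    return easting, northing

print(osgb_gridref_to_en("NN 166 712"))    # (216600, 771200), the Ben Nevis square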
All-numeric grid references
Grid references may also be quoted as a pair of numbers: eastings then northings in metres, measured from the southwest corner of the SV square. 13 digits may be required for locations in Orkney and further north. For example, the grid reference for Sullom Voe Oil Terminal in the Shetland islands may be given as or 439668,1175316.
Another, distinct, form of all-numeric grid reference is an abbreviated alphanumeric reference where the letters are simply omitted, e.g. 166712 for the summit of Ben Nevis. Unlike the numeric references described above, this abbreviated grid reference is incomplete; it gives the location relative to an OS 100×100 km square, but does not specify which square. It is often used informally when the context identifies the OS 2-letter square. For example, within the context of a location known to be on OS Landranger sheet 41 (which extends from NN000500 in the south-west to NN400900 in the north-east) the abbreviated grid reference 166712 is equivalent to NN166712. If working with more than one Landranger sheet, this may also be given as 41/166712.
Alternatively, sometimes numbers instead of the two-letter combinations are used for the 100×100 km squares. The numbering follows a grid index where the tens denote the progress from West to East and the units from South to North. In the north of Scotland, the numbering is modified: the 100 km square to the north of 39 is numbered N30; the square to the north of 49 is N40, etc.
Compatibility with related systems
The grid is based on the OSGB36 datum (Ordnance Survey Great Britain 1936, based on the Airy 1830 ellipsoid), and was introduced after the retriangulation of 1936–1962. It replaced the Cassini Grid which had previously been the standard projection for Ordnance Survey maps.
The Airy ellipsoid is a regional best fit for Britain; more modern mapping tends to use the GRS80 ellipsoid used by the Global Positioning System (the Airy ellipsoid assumes the Earth to be about 1 km smaller in diameter than the GRS80 ellipsoid, and to be slightly less flattened). The British maps adopt a transverse Mercator projection with an origin (the "true" origin) at 49° N, 2° W (an offshore point in the English Channel which lies between the island of Jersey and the French port of St. Malo). Over the Airy ellipsoid a straight line grid, the National Grid, is placed with a new false origin to eliminate negative numbers, creating a 700 km by 1300 km grid. This false origin is located south-west of the Isles of Scilly.
In order to minimize the overall scale error, a central-meridian scale factor of 0.9996012717 (approximately 2499/2500) is applied. This creates two lines, about 180 km east and west of the central meridian, along which the local scale factor equals 1, i.e. map scale is exact. Inside these lines the local scale factor is less than 1, reaching a minimum at the central meridian, where the map scale is about 0.04% too small; outside them the local scale factor is greater than 1, and is about 0.04% too large near the east and west coasts. Grid north and true north are aligned only on the central meridian (400 km easting) of the grid, which is 2° W (OSGB36) and approx. (WGS 84).
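As a rough check on those figures, the local scale factor of a transverse Mercator projection grows approximately quadratically with distance from the central meridian. The snippet below uses the standard first-order approximation k ≈ F0·(1 + (E − E0)²/(2R²)) with a mean Earth radius; it is a back-of-the-envelope illustration, not the full Redfearn series used by the Ordnance Survey.

```python
F0 = 0.9996012717      # scale factor on the central meridian
E0 = 400_000.0         # easting of the central meridian (m)
R  = 6_371_000.0       # mean Earth radius (m), adequate for a rough estimate

def approx_scale_factor(easting_m: float) -> float:
    """First-order transverse Mercator scale factor at a given easting."""
    d = easting_m - E0
    return F0 * (1.0 + d * d / (2.0 * R * R))

for e in (400_000, 220_000, 580_000, 655_000):
    print(e, round(approx_scale_factor(e), 6))
# ~0.999601 on the central meridian, ~1.0 about 180 km either side,
# and ~1.0004 (about 0.04% too large) near the east coast of East Anglia.
```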
A geodetic transformation between OSGB 36 and other terrestrial reference systems (like ITRF2000, ETRS89, or WGS 84) can become quite tedious if attempted manually. The most common transformation is called the Helmert datum transformation, which results in a typical 7 m error from true. The definitive transformation from ETRS89 that is published by the Ordnance Survey is called the National Grid Transformation OSTN15. This models the detailed distortions in the 1936–1962 retriangulation, and achieves backwards compatibility in grid coordinates to sub-metre accuracy.
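For readers who want a feel for what a seven-parameter Helmert transformation involves, the sketch below applies the small-angle form to geocentric Cartesian coordinates. The parameter values are the commonly quoted WGS 84 → OSGB36 set, good only to a few metres (which is why OSTN15 exists); treat both the numbers and the helper function as illustrative rather than definitive.

```python
import math

# Commonly quoted WGS 84 -> OSGB36 Helmert parameters (metre-level accuracy only).
TX, TY, TZ = -446.448, 125.157, -542.060   # translations in metres
RX, RY, RZ = -0.1502, -0.2470, -0.8421     # rotations in arc-seconds
S_PPM = 20.4894                            # scale change in parts per million

def helmert_wgs84_to_osgb36(x, y, z):
    """Apply the small-angle 7-parameter Helmert transform to geocentric XYZ (metres)."""
    rx, ry, rz = (math.radians(v / 3600.0) for v in (RX, RY, RZ))
    s = 1.0 + S_PPM * 1e-6
    xp = TX + s * ( x      - rz * y + ry * z)
    yp = TY + s * ( rz * x + y      - rx * z)
    zp = TZ + s * (-ry * x + rx * y + z     )
    return xp, yp, zp

# Example: an approximate geocentric point over southern Britain.
print(helmert_wgs84_to_osgb36(3_874_938.0, -116_219.0, 5_047_168.0))
```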
Datum shift between OSGB 36 and WGS 84
The difference between the coordinates on different datums varies from place to place. The longitude and latitude positions on OSGB 36 are the same as for WGS 84 at a point in the Atlantic Ocean well to the west of Great Britain. In Cornwall, the WGS 84 longitude lines are about 70 metres east of their OSGB 36 equivalents, this value rising gradually to about 120 m east on the east coast of East Anglia. The WGS 84 latitude lines are about 70 m south of the OSGB 36 lines in South Cornwall, the difference diminishing to zero in the Scottish Borders, and then increasing to about 50 m north on the north coast of Scotland. (If the lines are further east, then the longitude value of any given point is further west. Similarly, if the lines are further south, the values will give the point a more northerly latitude.) The smallest datum shift is on the west coast of Scotland and the greatest in Kent.
Datum shift between OSGB 36 and ED 50
These two datums are not both in general use in any one place, but for a point in the English Channel halfway between Dover and Calais, the ED50 longitude lines are about 20 m east of the OSGB36 equivalents, and the ED50 latitude lines are about 150 m south of the OSGB36 ones.
Summary parameters of the coordinate system
Datum: OSGB36
Map projection: Transverse Mercator projection using Redfearn series
True origin: 49°N, 2°W
False origin: 400 km west, 100 km north of True Origin
Scale factor: 0.9996012717
EPSG Code: EPSG:27700
Ellipsoid: Airy 1830
Semi-major axis a: 6,377,563.396 m
Semi-minor axis b: approximately 6,356,256.909 m
Flattening (derived constant): 1/299.3249646
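In practice, coordinate conversions are usually delegated to a library that implements the full projection (and the OSTN15 correction where available). A minimal sketch using the pyproj package is shown below; it assumes pyproj is installed, and the accuracy of the result depends on which transformation pipeline PROJ selects.

```python
from pyproj import Transformer

# WGS 84 geographic coordinates (EPSG:4326) to British National Grid (EPSG:27700).
to_bng = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)

lon, lat = -5.0035, 56.7969              # approximate summit of Ben Nevis
easting, northing = to_bng.transform(lon, lat)
print(round(easting), round(northing))   # close to the 216600, 771200 square quoted above
```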
See also
Ordnance Datum Newlyn
Irish grid reference system
Maidenhead Locator System
United States National Grid
World Geodetic System
Custom units of measure
Tetrad
Hectad
Myriad
Notes
References
External links
Ordnance Survey A guide to coordinate systems in Great Britain: An introduction to mapping coordinate systems and the use of GPS datasets with Ordnance Survey mapping; Version 3.6, 2020 [Retrieved 19 February 2022].
Ordnance Survey's Grid script: a brief introduction to the National Grid Reference; Version November 2011 [Retrieved 13 February 2014].
- Multiple-format co-ordinate transformer for Great Britain & Channel Islands
(JavaScript source code)
Web utility to find a UK grid reference
LatLong <> OS Grid Ref converts & presents in many formats, generates specific links to that location for several useful map web pages – 1840–present. LatLong WGS84 <> GB, Ireland (inc NI) and Channel Islands (30U) GR formats recognised. Distance measure for dog-leg routes & area calculations.
Open source dataset (in GeoPackage format) of the British National Grids at various resolutions, available for download from Ordnance Survey's GitHub.
Geography of the United Kingdom
Maps from Ordnance Survey
Geographic coordinate systems
Land surveying systems
Geodesy
Geocodes
Surveying of the United Kingdom | Ordnance Survey National Grid | Mathematics | 2,227 |
43,785,666 | https://en.wikipedia.org/wiki/Kinyoun%20stain | The Kinyoun method or Kinyoun stain (cold method), developed by Joseph J. Kinyoun, is a procedure used to stain acid-fast species of the bacterial genus Mycobacterium. It is a variation of a method developed by Robert Koch in 1882. Certain species of bacteria have a waxy lipid called mycolic acid in their cell walls, which allows them to be stained more effectively by an acid-fast stain than by a Gram stain. The unique ability of mycobacteria to resist decolorization by acid-alcohol is why they are termed acid-fast. The method involves the application of a primary stain (basic fuchsin), a decolorizer (acid-alcohol), and a counterstain (methylene blue). Unlike the Ziehl–Neelsen stain (Z-N stain), the Kinyoun method of staining does not require heating. In the Ziehl–Neelsen stain, heat acts as a physical mordant while phenol (the carbol of carbol fuchsin) acts as the chemical mordant.
Modification
The Kinyoun method can be modified as a weak acid fast stain, which uses 0.5–1.0% sulfuric acid instead of hydrochloric acid. The weak acid fast stain, in addition to staining Mycobacteria, will also stain organisms that are not able to maintain the carbol fuchsin after decolorizing with HCl, such as Nocardia species and Cryptosporidium.
See also
Auramine-rhodamine stain
References
Bacteriology
Staining
Microscopy
| Kinyoun stain | Chemistry,Biology | 344
119,596 | https://en.wikipedia.org/wiki/Elimination%20communication | Elimination communication (EC) is a practice in which a caregiver uses timing, signals, cues, and intuition to address an infant's need to eliminate waste. Caregivers try to recognize and respond to babies' bodily needs and enable them to urinate and defecate in an appropriate place (e.g. a toilet). Caregivers may use diapers (nappies) as a back-up in case of "misses" some or all of the time, or not at all. EC emphasizes communication between the caregiver and child, helping them both become more attuned to the child's innate rhythms and control of urination and defecation. The term "elimination communication" was inspired by traditional practices of diaperless baby care in less industrialized countries and hunter-gatherer cultures. Some practitioners of EC begin soon after birth, the optimum window being zero to four months in terms of helping the baby get in tune with their elimination needs, although it can be started with babies of any age. The practice can be done full-time, part-time, or just occasionally.
In the UK, baby-led potty training is a similar system for meeting babies' toileting needs. The main feature of the system is that care-givers 'hold babies out' or support them on a potty in order for them to void in an appropriate place outside their nappy. The method is typically started before the baby is six months old. Care-givers use a combination of timing, and observing babies' own signals, to decide when to hold them out. Some parents use the technique just occasionally, others as an alternative to full-time nappies, and some as a route to toilet independence.
Origins
Keeping babies clean and dry without diapers is standard practice in many cultures throughout the world. While this practice is only recently becoming known in industrialized societies, it remains the dominant method of baby hygiene in non-industrialized ones.
The terms elimination communication and natural infant hygiene were coined by Ingrid Bauer and are used interchangeably in her book, Diaper Free! The Gentle Wisdom of Natural Infant Hygiene (2001). Bauer had traveled to India and Africa, where she noticed that while most mothers carried their diaperless babies constantly, she saw no elimination "accidents" as would be expected in industrialized countries where babies wear diapers almost continuously from birth. Subsequently, she raised her own children with minimal use of diapers, and eventually began to share her approach with other mothers and caregivers—initially through Internet-based parenting support groups and eventually through her book and website.
Prior publications introducing Western parents to this practice include the booklet Conscious Toilet Training, by Laurie Boucke (1979), the book Trickle Treat: Diaperless Infant Toilet Training Method, by Laurie Boucke (1991), a pamphlet entitled Elimination Timing, by Natec (1994), and the more extensive Infant Potty Training: A Gentle and Primeval Method Adapted to Modern Living, by Laurie Boucke (2000). Boucke was influenced by an Indian friend who taught her how mothers in India care for babies without diapers, and she adapted the method to fit her Western lifestyle. Boucke later co-produced an in-depth DVD entitled Potty Whispering: The Gentle Practice of Infant Potty Training (2006) and co-authored articles for medical journals.
While the terms elimination communication and infant potty training have become synonymous, many caregivers who practice EC do not consider it to be a form of "training", per se. "Nappyless technique" is a term some mothers in the UK prefer to describe babies who use a potty. EC is viewed primarily as a way to meet the baby's present needs and to enhance attachment and communication in general. In that sense, EC is often likened to breastfeeding. "Toilet mastery is, of course, an inevitable consequence", writes Bauer, "Yet it's no more the goal of Natural Infant Hygiene than weaning is the goal of breastfeeding" (2001, p. 217).
Today, one often hears the terms "natural infant hygiene", "infant potty training", "nappy-free", "infant pottying" and "elimination communication" used synonymously.
The method of holding a baby out to trigger the reflex to urinate and defecate has presumably been used by mothers since the first Homo sapiens. The English doctor Pye Henry Chavasse suggested in his 1839 book "Advice to a Mother on the Management of her Children", that the baby should be held out over a pot at least a dozen times a day at 3 months old; if this were done, there need be no more nappies at 4 months. In 1912 Edward Mansfield Brockbank advised that babies should be supported on the pot from two months old.
The practice was commonplace up until the fifties, when Dr Spock's method of delaying the start of toilet training until 18 months became popular. This, coupled with the advent of disposable nappies, meant that the practice of baby-led potty training diminished. Although some mothers still used the method, learning about it either by accident or intuition, or from knowledge passed on by grandparents, it dropped out of common usage. From the nineties until the present, official UK health advice has suggested that it is counter-productive to start toilet training before 18 months, and the standard advice is to wait until children show signs of "readiness" (but not before 18–24 months of age). Amongst some health professionals there is a received wisdom that babies have no bladder or bowel control under two years.
Benefits
According to The Diaper-Free Baby by Christine Gross-Loh, EC offers a wide range of advantages. Because EC lessens families' reliance on diapers, this helps reduce the environmental impact of discarding disposable diapers and/or washing cloth diapers, and saves families hundreds, if not thousands, of dollars in disposable diapers. EC babies are free from the problems of conventional diapering such as diaper rash, diaper change battles, not being able to explore diapered parts of their bodies, vulnerability to urinary tract infections, and potentially delayed or difficult potty training. Gross-Loh also reports that EC promotes a unique and wonderful bond between babies and caregivers.
Parents report that the squat or "potty" position that they tend to use to hold their baby in order to go is very comfortable for the baby. The position aligns the digestive tract and supports relaxation, as well as contraction of the pelvic floor muscles, helping babies to release their urine or stool and simultaneously build control of the urinary and anal sphincter muscles. This especially helps babies who are suffering from mild constipation. Many babies find defecating to be an unsettling process, especially as they transition to solid food. With EC, parents hold their infant in a supportive position as they defecate into the toilet or a suitable receptacle, offering loving emotional and physical support during this process.
Parents report benefits in three areas: for baby, parents and the environment. These are some of the main advantages:
For baby:
reduces incidence of nappy rash; more comfortable; encourages communication; helps relieve constipation and wind.
For parents:
cheaper; less washing; more hygienic; fewer leaking nappies; more confidence; greater bond with baby; another tool to soothe a crying baby.
For the environment:
reduces number of soiled and wet nappies sent to landfill; less washing of cloth nappies; solid waste treated via sewerage instead of going to landfill where it releases methane.
Criticisms
Conventional potty training advice is based on late 1990s research by Thomas Berry Brazelton, who introduced the "readiness approach". He writes that "widespread acceptance of readiness and independent toileting have since been supported by clinical experience and resulted in agreement that a child should be ready to participate in toilet training at approximately 18 months of age and be trained completely by 2 or 3 years old." He argues that trying to toilet train before this age could be coercive and therefore psychologically damaging. Brazelton acknowledges that elimination communication is both possible and desirable, but he believes it is difficult to perform in Western society. In particular he cites a mother's return to work as an obstacle to elimination communication. He also argues that parents should not be made to feel guilty if they cannot communicate with their babies in this way. His neutrality on the subject has been questioned since he has worked as a consultant for Procter & Gamble, manufacturer of Pampers diapers, including appearing in a Pampers commercial.
Components
The main components of EC are timing, signals, cueing, and intuition.
Timing
Timing refers to identifying the infant's natural timing of elimination. Newborns tend to urinate every 10–20 minutes, sometimes very regularly, which makes timing extremely useful. Older babies may still be very regular, or may vary in timing based on when they have last eaten or slept. As infants get older, the time between eliminations will increase. By six months, it is not uncommon for babies to go an hour or more without urinating while awake (babies, like adults, rarely urinate during a deep sleep). Timing varies radically for defecation, as some infants may have several bowel movements a day, while others may only have one every few days. Parents report that some babies as young as three months will appear to hold all their bowel movements until they are held in a particular squat position, as long as this is offered regularly enough. Parents also offer the potty at various times according to routine, e.g. after a feed, after waking, just before bath or bed. In the West, infant potty training historically relied on timing as the main method of training.
Signals
Signals are the baby's way of informing a caregiver of an elimination need. Some babies signal very clearly from the beginning, while others may have very subtle signals, or no signal at all. These signals vary widely from one infant to another. Examples include a certain facial expression, a particular cry, squirming, or a sudden unexplained fussiness, among others. Signals are most effectively observed if the baby is left without diapers for the first couple of weeks of starting elimination communication. Babies who are nursing will often start unlatching and relatching repeatedly as they feed when they need to eliminate. For defecation, many babies may grunt or pass gas as a signal. As babies get older their signals become more conscious and babies often point to, or look at, a caregiver or potty to indicate need. Older babies can learn a gesture or baby sign for "potty". Later they may learn a word as part of their early acquisition of language.
Cueing
Cueing consists of the caregiver making a particular sound or other cue when they provide the baby with an opportunity to eliminate. At first, the caregiver can make the cueing sound when the baby is eliminating to develop an association between the sound and the action. Once the association is established, the cue can be used to indicate to the baby that he or she is in an appropriate potty place. This is especially useful for infants who may not recognize public toilets or unfamiliar receptacles as a "potty". Common sound cues include "psss psss" for urination and "hmm hmm" (grunting) for defecation. Older babies (late starters) may respond better to more word-like cues. Cues do not have to be auditory; the act of sitting on the potty itself or being held in position can serve as a cue, or the sign language sign for "toilet" can be a cue. The American Sign Language sign for "toilet" involves forming a hand into the letter "T" (a fist with the thumb inserted between the first and middle fingers) and shaking the hand side to side from the wrist.
Intuition
Intuition refers to a caregiver's unprompted thought that the baby may need to eliminate. Although much intuition may simply be subconscious awareness of timing or signals, many parents who practice EC find it an extremely reliable component.
Baby-led potty training method
Babies are born with a primitive reflex which causes them to empty their bladder when parents remove the nappy and hold them in a squat position. By holding their baby out regularly, parents capitalise on this reflex. They encourage the baby to go in this position, and soon the baby is conditioned to try to void when in this hold. Parents then offer the baby opportunities to go throughout the day. They can offer based on timing—either at convenient times for the parents, on a routine, or through learning what times the baby is likely to need to pass waste. They can also look out for signs that the baby is uncomfortable with a full bladder or bowel.
Once the baby has become accustomed to passing waste when held or on the potty, parents are able to adapt the method to suit their lifestyle. They can offer the potty just occasionally to help relieve an unsettled baby, or they can offer regularly throughout the day in order to drastically reduce the reliance on nappies.
When continued through to the baby's second year, the method is adapted to help them transition to complete toilet independence.
See also
Attachment parenting
Dunstan Baby Language
Infant potty training method
Open-crotch pants
Toilet training
References
Toilet training | Elimination communication | Biology | 2,777 |
1,819,888 | https://en.wikipedia.org/wiki/Wreck%20of%20the%20Old%2097 | The Wreck of the Old 97 was an American rail disaster involving the Southern Railway mail train, officially known as the Fast Mail (train number 97), while en route from Monroe, Virginia, to Spencer, North Carolina, on September 27, 1903. Travelling at an excessive speed in an attempt to maintain schedule, the train derailed at the Stillhouse Trestle near Danville, Virginia, where it careened off the side of the bridge, killing 11 on-board personnel and injuring seven others. The wreck inspired a famous railroad ballad, which was the focus of a copyright lawsuit and became seminal in the genre of country music.
Wreck
The wreck of Old 97, known as the Fast Mail, occurred when the engineer, 33-year-old Joseph Andrew ("Steve") Broady at the controls of Southern Railway 1102, was operating the train at high speed in order to stay on schedule and arrive at Spencer on time. The Fast Mail had a reputation for never being late. Locomotive 1102, a ten-wheeler 4-6-0 engine built by Baldwin Locomotive Works in Philadelphia, had rolled out of the factory in early 1903, less than a year before the wreck.
On the day of the accident, the Fast Mail was behind schedule when it left Washington, D.C., and was one hour late when it arrived in Monroe, Virginia. When the train arrived in Monroe its crew was switched, and when it left Monroe, there were 17 people on board. The train personnel included Broady, conductor John Blair, fireman A.C. Clapp, student fireman John Hodge (sometimes known as Dodge in other documents), and flagman James Robert Moody. Also aboard were various mail clerks including J.L. Thompson, Scott Chambers, Daniel Flory, Paul Argenbright, Lewis Spies, Frank Brooks, Percival Indermauer, Charles Reams, Jennings Dunlap, Napoleon Maupin, J. H. Thompson, and W. R. Pinckney, an express messenger. When the train pulled into Lynchburg, Wentworth Armistead, a safe locker, boarded the train, bringing the number of on-board personnel to 18. (A safe locker is a railroad employee entrusted with the combination to a train's safe.)
At Monroe, Broady was instructed to get the Fast Mail to Spencer, distant, on time. The scheduled running time from Monroe to Spencer was four hours, fifteen minutes – an average speed of approximately . In order to make up the one hour delay, the train's average speed would have to be at least . Broady was ordered to maintain speed through Franklin Junction in Gretna, an intermediate stop normally made during the run.
The route between Monroe and Spencer ran through rolling terrain, and there were numerous danger points due to the combination of grades and tight radius curves. Signs were posted to warn engineers to watch their speed. However, in his quest to stay on time, Broady rapidly descended a heavy grade that ended at the Stillhouse Trestle, which spanned Stillhouse Branch. He was unable to sufficiently reduce speed as he approached the curve leading into the trestle, causing the entire train to derail and plunge into the ravine below. The flames that erupted afterwards consumed the splintered debris of the wooden cars, and it was very hard for the local fire department to extinguish the blaze. The investigation that followed was greatly hampered by the fire and the few witnesses to the incident.
Of the eighteen men on board, eleven men died (nine on impact) and seven were injured. Among the deceased were the conductor Blair, engineer Broady and flagman Moody. The bodies of both firemen were recovered, but they were mangled so badly they were unrecognizable.
Several survivors of the wreck believed they stayed alive because they jumped from the train just before the fatal plunge. Among the survivors were mail clerks Thompson and Harris. Pinckney, the express messenger, also survived the wreck, went home to Charlotte, North Carolina, and immediately resigned after his life-changing experience. Two other survivors, Jennings J. Dunlap and M.C. Maupin, did not resign, although they transferred to new departments. Dunlap went to work on a train that ran between Washington and Charlotte (the Southern Railway line from Monroe to Spencer was then, and remains today, a segment of the now-Norfolk Southern line between Washington and Charlotte), while Maupin worked at the Charlotte Union station.
Only a fraction of the mail had survived, including a large case filled with canaries that managed to escape and fly to safety. Engine 1102 was recovered and repaired, and it went on to perform further duties until it was dismantled in July 1935.
The day after the wreck, vice-president Finley made a speech in which he said: "The train consisted of two postal cars, one express and one baggage car for the storage of mail.... Eyewitnesses said the train was approaching the trestle at speeds of an hour." The Southern Railway placed blame for the wreck on Broady, disavowing that he had been ordered to run as fast as possible to maintain the schedule. The railroad also claimed he descended the grade leading to the trestle at a speed of more than . Several eyewitnesses to the wreck, however, stated that the speed was probably around . In all likelihood, the railroad was at least partially to blame, as it had a lucrative contract with the U.S. Post Office to haul mail, and the contract included a penalty clause for each minute the train was late into Spencer. It is probably safe to conclude that the engineers piloting the Fast Mail were always under pressure to stay on time so that the railroad would not be penalized for late mail delivery.
The Fast Mail was in another fatal accident earlier in the year of 1903. On Monday, April 13, the train left Washington at 8:00 am, en route to New Orleans. As the train approached Lexington, North Carolina, it collided with a boulder on the track, causing the train to derail and ditch, killing the engineer and fireman. The locomotive that was pulling the train is unknown.
Ballad
The disaster inspired several songs, the most famous being the ballad first recorded commercially by Virginia musicians G. B. Grayson and Henry Whitter. Vernon Dalhart's version was released in 1924 (Victor Record no. 19427), sometimes cited as the first million-selling country music release in the American record industry, with Carson Robison playing guitar and Dalhart playing harmonica. Since then, "Wreck of the Old 97" has been recorded by numerous artists, including Dalhart himself in 1924 under the name Sid Turner on Perfect 12147, The Statler Brothers (feat. Johnny Cash), Charlie Louvin of The Louvin Brothers, Flatt and Scruggs, Woody Guthrie, Pete Seeger, Johnny Cash, Hank Snow, Hank Williams III, Patrick Sky, Nine Pound Hammer, Roy Acuff, Boxcar Willie, Lonnie Donegan, The Seekers, Ernest Stoneman & Kahle Brewer, Carolyn Hester, Hank Thompson, John Mellencamp, Pink Anderson, Lowgold, Chuck Ragan, and David Holt. The music was often accompanied by a banjo and a fiddle, while the lyrics were either sung, crooned, yodeled, whistled, hummed, recited, or chanted. The song rivaled that of "Casey Jones" for being the number one railroading song of all time. The actress Ann Dvorak sings two verses of the ballad in the 1932 movie Scarface.
The ballad was sung to the tune of "The Ship That Never Returned", written by Henry Clay Work in 1865. Originally, the lyrics were attributed to Fred Jackson Lewey and co-author Charles Weston Noell. Lewey claimed to have written the song the day after the accident, in which his cousin Albion Clapp was one of the two firemen killed. Lewey worked in a cotton mill that was at the base of the trestle, and also claimed to be on the scene of the accident pulling the victims from the wreckage. Musician Henry Whitter subsequently polished the original, altering the lyrics, resulting in the version performed by Dalhart.
In 1927 it was claimed that the author of "Wreck of the Old 97" was local resident David Graves George, who was one of the first on the scene. George was a brakeman and telegraph operator who also happened to be a singer. Witnessing the tragedy inspired him to write the ballad. After the 1924 recording by the Victor Talking Machine Company was released, George filed a claim for ownership. On March 11, 1933, Judge John Boyd proclaimed that George was the author of the ballad. The Victor Talking Machine Company was forced to pay George $65,000 of the profits from about five million records sold. Victor appealed three times. The first two times, the courts ruled in favor of George. The third time the court of appeals ruled in favor of Victor Talking Machines. George appealed to the Supreme Court of the United States, but the court ruled that George had filed his appeal too late and dismissed it, thereby granting Victor ownership of the ballad.
"Wreck of the Old 97" is 777 in the Roud Folk Song Index.
The ballad clearly places the blame for the wreck on the railroad company for pressuring Steve Broady to exceed a safe speed limit, for the lyric (on the Dalhart recording) begins, "Well, they handed him his orders in Monroe, Virginia, saying, 'Steve, you're way behind time; this is not 38, it is Old 97, you must put her into Spencer on time.'"
See also
List of train songs
Old 97's, a band named after the ballad
References
Further reading
External links
Photos of the accident scene Roanoke Public Libraries
Mark Daniel Jones – Witness of Wreck
Wreck of Old 97 Historic Marker
Wreck of the Old 97
Biography of Fred Jackson Lewey
Joseph Broady at Find a grave
1903 in Virginia
Bridge disasters in the United States
Danville, Virginia
American folk songs
Railway accidents and incidents in Virginia
Railway accidents in 1903
Transportation disasters in Virginia
Songs based on American history
Accidents and incidents involving Southern Railway (U.S.)
Train wreck ballads
September 1903
Songs about trains | Wreck of the Old 97 | Technology | 2,090 |
58,568,168 | https://en.wikipedia.org/wiki/Klaus%20Immelmann | Klaus Immelmann (May 6, 1935 – September 8, 1987) was a German ethologist and ornithologist. He undertook field research in Africa and Australia, and published works in German and English. His second and third visit to South Africa were in 1969 and 1971. Immelmann became a permanent executive member of the International Ornithological Union, and its president in 1986. He is the author of Australian finches in bush and aviary (1965), regarded as the first standard text on the subject, and a study of comparative biology of estrildid finches in Australia. His first visit to Australia was in the late 1950s, shortly after receiving his PhD. His 1976 book Einführung in die Verhaltensforschung, which brought together much of his scientific thinking, was translated into English in 1980.
References
1935 births
1987 deaths
German ornithologists
Ethologists
Ornithological writers
20th-century German zoologists | Klaus Immelmann | Biology | 201 |
17,945 | https://en.wikipedia.org/wiki/Lie%20group | In mathematics, a Lie group (pronounced ) is a group that is also a differentiable manifold, such that group multiplication and taking inverses are both differentiable.
A manifold is a space that locally resembles Euclidean space, whereas groups define the abstract concept of a binary operation along with the additional properties it must have to be thought of as a "transformation" in the abstract sense, for instance multiplication and the taking of inverses (to allow division), or equivalently, the concept of addition and subtraction. Combining these two ideas, one obtains a continuous group where multiplying points and their inverses is continuous. If the multiplication and taking of inverses are smooth (differentiable) as well, one obtains a Lie group.
Lie groups provide a natural model for the concept of continuous symmetry, a celebrated example of which is the circle group. Rotating a circle is an example of a continuous symmetry. For any rotation of the circle, there exists the same symmetry, and concatenation of such rotations makes them into the circle group, an archetypal example of a Lie group. Lie groups are widely used in many parts of modern mathematics and physics.
Lie groups were first found by studying matrix subgroups contained in GL(n, R) or GL(n, C), the groups of n × n invertible matrices over R or C. These are now called the classical groups, as the concept has been extended far beyond these origins. Lie groups are named after Norwegian mathematician Sophus Lie (1842–1899), who laid the foundations of the theory of continuous transformation groups. Lie's original motivation for introducing Lie groups was to model the continuous symmetries of differential equations, in much the same way that finite groups are used in Galois theory to model the discrete symmetries of algebraic equations.
History
Sophus Lie considered the winter of 1873–1874 as the birth date of his theory of continuous groups. Thomas Hawkins, however, suggests that it was "Lie's prodigious research activity during the four-year period from the fall of 1869 to the fall of 1873" that led to the theory's creation. Some of Lie's early ideas were developed in close collaboration with Felix Klein. Lie met with Klein every day from October 1869 through 1872: in Berlin from the end of October 1869 to the end of February 1870, and in Paris, Göttingen and Erlangen in the subsequent two years. Lie stated that all of the principal results were obtained by 1884. But during the 1870s all his papers (except the very first note) were published in Norwegian journals, which impeded recognition of the work throughout the rest of Europe. In 1884 a young German mathematician, Friedrich Engel, came to work with Lie on a systematic treatise to expose his theory of continuous groups. From this effort resulted the three-volume Theorie der Transformationsgruppen, published in 1888, 1890, and 1893. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse.
Lie's ideas did not stand in isolation from the rest of mathematics. In fact, his interest in the geometry of differential equations was first motivated by the work of Carl Gustav Jacobi, on the theory of partial differential equations of first order and on the equations of classical mechanics. Much of Jacobi's work was published posthumously in the 1860s, generating enormous interest in France and Germany. Lie's idée fixe was to develop a theory of symmetries of differential equations that would accomplish for them what Évariste Galois had done for algebraic equations: namely, to classify them in terms of group theory. Lie and other mathematicians showed that the most important equations for special functions and orthogonal polynomials tend to arise from group theoretical symmetries. In Lie's early work, the idea was to construct a theory of continuous groups, to complement the theory of discrete groups that had developed in the theory of modular forms, in the hands of Felix Klein and Henri Poincaré. The initial application that Lie had in mind was to the theory of differential equations. On the model of Galois theory and polynomial equations, the driving conception was of a theory capable of unifying, by the study of symmetry, the whole area of ordinary differential equations. However, the hope that Lie theory would unify the entire field of ordinary differential equations was not fulfilled. Symmetry methods for ODEs continue to be studied, but do not dominate the subject. There is a differential Galois theory, but it was developed by others, such as Picard and Vessiot, and it provides a theory of quadratures, the indefinite integrals required to express solutions.
Additional impetus to consider continuous groups came from ideas of Bernhard Riemann, on the foundations of geometry, and their further development in the hands of Klein. Thus three major themes in 19th century mathematics were combined by Lie in creating his new theory:
The idea of symmetry, as exemplified by Galois through the algebraic notion of a group;
Geometric theory and the explicit solutions of differential equations of mechanics, worked out by Poisson and Jacobi;
The new understanding of geometry that emerged in the works of Plücker, Möbius, Grassmann and others, and culminated in Riemann's revolutionary vision of the subject.
Although today Sophus Lie is rightfully recognized as the creator of the theory of continuous groups, a major stride in the development of their structure theory, which was to have a profound influence on subsequent development of mathematics, was made by Wilhelm Killing, who in 1888 published the first paper in a series entitled Die Zusammensetzung der stetigen endlichen Transformationsgruppen (The composition of continuous finite transformation groups). The work of Killing, later refined and generalized by Élie Cartan, led to classification of semisimple Lie algebras, Cartan's theory of symmetric spaces, and Hermann Weyl's description of representations of compact and semisimple Lie groups using highest weights.
In 1900 David Hilbert challenged Lie theorists with his Fifth Problem presented at the International Congress of Mathematicians in Paris.
Weyl brought the early period of the development of the theory of Lie groups to fruition, for not only did he classify irreducible representations of semisimple Lie groups and connect the theory of groups with quantum mechanics, but he also put Lie's theory itself on firmer footing by clearly enunciating the distinction between Lie's infinitesimal groups (i.e., Lie algebras) and the Lie groups proper, and began investigations of topology of Lie groups. The theory of Lie groups was systematically reworked in modern mathematical language in a monograph by Claude Chevalley.
Overview
Lie groups are smooth differentiable manifolds and as such can be studied using differential calculus, in contrast with the case of more general topological groups. One of the key ideas in the theory of Lie groups is to replace the global object, the group, with its local or linearized version, which Lie himself called its "infinitesimal group" and which has since become known as its Lie algebra.
Lie groups play an enormous role in modern geometry, on several different levels. Felix Klein argued in his Erlangen program that one can consider various "geometries" by specifying an appropriate transformation group that leaves certain geometric properties invariant. Thus Euclidean geometry corresponds to the choice of the group E(3) of distance-preserving transformations of the Euclidean space , conformal geometry corresponds to enlarging the group to the conformal group, whereas in projective geometry one is interested in the properties invariant under the projective group. This idea later led to the notion of a G-structure, where G is a Lie group of "local" symmetries of a manifold.
Lie groups (and their associated Lie algebras) play a major role in modern physics, with the Lie group typically playing the role of a symmetry of a physical system. Here, the representations of the Lie group (or of its Lie algebra) are especially important. Representation theory is used extensively in particle physics. Groups whose representations are of particular importance include the rotation group SO(3) (or its double cover SU(2)), the special unitary group SU(3) and the Poincaré group.
On a "global" level, whenever a Lie group acts on a geometric object, such as a Riemannian or a symplectic manifold, this action provides a measure of rigidity and yields a rich algebraic structure. The presence of continuous symmetries expressed via a Lie group action on a manifold places strong constraints on its geometry and facilitates analysis on the manifold. Linear actions of Lie groups are especially important, and are studied in representation theory.
In the 1940s–1950s, Ellis Kolchin, Armand Borel, and Claude Chevalley realised that many foundational results concerning Lie groups can be developed completely algebraically, giving rise to the theory of algebraic groups defined over an arbitrary field. This insight opened new possibilities in pure algebra, by providing a uniform construction for most finite simple groups, as well as in algebraic geometry. The theory of automorphic forms, an important branch of modern number theory, deals extensively with analogues of Lie groups over adele rings; p-adic Lie groups play an important role, via their connections with Galois representations in number theory.
Definitions and examples
A real Lie group is a group that is also a finite-dimensional real smooth manifold, in which the group operations of multiplication and inversion are smooth maps. Smoothness of the group multiplication
μ : G × G → G,  μ(x, y) = xy
means that μ is a smooth mapping of the product manifold G × G into G. The two requirements can be combined into the single requirement that the mapping
(x, y) ↦ x−1y
be a smooth mapping of the product manifold G × G into G.
First examples
The 2×2 real invertible matrices form a group under multiplication, called the general linear group of degree 2 and denoted by GL(2, R): it consists of the 2×2 matrices with entries a, b, c, d satisfying ad − bc ≠ 0. This is a four-dimensional noncompact real Lie group; identifying a matrix with its four entries, it is an open subset of R4. This group is disconnected; it has two connected components corresponding to the positive and negative values of the determinant.
The rotation matrices form a subgroup of GL(2, R), denoted by SO(2). It is a Lie group in its own right: specifically, a one-dimensional compact connected Lie group which is diffeomorphic to the circle. Using the rotation angle φ as a parameter, this group can be parametrized by the matrices with first row (cos φ, −sin φ) and second row (sin φ, cos φ). Addition of the angles corresponds to multiplication of the elements of SO(2), and taking the opposite angle corresponds to inversion. Thus both multiplication and inversion are differentiable maps (a numerical check of this parametrization appears after this list).
The affine group of one dimension is a two-dimensional matrix Lie group, consisting of 2×2 real, upper-triangular matrices, with the first diagonal entry being positive and the second diagonal entry being 1. Thus, the group consists of the matrices with first row (a, b) and second row (0, 1), where a > 0 and b is real.
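As promised above, here is a quick numerical check of the rotation-group parametrization (a throwaway illustration, not part of any standard library): composing rotations adds their angles, and inversion negates the angle.

```python
import numpy as np

def rot(theta: float) -> np.ndarray:
    """Rotation matrix in SO(2) for the angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

a, b = 0.7, 1.9
assert np.allclose(rot(a) @ rot(b), rot(a + b))      # multiplication adds angles
assert np.allclose(np.linalg.inv(rot(a)), rot(-a))   # inversion negates the angle
assert np.allclose(rot(a).T @ rot(a), np.eye(2))     # rotation matrices are orthogonal
print("SO(2) group law verified numerically")
```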
Non-example
We now present an example of a group with an uncountable number of elements that is not a Lie group under a certain topology. The group given by
H = {(exp(2πiθ), exp(2πiaθ)) : θ ∈ R},
with a a fixed irrational number, is a subgroup of the torus T2 that is not a Lie group when given the subspace topology. If we take any small neighborhood U of a point h in H, for example, the portion of H in U is disconnected. The group H winds repeatedly around the torus without ever reaching a previous point of the spiral and thus forms a dense subgroup of T2.
The group H can, however, be given a different topology, in which the distance between two points h1, h2 is defined as the length of the shortest path in the group H joining h1 to h2. In this topology, H is identified homeomorphically with the real line by identifying each element with the number θ in the definition of H. With this topology, H is just the group of real numbers under addition and is therefore a Lie group.
The group H is an example of a "Lie subgroup" of a Lie group that is not closed. See the discussion below of Lie subgroups in the section on basic concepts.
Matrix Lie groups
Let GL(n, C) denote the group of n × n invertible matrices with entries in C. Any closed subgroup of GL(n, C) is a Lie group; Lie groups of this sort are called matrix Lie groups. Since most of the interesting examples of Lie groups can be realized as matrix Lie groups, some textbooks restrict attention to this class, including those of Hall, Rossmann, and Stillwell.
Restricting attention to matrix Lie groups simplifies the definition of the Lie algebra and the exponential map. The following are standard examples of matrix Lie groups.
The special linear groups over R and C, SL(n, R) and SL(n, C), consisting of n × n matrices with determinant one and entries in R or C
The unitary groups and special unitary groups, U(n) and SU(n), consisting of n × n complex matrices satisfying U*U = I (and also det(U) = 1 in the case of SU(n))
The orthogonal groups and special orthogonal groups, O(n) and SO(n), consisting of n × n real matrices satisfying RTR = I (and also det(R) = 1 in the case of SO(n))
All of the preceding examples fall under the heading of the classical groups.
Related concepts
A complex Lie group is defined in the same way using complex manifolds rather than real ones (example: SL(2, C)), and holomorphic maps. Similarly, using an alternate metric completion of Q, one can define a p-adic Lie group over the p-adic numbers, a topological group which is also an analytic p-adic manifold, such that the group operations are analytic. In particular, each point has a p-adic neighborhood.
Hilbert's fifth problem asked whether replacing differentiable manifolds with topological or analytic ones can yield new examples. The answer to this question turned out to be negative: in 1952, Gleason, Montgomery and Zippin showed that if G is a topological manifold with continuous group operations, then there exists exactly one analytic structure on G which turns it into a Lie group (see also Hilbert–Smith conjecture). If the underlying manifold is allowed to be infinite-dimensional (for example, a Hilbert manifold), then one arrives at the notion of an infinite-dimensional Lie group. It is possible to define analogues of many Lie groups over finite fields, and these give most of the examples of finite simple groups.
The language of category theory provides a concise definition for Lie groups: a Lie group is a group object in the category of smooth manifolds. This is important, because it allows generalization of the notion of a Lie group to Lie supergroups. This categorical point of view leads also to a different generalization of Lie groups, namely Lie groupoids, which are groupoid objects in the category of smooth manifolds with a further requirement.
Topological definition
A Lie group can be defined as a (Hausdorff) topological group that, near the identity element, looks like a transformation group, with no reference to differentiable manifolds. First, we define an immersely linear Lie group to be a subgroup G of the general linear group GL(n, R) such that
for some neighborhood V of the identity element e in G, the topology on V is the subspace topology of GL(n, R) and V is closed in GL(n, R).
G has at most countably many connected components.
(For example, a closed subgroup of GL(n, R); that is, a matrix Lie group satisfies the above conditions.)
Then a Lie group is defined as a topological group that (1) is locally isomorphic near the identities to an immersely linear Lie group and (2) has at most countably many connected components. Showing the topological definition is equivalent to the usual one is technical (and the beginning readers should skip the following) but is done roughly as follows:
Given a Lie group G in the usual manifold sense, the Lie group–Lie algebra correspondence (or a version of Lie's third theorem) constructs an immersed Lie subgroup G′ of a general linear group such that G and G′ share the same Lie algebra; thus, they are locally isomorphic. Hence, G satisfies the above topological definition.
Conversely, let G be a topological group that is a Lie group in the above topological sense and choose an immersely linear Lie group G′ that is locally isomorphic to G. Then, by a version of the closed subgroup theorem, G′ is a real-analytic manifold and then, through the local isomorphism, G acquires a structure of a manifold near the identity element. One then shows that the group law on G can be given by formal power series; so the group operations are real-analytic and G itself is a real-analytic manifold.
The topological definition implies the statement that if two Lie groups are isomorphic as topological groups, then they are isomorphic as Lie groups. In fact, it states the general principle that, to a large extent, the topology of a Lie group together with the group law determines the geometry of the group.
More examples of Lie groups
Lie groups occur in abundance throughout mathematics and physics. Matrix groups or algebraic groups are (roughly) groups of matrices (for example, orthogonal and symplectic groups), and these give most of the more common examples of Lie groups.
Dimensions one and two
The only connected Lie groups with dimension one are the real line R (with the group operation being addition) and the circle group S1 of complex numbers with absolute value one (with the group operation being multiplication). The latter group is often denoted as U(1), the group of 1 × 1 unitary matrices.
In two dimensions, if we restrict attention to simply connected groups, then they are classified by their Lie algebras. There are (up to isomorphism) only two Lie algebras of dimension two. The associated simply connected Lie groups are R2 (with the group operation being vector addition) and the affine group in dimension one, described in the previous subsection under "first examples".
Additional examples
The group SU(2) is the group of 2 × 2 unitary matrices with determinant 1. Topologically, SU(2) is the 3-sphere S3; as a group, it may be identified with the group of unit quaternions (see the numerical sketch after this list).
The Heisenberg group is a connected nilpotent Lie group of dimension 3, playing a key role in quantum mechanics.
The Lorentz group is a 6-dimensional Lie group of linear isometries of the Minkowski space.
The Poincaré group is a 10-dimensional Lie group of affine isometries of the Minkowski space.
The exceptional Lie groups of types G2, F4, E6, E7, E8 have dimensions 14, 52, 78, 133, and 248. Along with the A–B–C–D series of simple Lie groups, the exceptional groups complete the list of simple Lie groups.
The symplectic group Sp(2n, R) consists of all 2n × 2n matrices preserving a symplectic form on R2n. It is a connected Lie group of dimension n(2n + 1).
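The identification of SU(2) with the unit quaternions mentioned above can be made concrete. In the sketch below (an illustration written for this article; the helper name su2_from_quaternion is hypothetical), a unit quaternion a + bi + cj + dk is sent to a 2 × 2 complex matrix, and unitarity and unit determinant are checked numerically.

```python
import numpy as np

def su2_from_quaternion(a, b, c, d):
    """Map a unit quaternion a + bi + cj + dk to a matrix in SU(2)."""
    return np.array([[ a + 1j * b,  c + 1j * d],
                     [-c + 1j * d,  a - 1j * b]])

q = np.array([0.5, -0.1, 0.7, 0.2])
q = q / np.linalg.norm(q)                 # normalize to a unit quaternion
U = su2_from_quaternion(*q)

assert np.allclose(U.conj().T @ U, np.eye(2))   # U is unitary
assert np.isclose(np.linalg.det(U), 1.0)        # determinant 1
print("unit quaternion mapped to an SU(2) matrix")
```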
Constructions
There are several standard ways to form new Lie groups from old ones:
The product of two Lie groups is a Lie group.
Any topologically closed subgroup of a Lie group is a Lie group. This is known as the closed subgroup theorem or Cartan's theorem.
The quotient of a Lie group by a closed normal subgroup is a Lie group.
The universal cover of a connected Lie group is a Lie group. For example, the group R is the universal cover of the circle group S1. In fact any covering of a differentiable manifold is also a differentiable manifold, but by specifying universal cover, one guarantees a group structure (compatible with its other structures).
Related notions
Some examples of groups that are not Lie groups (except in the trivial sense that any group having at most countably many elements can be viewed as a 0-dimensional Lie group, with the discrete topology), are:
Infinite-dimensional groups, such as the additive group of an infinite-dimensional real vector space, or the space C∞(M, G) of smooth functions from a manifold M to a Lie group G. These are not Lie groups as they are not finite-dimensional manifolds.
Some totally disconnected groups, such as the Galois group of an infinite extension of fields, or the additive group of the p-adic numbers. These are not Lie groups because their underlying spaces are not real manifolds. (Some of these groups are "p-adic Lie groups".) In general, only topological groups having similar local properties to Rn for some positive integer n can be Lie groups (of course they must also have a differentiable structure).
Basic concepts
The Lie algebra associated with a Lie group
To every Lie group we can associate a Lie algebra whose underlying vector space is the tangent space of the Lie group at the identity element and which completely captures the local structure of the group. Informally we can think of elements of the Lie algebra as elements of the group that are "infinitesimally close" to the identity, and the Lie bracket of the Lie algebra is related to the commutator of two such infinitesimal elements. Before giving the abstract definition we give a few examples:
The Lie algebra of the vector space Rn is just Rn with the Lie bracket given by [A, B] = 0. (In general the Lie bracket of a connected Lie group is always 0 if and only if the Lie group is abelian.)
The Lie algebra of the general linear group GL(n, C) of invertible matrices is the vector space M(n, C) of square matrices with the Lie bracket given by [A, B] = AB − BA.
If G is a closed subgroup of GL(n, C) then the Lie algebra of G can be thought of informally as the matrices m of M(n, C) such that 1 + εm is in G, where ε is an infinitesimal positive number with ε2 = 0 (of course, no such real number ε exists). For example, the orthogonal group O(n, R) consists of matrices A with AAT = 1, so the Lie algebra consists of the matrices m with (1 + εm)(1 + εm)T = 1, which is equivalent to m + mT = 0 because ε2 = 0.
The preceding description can be made more rigorous as follows. The Lie algebra of a closed subgroup G of GL(n, C) may be computed as
Lie(G) = {X ∈ M(n, C) : exp(tX) ∈ G for all t in R},
where exp(tX) is defined using the matrix exponential. It can then be shown that the Lie algebra of G is a real vector space that is closed under the bracket operation, [X, Y] = XY − YX.
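A numerical sketch of this characterization for the orthogonal group (purely illustrative, using SciPy's matrix exponential) checks that exponentiating a skew-symmetric matrix produces an orthogonal matrix with determinant 1, and that the bracket of two skew-symmetric matrices is again skew-symmetric:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_skew(n: int) -> np.ndarray:
    """A random element of so(n): a matrix X with X + X^T = 0."""
    A = rng.standard_normal((n, n))
    return A - A.T

X, Y = random_skew(3), random_skew(3)

for t in (0.1, 1.0, 2.5):
    Q = expm(t * X)                          # exp(tX) should lie in SO(3)
    assert np.allclose(Q.T @ Q, np.eye(3))   # orthogonal
    assert np.isclose(np.linalg.det(Q), 1.0) # determinant 1

bracket = X @ Y - Y @ X                      # [X, Y] = XY - YX
assert np.allclose(bracket + bracket.T, np.zeros((3, 3)))   # closed under the bracket
print("so(3) exponentiates into SO(3) and is closed under the bracket")
```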
The concrete definition given above for matrix groups is easy to work with, but has some minor problems: to use it we first need to represent a Lie group as a group of matrices, but not all Lie groups can be represented in this way, and it is not even obvious that the Lie algebra is independent of the representation we use. To get around these problems we give
the general definition of the Lie algebra of a Lie group (in 4 steps):
Vector fields on any smooth manifold M can be thought of as derivations X of the ring of smooth functions on the manifold, and therefore form a Lie algebra under the Lie bracket [X, Y] = XY − YX, because the Lie bracket of any two derivations is a derivation.
If G is any group acting smoothly on the manifold M, then it acts on the vector fields, and the vector space of vector fields fixed by the group is closed under the Lie bracket and therefore also forms a Lie algebra.
We apply this construction to the case when the manifold M is the underlying space of a Lie group G, with G acting on G = M by left translations Lg(h) = gh. This shows that the space of left invariant vector fields (vector fields satisfying Lg*Xh = Xgh for every h in G, where Lg* denotes the differential of Lg) on a Lie group is a Lie algebra under the Lie bracket of vector fields.
Any tangent vector at the identity of a Lie group can be extended to a left invariant vector field by left translating the tangent vector to other points of the manifold. Specifically, the left invariant extension of an element v of the tangent space at the identity is the vector field defined by v^g = Lg*v. This identifies the tangent space TeG at the identity with the space of left invariant vector fields, and therefore makes the tangent space at the identity into a Lie algebra, called the Lie algebra of G, usually denoted by a Fraktur 𝔤. Thus the Lie bracket on 𝔤 is given explicitly by [v, w] = [v^, w^]e.
This Lie algebra is finite-dimensional and it has the same dimension as the manifold G. The Lie algebra of G determines G up to "local isomorphism", where two Lie groups are called locally isomorphic if they look the same near the identity element.
Problems about Lie groups are often solved by first solving the corresponding problem for the Lie algebras, and the result for groups then usually follows easily.
For example, simple Lie groups are usually classified by first classifying the corresponding Lie algebras.
We could also define a Lie algebra structure on Te using right invariant vector fields instead of left invariant vector fields. This leads to the same Lie algebra, because the inverse map on G can be used to identify left invariant vector fields with right invariant vector fields, and acts as −1 on the tangent space Te.
The Lie algebra structure on Te can also be described as follows:
the commutator operation
(x, y) → xyx−1y−1
on G × G sends (e, e) to e, so its derivative yields a bilinear operation on TeG. This bilinear operation is actually the zero map, but the second derivative, under the proper identification of tangent spaces, yields an operation that satisfies the axioms of a Lie bracket, and it is equal to twice the one defined through left-invariant vector fields.
Homomorphisms and isomorphisms
If G and H are Lie groups, then a Lie group homomorphism f : G → H is a smooth group homomorphism. In the case of complex Lie groups, such a homomorphism is required to be a holomorphic map. However, these requirements are a bit stringent; every continuous homomorphism between real Lie groups turns out to be (real) analytic.
The composition of two Lie homomorphisms is again a homomorphism, and the class of all Lie groups, together with these morphisms, forms a category. Moreover, every Lie group homomorphism induces a homomorphism between the corresponding Lie algebras. Let φ : G → H be a Lie group homomorphism and let φ* be its derivative at the identity. If we identify the Lie algebras of G and H with their tangent spaces at the identity elements, then φ* is a map between the corresponding Lie algebras, φ* : 𝔤 → 𝔥,
which turns out to be a Lie algebra homomorphism (meaning that it is a linear map which preserves the Lie bracket). In the language of category theory, we then have a covariant functor from the category of Lie groups to the category of Lie algebras which sends a Lie group to its Lie algebra and a Lie group homomorphism to its derivative at the identity.
Two Lie groups are called isomorphic if there exists a bijective homomorphism between them whose inverse is also a Lie group homomorphism. Equivalently, it is a diffeomorphism which is also a group homomorphism. Observe that, by the above, a continuous homomorphism from a Lie group to a Lie group is an isomorphism of Lie groups if and only if it is bijective.
Lie group versus Lie algebra isomorphisms
Isomorphic Lie groups necessarily have isomorphic Lie algebras; it is then reasonable to ask how isomorphism classes of Lie groups relate to isomorphism classes of Lie algebras.
The first result in this direction is Lie's third theorem, which states that every finite-dimensional, real Lie algebra is the Lie algebra of some (linear) Lie group. One way to prove Lie's third theorem is to use Ado's theorem, which says every finite-dimensional real Lie algebra is isomorphic to a matrix Lie algebra. Meanwhile, for every finite-dimensional matrix Lie algebra, there is a linear group (matrix Lie group) with this algebra as its Lie algebra.
On the other hand, Lie groups with isomorphic Lie algebras need not be isomorphic. Furthermore, this result remains true even if we assume the groups are connected. To put it differently, the global structure of a Lie group is not determined by its Lie algebra; for example, if Z is any discrete subgroup of the center of G then G and G/Z have the same Lie algebra (see the table of Lie groups for examples). An example of importance in physics are the groups SU(2) and SO(3). These two groups have isomorphic Lie algebras, but the groups themselves are not isomorphic, because SU(2) is simply connected but SO(3) is not.
On the other hand, if we require that the Lie group be simply connected, then the global structure is determined by its Lie algebra: two simply connected Lie groups with isomorphic Lie algebras are isomorphic. (See the next subsection for more information about simply connected Lie groups.) In light of Lie's third theorem, we may therefore say that there is a one-to-one correspondence between isomorphism classes of finite-dimensional real Lie algebras and isomorphism classes of simply connected Lie groups.
Simply connected Lie groups
A Lie group G is said to be simply connected if every loop in G can be shrunk continuously to a point in G. This notion is important because of the following result that has simple connectedness as a hypothesis:
Theorem: Suppose G and H are Lie groups with Lie algebras 𝔤 and 𝔥 and that f: 𝔤 → 𝔥 is a Lie algebra homomorphism. If G is simply connected, then there is a unique Lie group homomorphism φ: G → H such that φ∗ = f, where φ∗ is the differential of φ at the identity.
Lie's third theorem says that every finite-dimensional real Lie algebra is the Lie algebra of a Lie group. It follows from Lie's third theorem and the preceding result that every finite-dimensional real Lie algebra is the Lie algebra of a unique simply connected Lie group.
An example of a simply connected group is the special unitary group SU(2), which as a manifold is the 3-sphere. The rotation group SO(3), on the other hand, is not simply connected. (See Topology of SO(3).) The failure of SO(3) to be simply connected is intimately connected to the distinction between integer spin and half-integer spin in quantum mechanics. Other examples of simply connected Lie groups include the special unitary group SU(n), the spin group (double cover of rotation group) Spin(n) for n ≥ 3, and the compact symplectic group Sp(n).
Methods for determining whether a Lie group is simply connected or not are discussed in the article on fundamental groups of Lie groups.
Exponential map
The exponential map from the Lie algebra 𝔤𝔩(n, ℝ) of the general linear group GL(n, ℝ) to GL(n, ℝ) is defined by the matrix exponential, given by the usual power series:
exp(X) = 1 + X + X²/2! + X³/3! + ⋯ for matrices X. If G is a closed subgroup of GL(n, ℝ), then the exponential map takes the Lie algebra of G into G; thus, we have an exponential map for all matrix groups. Every element of G that is sufficiently close to the identity is the exponential of a matrix in the Lie algebra.
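As a worked instance (standard, and independent of the discussion above), exponentiating a skew-symmetric 2 × 2 matrix yields a rotation, so exp carries the Lie algebra 𝔰𝔬(2) onto SO(2):

% The powers of the matrix below cycle with period 4, so the series sums in closed form.
X = \theta \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}
\quad\Longrightarrow\quad
\exp(X) = \sum_{k \ge 0} \frac{X^{k}}{k!}
        = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.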
The definition above is easy to use, but it is not defined for Lie groups that are not matrix groups, and it is not clear that the exponential map of a Lie group does not depend on its representation as a matrix group. We can solve both problems using a more abstract definition of the exponential map that works for all Lie groups, as follows.
For each vector X in the Lie algebra 𝔤 of G (i.e., the tangent space to G at the identity), one proves that there is a unique one-parameter subgroup c: ℝ → G such that c′(0) = X. Saying that c is a one-parameter subgroup means simply that c is a smooth map into G and that
c(s + t) = c(s) c(t) for all s and t. The operation on the right hand side is the group multiplication in G. The formal similarity of this formula with the one valid for the exponential function justifies the definition exp(X) = c(1).
This is called the exponential map, and it maps the Lie algebra 𝔤 into the Lie group G. It provides a diffeomorphism between a neighborhood of 0 in 𝔤 and a neighborhood of e in G. This exponential map is a generalization of the exponential function for real numbers (because ℝ is the Lie algebra of the Lie group of positive real numbers with multiplication), for complex numbers (because ℂ is the Lie algebra of the Lie group of non-zero complex numbers with multiplication) and for matrices (because M(n, ℝ) with the regular commutator is the Lie algebra of the Lie group GL(n, ℝ) of all invertible matrices).
Because the exponential map is surjective on some neighbourhood of e, it is common to call elements of the Lie algebra infinitesimal generators of the group G. The subgroup of G generated by the image of the exponential map is the identity component of G.
The exponential map and the Lie algebra determine the local group structure of every connected Lie group, because of the Baker–Campbell–Hausdorff formula: there exists a neighborhood U of the zero element of 𝔤, such that for X, Y in U we have
exp(X) exp(Y) = exp(X + Y + ½[X, Y] + 1/12 [X, [X, Y]] − 1/12 [Y, [X, Y]] − ⋯),
where the omitted terms are known and involve Lie brackets of four or more elements. In case X and Y commute, this formula reduces to the familiar exponential law exp(X) exp(Y) = exp(X + Y).
The exponential map relates Lie group homomorphisms. That is, if φ: G → H is a Lie group homomorphism and φ∗: 𝔤 → 𝔥 the induced map on the corresponding Lie algebras, then for all x in 𝔤 we have φ(exp(x)) = exp(φ∗(x)).
In other words, the square formed by φ, φ∗ and the two exponential maps commutes,
(In short, exp is a natural transformation from the functor Lie to the identity functor on the category of Lie groups.)
The exponential map from the Lie algebra to the Lie group is not always onto, even if the group is connected (though it does map onto the Lie group for connected groups that are either compact or nilpotent). For example, the exponential map of SL(2, ℝ) is not surjective. Also, the exponential map is neither surjective nor injective for infinite-dimensional (see below) Lie groups modelled on a C∞ Fréchet space, even from an arbitrarily small neighborhood of 0 to the corresponding neighborhood of 1.
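A short argument for the SL(2, ℝ) example (a standard observation, sketched here with no claim to completeness): a real 2 × 2 matrix X with trace zero has eigenvalues ±μ with μ real, purely imaginary, or zero, so

% Possible traces of exp(X) for X in sl(2, R): 2cosh(mu) >= 2, 2cos(mu) >= -2, or 2.
\operatorname{tr} \exp(X) \ge -2
\quad\text{for all } X \in \mathfrak{sl}(2,\mathbb{R}),
\qquad\text{while}\qquad
\begin{pmatrix} -2 & 0 \\ 0 & -\tfrac12 \end{pmatrix} \in SL(2,\mathbb{R})
\ \text{has trace } -\tfrac52 < -2,

so this element is not the exponential of any element of the Lie algebra.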
Lie subgroup
A Lie subgroup H of a Lie group G is a Lie group that is a subset of G and such that the inclusion map from H to G is an injective immersion and group homomorphism. According to Cartan's theorem, a closed subgroup of G admits a unique smooth structure which makes it an embedded Lie subgroup of G—i.e. a Lie subgroup such that the inclusion map is a smooth embedding.
Examples of non-closed subgroups are plentiful; for example take G to be a torus of dimension 2 or greater, and let H be a one-parameter subgroup of irrational slope, i.e. one that winds around in G. Then there is a Lie group homomorphism φ: ℝ → G with image H. The closure of H will be a sub-torus in G.
The exponential map gives a one-to-one correspondence between the connected Lie subgroups of a connected Lie group G and the subalgebras of the Lie algebra of G. Typically, the subgroup corresponding to a subalgebra is not a closed subgroup. There is no criterion solely based on the structure of G which determines which subalgebras correspond to closed subgroups.
Representations
One important aspect of the study of Lie groups is their representations, that is, the way they can act (linearly) on vector spaces. In physics, Lie groups often encode the symmetries of a physical system. The way one makes use of this symmetry to help analyze the system is often through representation theory. Consider, for example, the time-independent Schrödinger equation in quantum mechanics, Ĥψ = Eψ. Assume the system in question has the rotation group SO(3) as a symmetry, meaning that the Hamiltonian operator Ĥ commutes with the action of SO(3) on the wave function ψ. (One important example of such a system is the hydrogen atom, which has a spherically symmetric potential.) This assumption does not necessarily mean that the solutions ψ are rotationally invariant functions. Rather, it means that the space of solutions to Ĥψ = Eψ is invariant under rotations (for each fixed value of E). This space, therefore, constitutes a representation of SO(3). These representations have been classified and the classification leads to a substantial simplification of the problem, essentially converting a three-dimensional partial differential equation to a one-dimensional ordinary differential equation.
The case of a connected compact Lie group K (including the just-mentioned case of SO(3)) is particularly tractable. In that case, every finite-dimensional representation of K decomposes as a direct sum of irreducible representations. The irreducible representations, in turn, were classified by Hermann Weyl. The classification is in terms of the "highest weight" of the representation. The classification is closely related to the classification of representations of a semisimple Lie algebra.
One can also study (in general infinite-dimensional) unitary representations of an arbitrary Lie group (not necessarily compact). For example, it is possible to give a relatively simple explicit description of the representations of the group SL(2, R) and the representations of the Poincaré group.
Classification
Lie groups may be thought of as smoothly varying families of symmetries. Examples of symmetries include rotation about an axis. What must be understood is the nature of 'small' transformations, for example, rotations through tiny angles, that link nearby transformations. The mathematical object capturing this structure is called a Lie algebra (Lie himself called them "infinitesimal groups"). It can be defined because Lie groups are smooth manifolds, so have tangent spaces at each point.
The Lie algebra of any compact Lie group (very roughly: one for which the symmetries form a bounded set) can be decomposed as a direct sum of an abelian Lie algebra and some number of simple ones. The structure of an abelian Lie algebra is mathematically uninteresting (since the Lie bracket is identically zero); the interest is in the simple summands. Hence the question arises: what are the simple Lie algebras of compact groups? It turns out that they mostly fall into four infinite families, the "classical Lie algebras" An, Bn, Cn and Dn, which have simple descriptions in terms of symmetries of Euclidean space. But there are also just five "exceptional Lie algebras" that do not fall into any of these families. E8 is the largest of these.
Lie groups are classified according to their algebraic properties (simple, semisimple, solvable, nilpotent, abelian), their connectedness (connected or simply connected) and their compactness.
A first key result is the Levi decomposition, which says that every simply connected Lie group is the semidirect product of a solvable normal subgroup and a semisimple subgroup.
Connected compact Lie groups are all known: they are finite central quotients of a product of copies of the circle group S1 and simple compact Lie groups (which correspond to connected Dynkin diagrams).
Any simply connected solvable Lie group is isomorphic to a closed subgroup of the group of invertible upper triangular matrices of some rank, and any finite-dimensional irreducible representation of such a group is 1-dimensional. Solvable groups are too messy to classify except in a few small dimensions.
Any simply connected nilpotent Lie group is isomorphic to a closed subgroup of the group of invertible upper triangular matrices with 1s on the diagonal of some rank, and any finite-dimensional irreducible representation of such a group is 1-dimensional. Like solvable groups, nilpotent groups are too messy to classify except in a few small dimensions.
Simple Lie groups are sometimes defined to be those that are simple as abstract groups, and sometimes defined to be connected Lie groups with a simple Lie algebra. For example, SL(2, R) is simple according to the second definition but not according to the first. They have all been classified (for either definition).
Semisimple Lie groups are Lie groups whose Lie algebra is a product of simple Lie algebras. They are central extensions of products of simple Lie groups.
The identity component of any Lie group is an open normal subgroup, and the quotient group is a discrete group. The universal cover of any connected Lie group is a simply connected Lie group, and conversely any connected Lie group is a quotient of a simply connected Lie group by a discrete normal subgroup of the center. Any Lie group G can be decomposed into discrete, simple, and abelian groups in a canonical way as follows. Write
Gcon for the connected component of the identity
Gsol for the largest connected normal solvable subgroup
Gnil for the largest connected normal nilpotent subgroup
so that we have a sequence of normal subgroups
1 ⊆ Gnil ⊆ Gsol ⊆ Gcon ⊆ G.
Then
G/Gcon is discrete
Gcon/Gsol is a central extension of a product of simple connected Lie groups.
Gsol/Gnil is abelian. A connected abelian Lie group is isomorphic to a product of copies of R and the circle group S1.
Gnil/1 is nilpotent, and therefore its ascending central series has all quotients abelian.
This can be used to reduce some problems about Lie groups (such as finding their unitary representations) to the same problems for connected simple groups and nilpotent and solvable subgroups of smaller dimension.
The diffeomorphism group of a Lie group acts transitively on the Lie group
Every Lie group is parallelizable, and hence an orientable manifold (there is a bundle isomorphism between its tangent bundle and the product of itself with the tangent space at the identity)
Infinite-dimensional Lie groups
Lie groups are often defined to be finite-dimensional, but there are many groups that resemble Lie groups, except for being infinite-dimensional. The simplest way to define infinite-dimensional Lie groups is to model them locally on Banach spaces (as opposed to Euclidean space in the finite-dimensional case), and in this case much of the basic theory is similar to that of finite-dimensional Lie groups. However this is inadequate for many applications, because many natural examples of infinite-dimensional Lie groups are not Banach manifolds. Instead one needs to define Lie groups modeled on more general locally convex topological vector spaces. In this case the relation between the Lie algebra and the Lie group becomes rather subtle, and several results about finite-dimensional Lie groups no longer hold.
The literature is not entirely uniform in its terminology as to exactly which properties of infinite-dimensional groups qualify the group for the prefix Lie in Lie group. On the Lie algebra side of affairs, things are simpler since the qualifying criteria for the prefix Lie in Lie algebra are purely algebraic. For example, an infinite-dimensional Lie algebra may or may not have a corresponding Lie group. That is, there may be a group corresponding to the Lie algebra, but it might not be nice enough to be called a Lie group, or the connection between the group and the Lie algebra might not be nice enough (for example, failure of the exponential map to be onto a neighborhood of the identity). It is the "nice enough" that is not universally defined.
Some of the examples that have been studied include:
The group of diffeomorphisms of a manifold. Quite a lot is known about the group of diffeomorphisms of the circle. Its Lie algebra is (more or less) the Witt algebra, whose central extension the Virasoro algebra (see Virasoro algebra from Witt algebra for a derivation of this fact) is the symmetry algebra of two-dimensional conformal field theory. Diffeomorphism groups of compact manifolds of larger dimension are regular Fréchet Lie groups; very little about their structure is known.
The diffeomorphism group of spacetime sometimes appears in attempts to quantize gravity.
The group of smooth maps from a manifold to a finite-dimensional Lie group is an example of a gauge group (with operation of pointwise multiplication), and is used in quantum field theory and Donaldson theory. If the manifold is a circle these are called loop groups, and have central extensions whose Lie algebras are (more or less) Kac–Moody algebras.
There are infinite-dimensional analogues of general linear groups, orthogonal groups, and so on. One important aspect is that these may have simpler topological properties: see for example Kuiper's theorem. In M-theory, for example, a 10-dimensional SU(N) gauge theory becomes an 11-dimensional theory when N becomes infinite.
See also
Adjoint representation of a Lie group
Haar measure
Homogeneous space
List of Lie group topics
Representations of Lie groups
Symmetry in quantum mechanics
Lie point symmetry, about the application of Lie groups to the study of differential equations.
Notes
Explanatory notes
Citations
References
External links
Journal of Lie Theory
Manifolds
Symmetry | Lie group | Physics,Mathematics | 9,098 |
7,743,448 | https://en.wikipedia.org/wiki/Diel%20vertical%20migration | Diel vertical migration (DVM), also known as diurnal vertical migration, is a pattern of movement used by some organisms, such as copepods, living in the ocean and in lakes. The adjective "diel" (IPA: , ) comes from , and refers to a 24-hour period. The migration occurs when organisms move up to the uppermost layer of the water at night and return to the bottom of the daylight zone of the oceans or to the dense, bottom layer of lakes during the day. DVM is important to the functioning of deep-sea food webs and the biologically-driven sequestration of carbon.
In terms of biomass, DVM is the largest synchronous migration in the world. It is not restricted to any one taxon, as examples are known from crustaceans (copepods), molluscs (squid), and ray-finned fishes (trout).
The phenomenon may be advantageous for a number of reasons, most typically to access food and to avoid predators.
It is triggered by various stimuli, the most prominent being changes in light-intensity, though evidence suggests that biological clocks are an underlying stimulus as well. While this mass migration is generally nocturnal, with the animals ascending from the depths at nightfall and descending at sunrise, the timing can alter in response to the different cues and stimuli that trigger it. Some unusual events impact vertical migration: DVM can be absent during the midnight sun in Arctic regions and vertical migration can occur suddenly during a solar eclipse. The phenomenon also demonstrates cloud-driven variations.
The common swift is an exception among birds in that it ascends and descends into high altitudes at dusk and dawn, similar to the vertical migration of aquatic lifeforms.
Discovery
The phenomenon was first documented by French naturalist Georges Cuvier in 1817. He noted that daphnia, a type of plankton, appeared and disappeared according to a diurnal pattern.
During World War II the U.S. Navy was taking sonar readings of the ocean when they discovered the deep scattering layer (DSL). While performing sound propagation experiments, the University of California's Division of War Research (UCDWR) consistently obtained echo-sounder results showing a distinct reverberation that they attributed to mid-water layer scattering agents. At the time, there was speculation that these readings might be attributed to enemy submarines.
Martin W. Johnson of Scripps Institution of Oceanography proposed a possible explanation. Working with the UCDWR, the Scripps researchers were able to confirm that the observed reverberations from the echo-sounder were in fact related to the diel vertical migration of marine animals. The DSL was caused by large, dense groupings of organisms, like zooplankton, that scattered the sonar to create a false or second bottom.
Once scientists started to do more research on what was causing the DSL, it was discovered that a large range of organisms were vertically migrating. Most types of plankton and some types of nekton have exhibited some type of vertical migration, although it is not always diel. These migrations may have substantial effects on mesopredators and apex predators by modulating the concentration and accessibility of their prey (e.g., impacts on the foraging behavior of pinnipeds).
Types of vertical migration
Diel
This is the most common form of vertical migration. Organisms migrate on a daily basis through different depths in the water column. Migration usually occurs between shallow surface waters of the epipelagic zone and deeper mesopelagic zone of the ocean or hypolimnion zone of lakes.
There are three recognized types of diel vertical migration:
Nocturnal vertical migration
In the most common form, nocturnal vertical migration, organisms ascend to the surface around dusk, remaining at the surface for the night, then migrating to depth again around dawn.
Reverse migration
Reverse migration occurs with organisms ascending to the surface at sunrise and remaining high in the water column throughout the day until descending with the setting sun.
Twilight diel vertical migration
Twilight diel vertical migration involves two separate migrations in a single 24-hour period, with the first ascent at dusk followed by a descent at midnight, often known as the "midnight sink". The second ascent to the surface and descent to the depths occurs at sunrise.
Seasonal
Organisms are found at different depths depending on what season it is. Seasonal changes to the environment may influence changes to migration patterns. Normal diel vertical migration occurs in species of foraminifera throughout the year in the polar regions; however, during the midnight sun, no differential light cues exist so they remain at the surface to feed upon the abundant phytoplankton, or to facilitate photosynthesis by their symbionts. This is not true for all species at all times, however. Zooplankton have been observed to resynchronize their migrations with the light of the moon during periods when the sun is not visible, and to stay in deeper waters when the moon is full.
Larger seasonally-migrating zooplankton such as overwintering copepods have been shown to transport a substantial amount of carbon to the deep ocean through a process known as the lipid pump. The lipid pump is a process that sequesters carbon (in the form of carbon-rich lipids) out of the surface ocean via the descent of copepods to the deep during autumn. These copepods accumulate these lipids during late summer and autumn before descending to the deep to overwinter in response to reduced primary production and harsh conditions at the surface. Furthermore, they rely on these lipid reserves that are metabolized for energy to survive through winter before ascending back to the surface in the spring, typically at the onset of a spring bloom.
Ontogenetic
Organisms spend different stages of their life cycle at different depths. There are often pronounced differences in migration patterns of adult female copepods, like Eurytemora affinis, which stay at depth with only a small upward movement at night, compared to the rest of their life stages, which migrate over 10 meters. In addition, there is a trend seen in other copepods, like Acartia spp., whose DVM amplitude increases with progressive life stages. This is possibly due to the increasing body size of the copepods and the associated risk from visual predators, like fish, as being larger makes them more noticeable.
Vertical migration stimuli
There are two different types of factors that are known to play a role in vertical migration, endogenous and exogenous. Endogenous factors originate from the organism itself; sex, age, size, biological rhythms, etc. Exogenous factors are environmental factors acting on the organism such as light, gravity, oxygen, temperature, predator-prey interactions, etc.
Endogenous factors
Endogenous rhythm
Biological clocks are an ancient and adaptive sense of time innate to an organism that allows them to anticipate environmental changes and cycles so they are able to physiologically and behaviorally respond to the expected change.
Evidence of circadian rhythms controlling DVM, metabolism, and even gene expression has been found in the copepod species Calanus finmarchicus. These copepods were shown to continue to exhibit these daily rhythms of vertical migration in the laboratory setting even in constant darkness, after being captured from an actively migrating wild population.
An experiment was done at the Scripps Institution of Oceanography which kept organisms in column tanks with light/dark cycles. A few days later the light was changed to a constant low light and the organisms still displayed diel vertical migration. This suggests that some type of internal response was causing the migration.
Clock gene expression
Many organisms, including the copepod C. finmarchicus, have genetic material devoted to maintaining their biological clock. The expression of these genes varies temporally with the expression significantly increasing following dawn and dusk at times of greatest vertical migration. These findings may indicate they work as a molecular stimulus for vertical migration.
Body size
The relative body size of an organism has been found to affect DVM. Bull trout express daily and seasonal vertical migrations, with smaller individuals always staying at a deeper layer than the larger individuals. This is most likely due to predation risk, but is dependent on the individual's own size, such that smaller animals may be more inclined to remain at depth.
Exogenous factors
Light
"Light is the most common and critical cue for vertical migration". However, as of 2010, there had not been sufficient research to determine which aspect of the light field was responsible. As of 2020, research has suggested that both light intensity and spectral composition of light are important.
Temperature
Organisms will migrate to a water depth with temperatures that best suit their needs; for example, some fish species migrate to warmer surface waters in order to aid digestion. Temperature changes can influence the swimming behavior of some copepods. In the presence of a strong thermocline some zooplankton may be inclined to pass through it and migrate to the surface waters, though this can be very variable even within a single species. The marine copepod Calanus finmarchicus will migrate through gradients with temperature differences of 6 °C over Georges Bank, whereas in the North Sea it is observed to remain below the gradient.
Salinity
Changes in salinity may prompt organisms to seek out more suitable waters if they are stenohaline or poorly equipped to regulate their osmotic pressure. Areas that are impacted by tidal cycles accompanied by salinity changes, estuaries for example, may see vertical migration in some species of zooplankton. Salinity has also been proposed as a factor that regulates the biogeochemical impact of diel vertical migration.
Pressure
Pressure changes have been found to produce differential responses that result in vertical migration. Many zooplankton will react to increased pressure with positive phototaxis, negative geotaxis, and/or a kinetic response that results in ascending in the water column. Likewise, when there is a decrease in pressure, zooplankton respond by passively sinking or actively swimming downward to descend in the water column.
Predator kairomones
A predator may release a chemical cue (a kairomone) that stimulates its prey to migrate vertically away from it. The introduction of a potential predator species, like a fish, to the habitat of diel vertically migrating zooplankton has been shown to influence the distribution patterns seen in their migration. For example, a study using Daphnia and a fish that was too small to prey on them (Lebistes reticulatus) found that with the introduction of the fish to the system the Daphnia remained below the thermocline, where the fish was not present. This demonstrates the effects of kairomones on Daphnia DVM.
Tidal patterns
Some organisms have been found to move with the tidal cycle. A study looked at the abundance of a species of small shrimp, Acetes sibogae, and found that they tended to move higher in the water column, and in greater numbers, during flood tides than during ebb tides at the mouth of an estuary. It is possible that varying factors associated with the tides, such as salinity or minute pressure changes, are the true trigger for the migration rather than the movement of the water itself.
Reasons for vertical migration
There are many hypotheses as to why organisms would vertically migrate, and several may be valid at any given time.
Predator avoidance
The universality of DVM suggests that there is some powerful common factor behind it.
The connection between available light and DVM has led researchers to theorize that organisms may stay in deeper, darker areas during the day to avoid being eaten by predators who depend on light to see and catch their prey. While the ocean's surface provides an abundance of food, it may be safest for many species to visit it at night.
Light-dependent predation by fish is a common pressure that causes DVM behavior in zooplankton and krill. A given body of water may be viewed as a risk gradient whereby the surface layers are riskier to reside in during the day than deep water, and as such promotes varied longevity among zooplankton that settle at different daytime depths. Indeed, in many instances it is advantageous for zooplankton to migrate to deep waters during the day to avoid predation and come up to the surface at night to feed. For example, the northern krill Meganyctiphanes norvegica undergoes diel vertical migration to avoid planktivorous fish.
Patterns among migrators seem to support the predator avoidance theory. Migrators will stay in groups as they migrate, a behavior that may protect individuals within the group from being eaten. Groups of smaller, harder to see animals begin their upward migration before larger, easier to see species, consistent with the idea that detectability by visual predators is a key issue. Small creatures may start to migrate upwards as much as 20 minutes before the sun sets, while large conspicuous fish may wait as long as 80 minutes after the sun goes down. Species that are better able to avoid predators also tend to migrate before those with poorer swimming capabilities. Squid are a primary prey for Risso's dolphins (Grampus griseus), an air-breathing predator, but one that relies on acoustic rather than visual information to hunt. Squid delay their migration pattern by about 40 minutes when dolphins are about, lessening risk by feeding later and for a shorter time.
Metabolic advantages
Another possibility is that predators can benefit from diel vertical migration as an energy conservation strategy. Studies indicate that male dogfish (Scyliorhinus canicula) follow a "hunt warm - rest cool" strategy that enables them to lower their daily energy costs. They remain in warm water only long enough to obtain food, and then return to cooler areas where their metabolism can operate more slowly.
Alternatively, organisms feeding on the bottom in cold water during the day may migrate to surface waters at night in order to digest their meal at warmer temperatures.
Dispersal and transport
Organisms can use deep and shallow currents to find food patches or to maintain a geographical location.
Avoid UV damage
Sunlight penetrates into the water column, and ultraviolet radiation can damage organisms, especially small ones such as microbes, that stay too close to the surface. This gives such organisms an incentive to avoid the near-surface layer during daylight.
Water transparency
A theory known as the “transparency-regulator hypothesis" predicts that "the relative roles of UV and visual predation pressure will vary systematically across a gradient of lake transparency." In less transparent waters, where fish are present and more food is available, fish tend to be the main driver of DVM. In more transparent bodies of water, where fish are less numerous and food quality improves in deeper waters, UV light can travel farther, thus functioning as the main driver of DVM in such cases.
Unusual events
Due to the particular types of stimuli and cues used to initiate vertical migration, anomalies can change the pattern drastically.
For example, the occurrence of midnight sun in the Arctic induces changes to planktonic life that would normally perform DVM with a 24-hour night and day cycle. In Arctic summer, the Earth's north pole is directed toward the sun, creating longer days and, at high latitudes, continuous daylight for more than 24 hours. Species of foraminifera found in the ocean have been observed to cease their DVM pattern and instead remain at the surface in favor of feeding on the phytoplankton, for example Neogloboquadrina pachyderma, while those species that contain symbionts, like Turborotalita quinqueloba, remain in sunlight to aid photosynthesis. Changes in sea-ice and surface chlorophyll concentration are found to be stronger determinants of the vertical habitat of Arctic N. pachyderma.
There is also evidence of changes to vertical migration patterns during solar eclipse events. In the moments that the sun is obscured during normal daylight hours, there is a sudden dramatic decrease in light intensity. The decreased light intensity replicates the typical lighting experienced at night that stimulates the planktonic organisms to migrate. During an eclipse, the distribution of some copepod species becomes concentrated near the surface; for example, Calanus finmarchicus displays a classic diurnal migration pattern, but on a much shorter time scale, during an eclipse.
Importance for the biological pump
The biological pump is the conversion of CO2 and inorganic nutrients by plant photosynthesis into particulate organic matter in the euphotic zone and its transference to the deeper ocean. This is a major process in the ocean, and without vertical migration it would not be nearly as efficient. The deep ocean gets most of its nutrients from the higher water column when they sink down in the form of marine snow. This is made up of dead or dying animals and microbes, fecal matter, sand and other inorganic material.
Organisms migrate up to feed at night so when they migrate back to depth during the day they defecate large sinking fecal pellets. Whilst some larger fecal pellets can sink quite fast, the speed that organisms move back to depth is still faster. At night organisms are in the top 100 metres of the water column, but during the day they move down to between 800 and 1000 meters. If organisms were to defecate at the surface it would take the fecal pellets days to reach the depth that they reach in a matter of hours. Therefore, by releasing fecal pellets at depth they have almost 1000 metres less to travel to get to the deep ocean. This is known as active transport. The organisms are playing a more active role in moving organic matter down to depths. Because a large majority of the deep sea, especially marine microbes, depends on nutrients falling down, the quicker they can reach the ocean floor the better.
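For a rough sense of scale (illustrative figures only; the assumed speeds are not taken from the sources cited here): with a pellet sinking speed of about 100 m per day and a migrator swimming at roughly 0.05 m/s,

% Time for passively sinking material versus an actively migrating animal to cover 1000 m.
t_{\text{pellet}} \approx \frac{1000\ \text{m}}{100\ \text{m/day}} = 10\ \text{days},
\qquad
t_{\text{migrator}} \approx \frac{1000\ \text{m}}{0.05\ \text{m/s}} = 2\times 10^{4}\ \text{s} \approx 5.6\ \text{h},

which is the "days versus hours" difference described above.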
Zooplankton and salps play a large role in the active transport of fecal pellets. 15–50% of zooplankton biomass is estimated to migrate, accounting for the transport of 5–45% of particulate organic nitrogen to depth. Salps are large gelatinous plankton that can vertically migrate 800 meters and eat large amounts of food at the surface. They have a very long gut retention time, so fecal pellets usually are released at maximum depth. Salps are also known for having some of the largest fecal pellets. Because of this they have a very fast sinking rate, and small detritus particles are known to aggregate on them, making them sink even faster. As previously mentioned, the lipid pump represents a substantial flux of POC (particulate organic carbon) to the deep ocean in the form of lipids produced by large overwintering copepods. Through overwintering, these lipids are transported to the deep in autumn and are metabolized at depths below the thermocline through winter before the copepods rise to the surface in the spring. The metabolism of these lipids reduces this POC at depth while producing CO2 as a waste product, ultimately serving as a potentially significant contributor to oceanic carbon sequestration. Although the flux of lipid carbon from the lipid pump has been reported to be comparable to the global POC flux from the biological pump, observational challenges arising from deficient nutrient cycling and capture techniques have made it difficult to incorporate it into the global carbon export flux. So while much research is still being done on why organisms vertically migrate, it is clear that vertical migration plays a large role in the active transport of dissolved organic matter to depth.
See also
Krill
Phytoplankton
Primary production
References
Aquatic ecology
Biological oceanography
Marine biology
Planktology
Animal migration | Diel vertical migration | Biology | 4,093 |
7,464,700 | https://en.wikipedia.org/wiki/Theorem%20of%20corresponding%20states | According to van der Waals, the theorem of corresponding states (or principle/law of corresponding states) indicates that all fluids, when compared at the same reduced temperature and reduced pressure, have approximately the same compressibility factor and all deviate from ideal gas behavior to about the same degree.
Material constants that vary for each type of material are eliminated, in a recast reduced form of a constitutive equation. The reduced variables are defined in terms of critical variables.
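In the standard notation (with the critical constants written T_c, p_c and V_c), the reduced variables are:

% Each state variable is scaled by its value at the critical point.
T_r = \frac{T}{T_c}, \qquad p_r = \frac{p}{p_c}, \qquad V_r = \frac{V}{V_c}.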
The principle originated with the work of Johannes Diderik van der Waals in about 1873 when he used the critical temperature and critical pressure to derive a universal property of all fluids that follow the van der Waals equation of state. It predicts a critical compressibility factor of Z_c = 3/8 = 0.375, a value that is found to be an overestimate when compared to real gases.
Edward A. Guggenheim used the phrase "Principle of Corresponding States" in an oft-cited paper to describe the phenomenon where different systems have very similar behaviors when near a critical point.
There are many examples of non-ideal gas models which satisfy this theorem, such as the van der Waals model, the Dieterici model, and so on, that can be found on the page on real gases.
Compressibility factor at the critical point
The compressibility factor at the critical point, which is defined as Z_c = p_c V_c / (R T_c), where the subscript c indicates physical quantities measured at the critical point and V_c is the molar volume at that point, is predicted to be a constant independent of substance by many equations of state.
The table below for a selection of gases uses the following conventions:
: critical temperature [K]
: critical pressure [Pa]
: critical specific volume [m3⋅kg−1]
: gas constant (8.314 J⋅K−1⋅mol−1)
: Molar mass [kg⋅mol−1]
See also
Van der Waals equation
Equation of state
Compressibility factors
Johannes Diderik van der Waals equation
Noro-Frenkel law of corresponding states
References
External links
Properties of Natural Gases. Includes a chart of compressibility factors versus reduced pressure and reduced temperature (on last page of the PDF document)
Theorem of corresponding states on SklogWiki.
Laws of thermodynamics
Engineering thermodynamics
Continuum mechanics
Johannes Diderik van der Waals | Theorem of corresponding states | Physics,Chemistry,Engineering | 455 |
7,066,527 | https://en.wikipedia.org/wiki/Height%20Modernization | Height Modernization is the name of a series of state-by-state programs recently begun by the United States' National Geodetic Survey, a division of the National Oceanic and Atmospheric Administration. The goal of each state program is to place GPS base stations at various locations within each participating state to measure topographic changes in the directions of latitude and longitude caused by subsidence or earthquakes, as well as to measure changes in height (elevation).
References
Arizona Height Modernization – Arizona Geographic Information Council
Texas Height Modernization – Texas A&M University – Corpus Christi
Geodesy | Height Modernization | Mathematics | 114 |
15,875,167 | https://en.wikipedia.org/wiki/Pickering%20emulsion | A Ramsden emulsion, sometimes named Pickering emulsion, is an emulsion that is stabilized by solid particles (for example colloidal silica) which adsorb onto the interface between the water and oil phases. Typically, the emulsions are either water-in-oil or oil-in-water emulsions, but other more complex systems such as water-in-water, oil-in-oil, water-in-oil-in-water, and oil-in-water-in-oil also do exist. Pickering emulsions were named after S.U. Pickering, who described the phenomenon in 1907, although the effect was first recognized by Walter Ramsden in 1903.
Overview
If oil and water are mixed and small oil droplets are formed and dispersed throughout the water (oil-in-water emulsion), eventually the droplets will coalesce to decrease the amount of energy in the system. However, if solid particles are added to the mixture, they will bind to the surface of the interface and prevent the droplets from coalescing, making the emulsion more stable.
Particle properties such as hydrophobicity, shape, and size, as well as the electrolyte concentration of the continuous phase and the volume ratio of the two phases can have an effect on the stability of the emulsion. The particle's contact angle to the surface of the droplet is a characteristic of the hydrophobicity of the particle. If the contact angle of the particle to the interface is low, the particle will be mostly wetted by the droplet and therefore will not be likely to prevent coalescence of the droplets. Particles that are partially hydrophobic are better stabilizers because they are partially wettable by both liquids and therefore bind better to the surface of the droplets. The optimal contact angle for a stable emulsion is achieved when the particle is equally wetted by the two phases (i.e. 90° contact angle). The stabilization energy is given by
E = π r² γ (1 ± cos θ)²,
where r is the particle radius, γ is the interfacial tension, and θ is the contact angle of the particle with the interface.
When the contact angle is approximately 90°, the energy required to stabilize the system is at its minimum.
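To indicate the magnitudes involved (an illustrative estimate with assumed values r = 10 nm, γ = 50 mN/m and T = 298 K, not figures from the references): at θ = 90° the prefactor πr²γ alone is already thousands of times the thermal energy k_BT, which is why adsorbed particles are typically held almost irreversibly at the interface:

% pi r^2 gamma compared with k_B T at room temperature, for the assumed values above.
\pi r^{2}\gamma = \pi\,(10^{-8}\ \text{m})^{2}(0.05\ \text{N/m}) \approx 1.6\times 10^{-17}\ \text{J},
\qquad
\frac{\pi r^{2}\gamma}{k_{B}T} \approx \frac{1.6\times 10^{-17}}{4.1\times 10^{-21}} \approx 4\times 10^{3}.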
Generally, the phase that preferentially wets the particle will be the continuous phase in the emulsion system. The most common type of Ramsden emulsions are oil-in-water emulsions due to the hydrophilicity of most organic particles.
One example of a Ramsden-stabilized emulsion is homogenized milk. The milk protein (casein) units are adsorbed at the surface of the milk fat globules and act as surfactants. The casein replaces the milkfat globule membrane, which is damaged during homogenization. Other examples of emulsions where Ramsden particles may be the stabilizing species are for example detergents, low-fat chocolates, mayonnaises and margarines.
Ramsden emulsions have gained increased attention and research interest during the last 20 years as the use of traditional surfactants has been questioned due to environmental, health and cost issues. Synthetic nanoparticles with well-defined sizes and compositions have been the primary Ramsden emulsion stabilizers of interest until recently, when natural organic particles have also gained increased attention. They are believed to have advantages such as cost-efficiency and degradability, and are derived from renewable resources.
Pickering emulsions find applications for enhanced oil recovery or water remediation. Certain Pickering emulsions remain stable even under gastric conditions and show an extraordinary resistance against gastric lipolysis, facilitating their use for controlled lipid digestion and satiation or oral delivery systems.
Additionally, it has been demonstrated that the stability of the Ramsden emulsions can be improved by the use of amphiphilic "Janus particles", namely particles that have one hydrophobic and one hydrophilic side, due to the higher adsorption energy of the particles at the liquid-liquid interface. This is evident when observing emulsion stabilization using polyelectrolytes.
It is also possible to use latex particles for Ramsden stabilization and then fuse these particles to form a permeable shell or capsule, called a colloidosome. Moreover, Ramsden emulsion droplets are also suitable templates for micro-encapsulation and the formation of closed, non-permeable capsules. This form of encapsulation can also be applied to water-in-water emulsions (dispersions of phase-separated aqueous polymer solutions), and can also be reversible.
Pickering-stabilized microbubbles may have applications as ultrasound contrast agents.
See also
Liquid marbles
References
Chemical mixtures
Condensed matter physics
Soft matter
Drug delivery devices
Dosage forms | Pickering emulsion | Physics,Chemistry,Materials_science,Engineering | 991 |
70,679,614 | https://en.wikipedia.org/wiki/QOI%20%28image%20format%29 | The Quite OK Image Format (QOI) is a specification for lossless image compression of 24-bit (8 bits per color RGB) or 32-bit (8 bits per color with 8-bit alpha channel RGBA) color raster (bitmapped) images, invented by Dominic Szablewski and first announced on 24 November 2021.
Description
The intended purpose was to create an open source lossless compression method that was faster and easier to implement than PNG. Figures specified in the blog post announcing the format claim 20-50 times faster encoding, and 3-4 times faster decoding speed compared to PNG, with similar file sizes. The author has donated the specification to the public domain (CC0).
Software and language support
QOI is supported natively by ImageMagick, Imagine (v1.3.9+), IrfanView (v4.60+), FFmpeg (v5.1+), and GraphicConverter (v11.8+). Microsoft PowerToys (v0.76+) for Windows 10 and 11 adds support for previewing QOI images to the Windows File Explorer. Community made plugins are available in GIMP, Paint.NET and XnView MP.
The game engine GameMaker designates bzip2 + QOI as the default format of texture groups since version 2022.1.0.609, to achieve better compression while remaining quick to decompress; the standalone QOI and PNG formats remain optional for even faster performance and better compatibility, respectively.
There are also implementations for various languages such as Rust, Python, Java, C++, C# and more. A full list can be found in the README of the project's GitHub repository.
File format
Header
A QOI file consists of a 14-byte header, followed by any number of data “chunks” and an 8-byte end marker.
qoi_header {
char magic[4]; // magic bytes "qoif"
uint32_t width; // image width in pixels (BE)
uint32_t height; // image height in pixels (BE)
uint8_t channels; // 3 = RGB, 4 = RGBA
uint8_t colorspace; // 0 = sRGB with linear alpha
// 1 = all channels linear
};
The colorspace and channel fields are purely informative. They do not change the way data chunks are encoded.
Encoding
Images are encoded row by row, left to right, top to bottom. The decoder and encoder start with {r: 0, g: 0, b: 0, a: 255} as the previous pixel value. An image is complete when all pixels specified by width × height have been covered. Pixels are encoded as:
Run-length encoding of the previous pixel (QOI_OP_RUN)
an index into the array of previously seen pixels (QOI_OP_INDEX)
a difference compared to the previous pixel value in r,g,b (QOI_OP_DIFF or QOI_OP_LUMA)
Full r,g,b or r,g,b,a values (QOI_OP_RGB or QOI_OP_RGBA)
The color channels are assumed to not be premultiplied with the alpha channel (“un-premultiplied alpha”). A running array of 64 previously seen pixel values (zero-initialized) is maintained by the encoder and decoder. Each pixel that is seen by the encoder and decoder is put into this array at the position formed by a hash function of the color value.
In the encoder, if the pixel value at the index matches the current pixel, this index position is written to the stream as QOI_OP_INDEX. The hash function for the index is:
index_position = (r * 3 + g * 5 + b * 7 + a * 11) % 64
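A minimal sketch of how an encoder might use this index (the helper names here are hypothetical, and details differ from the reference implementation):

#include <stdint.h>
#include <string.h>

typedef struct { uint8_t r, g, b, a; } qoi_px;

/* Hash position in the 64-entry running index, as defined by the specification. */
static int qoi_index_pos(qoi_px p) {
    return (p.r * 3 + p.g * 5 + p.b * 7 + p.a * 11) % 64;
}

/* If the current pixel already sits at its hash position, emit a QOI_OP_INDEX byte
   (tag 0b00 in the top two bits, index in the low six bits) and return 1;
   otherwise store the pixel in the index and return 0 so the caller can try
   the other chunk types. */
static int try_emit_index(qoi_px index[64], qoi_px cur, uint8_t *out) {
    int pos = qoi_index_pos(cur);
    if (memcmp(&index[pos], &cur, sizeof cur) == 0) {
        *out = (uint8_t)(0x00 | pos);   /* QOI_OP_INDEX */
        return 1;
    }
    index[pos] = cur;
    return 0;
}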
Each chunk starts with a 2- or 8-bit tag, followed by a number of data bits. The bit length of chunks is divisible by 8 - i.e. all chunks are byte aligned. All values encoded in these data bits have the most significant bit on the left. The 8-bit tags have precedence over the 2-bit tags. A decoder must check for the presence of an 8-bit tag first. The byte stream's end is marked with seven 0x00 bytes followed by a single 0x01 byte.
The possible chunks are:
QOI_OP_RGB: 8-bit tag 0b11111110 (254)
8-bit red channel value
8-bit green channel value
8-bit blue channel value
The alpha value remains unchanged from the previous pixel.
QOI_OP_RGBA: 8-bit tag 0b11111111 (255)
8-bit red channel value
8-bit green channel value
8-bit blue channel value
8-bit alpha channel value
QOI_OP_INDEX: 2-bit tag 0b00
6-bit index into the color index array:
A valid encoder must not issue 2 or more consecutive QOI_OP_INDEX chunks to the same index. QOI_OP_RUN should be used instead.
QOI_OP_DIFF: 2-bit tag 0b01
2-bit red channel difference from the previous pixel
2-bit green channel difference from the previous pixel
2-bit blue channel difference from the previous pixel
The differences to the current channel values use a wraparound operation, so 1 - 2 will result in 255, while 255 + 1 will result in 0.
Values are stored as unsigned integers with a bias of 2. E.g. −2 is stored as 0 (0b00), and 1 is stored as 3 (0b11). The alpha value remains unchanged from the previous pixel.
QOI_OP_LUMA: 2-bit tag 0b10
6-bit green channel difference from the previous pixel
4-bit red channel difference minus green channel difference
4-bit blue channel difference minus green channel difference
The green channel is used to indicate the general direction of change and is encoded in 6 bits. The red and blue channels (dr and db) base their diffs off of the green channel difference. I.e.:
dr_dg = (cur_px.r - prev_px.r) - (cur_px.g - prev_px.g)
db_dg = (cur_px.b - prev_px.b) - (cur_px.g - prev_px.g)
The differences to the current channel values use a wraparound operation, so 10 - 13 will result in 253, while 250 + 7 will result in 1.
Values are stored as unsigned integers with a bias of 32 for the green channel and a bias of 8 for the red and blue channel. The alpha value remains unchanged from the previous pixel.
QOI_OP_RUN: 2-bit tag 0b11
6-bit run-length repeating the previous pixel
The run-length is stored with a bias of −1. Note that the run-lengths 63 and 64 (0b111110 and 0b111111) are illegal as they are occupied by the QOI_OP_RGB and QOI_OP_RGBA tags.
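A hedged sketch of the tag dispatch a decoder might perform on a single chunk (the structure and names here are illustrative; a complete decoder also handles QOI_OP_INDEX and QOI_OP_LUMA, maintains the running index, and checks the end marker):

#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } qoi_px;

/* Decode one chunk starting at *p, updating the previous pixel px.
   Returns the number of pixels produced (the run length for QOI_OP_RUN, otherwise 1). */
static int qoi_decode_chunk(const uint8_t **p, qoi_px *px) {
    uint8_t b1 = *(*p)++;
    if (b1 == 0xFE) {                  /* QOI_OP_RGB: three payload bytes follow */
        px->r = *(*p)++; px->g = *(*p)++; px->b = *(*p)++;
    } else if (b1 == 0xFF) {           /* QOI_OP_RGBA: four payload bytes follow */
        px->r = *(*p)++; px->g = *(*p)++; px->b = *(*p)++; px->a = *(*p)++;
    } else if ((b1 & 0xC0) == 0x40) {  /* QOI_OP_DIFF: 2-bit diffs with a bias of 2 */
        px->r += ((b1 >> 4) & 0x03) - 2;
        px->g += ((b1 >> 2) & 0x03) - 2;
        px->b += ( b1       & 0x03) - 2;
    } else if ((b1 & 0xC0) == 0xC0) {  /* QOI_OP_RUN: repeat the previous pixel, bias -1 */
        return (b1 & 0x3F) + 1;
    }
    /* QOI_OP_INDEX (tag 0b00) and QOI_OP_LUMA (tag 0b10) are omitted in this sketch. */
    return 1;
}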
References
External links
Format website: C Source code and benchmark results
1 page PDF specification
GitHub repository (including C implementation)
How PNG Works: Compromising Speed for Quality - YouTube A video comparing compression techniques in PNG and QOI with animations and examples.
Computer-related introductions in 2021
Graphics standards
Image compression
Open formats
Raster graphics file formats | QOI (image format) | Technology | 1,437 |
71,295,111 | https://en.wikipedia.org/wiki/Arthonia%20stereocaulina | Arthonia stereocaulina is a species of lichenicolous fungus in the family Arthoniaceae.
Distribution
Arthonia stereocaulina has been reported from Alaska, Canada, Iceland, Russia and Svalbard.
Host species
As the name suggests, Arthonia stereocaulina infects lichens of the genus Stereocaulon. Known host species are:
Stereocaulon alpinum
Stereocaulon arcticum
Stereocaulon botryosum
Stereocaulon capitellatum
Stereocaulon depressum
Stereocaulon glareosum
Stereocaulon groenlandicum
Stereocaulon intermedium
Stereocaulon myriocarpum
Stereocaulon paschale
Stereocaulon rivulorum
Stereocaulon saxatile
Stereocaulon tomentosum
References
Arthoniaceae
Fungi of Canada
Fungi of Iceland
Fungi of Russia
Fungi of Svalbard
Fungi of the United States
Fungi described in 1993
Taxa named by Rolf Santesson
Lichenicolous fungi
Fungus species | Arthonia stereocaulina | Biology | 220 |
19,132,648 | https://en.wikipedia.org/wiki/WISE%20Campaign | The WISE Campaign (Women into Science and Engineering) is a United Kingdom-based organization that encourages women and girls to value and pursue science, technology, engineering and maths-related courses in school or college and to move on into related careers and progress. Its mission statement aims to facilitate understanding of these disciplines among women and girls and the opportunities which they present at a professional level. It is operated by UKRC trading as WISE (company number 07533934).
Formation
The campaign began on 17 January 1984, headed by The Baroness Platt of Writtle, a qualified mechanical engineer, at which time women made up 7% of graduate engineers and 3% of professional engineers in the UK. It was a collaboration between the Engineering Council and the Equal Opportunities Commission, originally viewed as a one-year campaign "Women into Science and Engineering" WISE'84.
Activities
One of WISE's main objectives is to listen to students and women qualified or working in these sectors, and understand and voice their opinions to academic institutions, policy-makers and employers. It then works creatively with delivery agencies and others, offering models, tools and approaches to support them in challenging traditional approaches, so as to demonstrate equitable involvement. WISE combats gender stereotypes to get more girls and women involved in careers where female participation was once considered near impossible.
WISE operates throughout the UK, with specialist committees in Wales, Northern Ireland and Scotland. Volunteers, from industry and relevant organisations, attend the various WISE committee meetings, and undertake projects with WISE.
In 2011 the UKRC - an organisation specialising in gender equality in science, engineering and technology - became part of WISE. Trudy Norris-Grey, the Chair of UKRC since 2007 then became Chair of WISE. WISE counts The Princess Royal, Dame Julia Higgins, Kate Bellingham and Joanna Kennedy as its patrons. The Founding Chair and Patron The Baroness Platt of Writtle died on 1 February 2015, aged 91.
Young Professionals' Board
The WISE Campaign has an advisory Board to the main Board called the WISE Young Professionals' Board, formerly the WISE Young Women's Board, with a mandate to act as a sounding board to the WISE Campaign and promote the visibility of young women in STEM.
Notable members and former members include:
Jess Wade was a Young Professionals' Board member 2015-2018 and is a campaigner for Women in STEM and promoting early career researchers.
Structure
It is headquartered at Leeds College of Building, though it has previously been based at the UKRC (UK Resource Centre for Women in Science, Engineering, and Technology) in Bradford.
References
External links
of WISE Campaign
UK Parliament Business, Innovation and Skills Committee: Written evidence submitted by the Women into Science and Engineering (WISE) Campaign 11 Oct 2012
Ingenia March 2010, issue 42 pp 48–50 "Engaging girls in engineering"
1984 establishments in the United Kingdom
Educational organisations based in the United Kingdom
Organisations based in Bradford
Organisations based in Leeds
Organizations established in 1984
Organizations for women in science and technology
Science and technology in West Yorkshire
Women in the United Kingdom
Women in engineering
Women scientists | WISE Campaign | Technology | 609 |
10,220,473 | https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Zilber%20theorem | In mathematics, specifically in algebraic topology, the Eilenberg–Zilber theorem is an important result in establishing the link between the homology groups of a product space and those of the spaces and . The theorem first appeared in a 1953 paper in the American Journal of Mathematics by Samuel Eilenberg and Joseph A. Zilber. One possible route to a proof is the acyclic model theorem.
Statement of the theorem
The theorem can be formulated as follows. Suppose X and Y are topological spaces. Then we have the three chain complexes C_*(X), C_*(Y), and C_*(X × Y). (The argument applies equally to the simplicial or singular chain complexes.) We also have the tensor product complex C_*(X) ⊗ C_*(Y), whose differential is, by definition,
∂(σ ⊗ τ) = ∂_X σ ⊗ τ + (−1)^p σ ⊗ ∂_Y τ for σ in C_p(X) and τ in C_q(Y), where ∂_X and ∂_Y are the differentials on C_*(X) and C_*(Y).
Then the theorem says that we have chain maps
such that is the identity and is chain-homotopic to the identity. Moreover, the maps are natural in and . Consequently the two complexes must have the same homology:
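Returning to the differential defined above, a short consistency check (not spelled out in the statement) shows that the Koszul sign is exactly what makes the tensor-product differential square to zero: for $\sigma \in C_p(X)$ and $\tau \in C_q(Y)$,
$$\partial\big(\partial_X \sigma \otimes \tau + (-1)^p\, \sigma \otimes \partial_Y \tau\big) = \partial_X^2 \sigma \otimes \tau + (-1)^{p-1}\, \partial_X \sigma \otimes \partial_Y \tau + (-1)^p\, \partial_X \sigma \otimes \partial_Y \tau + \sigma \otimes \partial_Y^2 \tau = 0,$$
since $\partial_X^2 = \partial_Y^2 = 0$ and the two cross terms cancel.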
Statement in terms of composite maps
The original theorem was proven in terms of acyclic models, but more mileage was gotten in a phrasing by Eilenberg and Mac Lane using explicit maps. The standard map $F$ they produce is traditionally referred to as the Alexander–Whitney map, and $G$ as the Eilenberg–Zilber map. The maps are natural in both $X$ and $Y$ and inverse up to homotopy: one has
$$FG = \mathrm{id}_{C_*(X) \otimes C_*(Y)}, \qquad GF - \mathrm{id}_{C_*(X \times Y)} = \partial H + H \partial$$
for a homotopy $H$ natural in both $X$ and $Y$ such that, further, each of $HH$, $FH$, and $HG$ is zero. This is what would come to be known as a contraction or a homotopy retract datum.
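For singular chains these maps admit standard explicit descriptions (a sketch, not part of the statement above; sign and indexing conventions vary slightly between sources). For a singular $n$-simplex $\sigma\colon \Delta^n \to X \times Y$ with components $\sigma_X$ and $\sigma_Y$, the Alexander–Whitney map takes front faces in $X$ and back faces in $Y$:
$$F(\sigma) = \sum_{p+q=n} \sigma_X|_{[v_0, \dots, v_p]} \otimes \sigma_Y|_{[v_p, \dots, v_n]}.$$
The Eilenberg–Zilber map sends $\alpha \otimes \beta$, for simplices $\alpha\colon \Delta^p \to X$ and $\beta\colon \Delta^q \to Y$, to a signed sum over the top-dimensional simplices of the standard triangulation of the prism $\Delta^p \times \Delta^q$, which are indexed by $(p,q)$-shuffles $(\mu, \nu)$:
$$G(\alpha \otimes \beta) = \sum_{(\mu, \nu)} \operatorname{sgn}(\mu, \nu)\, (\alpha \times \beta) \circ D_{(\mu, \nu)},$$
where $D_{(\mu, \nu)}\colon \Delta^{p+q} \to \Delta^p \times \Delta^q$ denotes the affine simplex determined by the shuffle and $\operatorname{sgn}(\mu, \nu)$ is its signature.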
The coproduct
The diagonal map $\Delta\colon X \to X \times X$ induces a map of chain complexes $C_*(X) \to C_*(X \times X)$ which, followed by the Alexander–Whitney map $F$, yields a coproduct $C_*(X) \to C_*(X) \otimes C_*(X)$ inducing the standard coproduct on $H_*(X)$. With respect to these coproducts on
$C_*(X)$ and $C_*(Y)$, the map
$$G\colon C_*(X) \otimes C_*(Y) \to C_*(X \times Y),$$
also called the Eilenberg–Zilber map, becomes a map of differential graded coalgebras. The composite coproduct $C_*(X) \to C_*(X) \otimes C_*(X)$ itself is not a map of coalgebras.
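Concretely, on a singular $n$-simplex $\sigma\colon \Delta^n \to X$ the resulting coproduct is the front-face/back-face (Alexander–Whitney) diagonal, written out here with the same restriction conventions as above:
$$\sigma \mapsto \sum_{p+q=n} \sigma|_{[v_0, \dots, v_p]} \otimes \sigma|_{[v_p, \dots, v_n]}.$$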
Statement in cohomology
The Alexander–Whitney and Eilenberg–Zilber maps dualize (over any choice of commutative coefficient ring $R$ with unity) to a pair of maps
$$F^*\colon \big(C_*(X) \otimes C_*(Y)\big)^* \to C^*(X \times Y), \qquad G^*\colon C^*(X \times Y) \to \big(C_*(X) \otimes C_*(Y)\big)^*,$$
which are also homotopy equivalences, as witnessed by the duals of the preceding equations, using the dual homotopy $H^*$. The coproduct does not dualize straightforwardly, because dualization does not distribute over tensor products of infinitely-generated modules, but there is a natural injection of differential graded algebras
$$i\colon C^*(X) \otimes C^*(Y) \to \big(C_*(X) \otimes C_*(Y)\big)^*$$
given by $i(\alpha \otimes \beta)(\sigma \otimes \tau) = \alpha(\sigma)\,\beta(\tau)$, the product being taken in the coefficient ring $R$. This induces an isomorphism in cohomology, so one does have the zig-zag of differential graded algebra maps
$$C^*(X) \otimes C^*(Y) \xrightarrow{\ i\ } \big(C_*(X) \otimes C_*(Y)\big)^* \xleftarrow{\ G^*\ } C^*(X \times Y)$$
inducing a product $H^*(X) \otimes H^*(Y) \to H^*(X \times Y)$ in cohomology, known as the cup product, because $H^*(i)$ and $H^*(G^*)$ are isomorphisms. Replacing the middle term $\big(C_*(X) \otimes C_*(Y)\big)^*$ with $C^*(X \times Y)$, so the maps all go the same way, one gets a direct map of cochain complexes $F^* \circ i\colon C^*(X) \otimes C^*(Y) \to C^*(X \times Y)$; taking $Y = X$ and pulling back along the diagonal yields the standard cup product on cochains, given explicitly by
$$(\alpha \smile \beta)(\sigma) = (\alpha \otimes \beta)\big(F(\Delta_* \sigma)\big) = \sum_{r+s=n} \alpha\big(\sigma|_{[v_0, \dots, v_r]}\big)\, \beta\big(\sigma|_{[v_r, \dots, v_n]}\big),$$
which, since cochain evaluation vanishes unless the degree of the cochain equals the degree of the chain it is evaluated on, reduces to the more familiar expression
$$(\alpha^p \smile \beta^q)(\sigma) = \alpha\big(\sigma|_{[v_0, \dots, v_p]}\big)\, \beta\big(\sigma|_{[v_p, \dots, v_{p+q}]}\big).$$
Note that if this direct map of cochain complexes were in fact a map of differential graded algebras, then the cup product would make $C^*(X)$ a commutative graded algebra, which it is not. This failure of the Alexander–Whitney map to be a coalgebra map is an example of the unavailability of commutative cochain-level models for cohomology over fields of nonzero characteristic, and thus is in a way responsible for much of the subtlety and complication in stable homotopy theory.
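A concrete low-degree illustration of this non-commutativity at the cochain level (using the restriction conventions above): for 1-cochains $\alpha, \beta \in C^1(X)$ and a singular 2-simplex $\sigma\colon \Delta^2 \to X$ with vertices $v_0, v_1, v_2$,
$$(\alpha \smile \beta)(\sigma) = \alpha\big(\sigma|_{[v_0, v_1]}\big)\, \beta\big(\sigma|_{[v_1, v_2]}\big), \qquad (\beta \smile \alpha)(\sigma) = \beta\big(\sigma|_{[v_0, v_1]}\big)\, \alpha\big(\sigma|_{[v_1, v_2]}\big),$$
and there is no reason for these values to agree, or to differ only by the sign $(-1)^{1\cdot 1}$, cochain by cochain; graded commutativity of the cup product holds only after passing to cohomology.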
Generalizations
An important generalisation to the non-abelian case, using crossed complexes, is given in a paper by Andrew Tonks. This gives full details of a result on the (simplicial) classifying space of a crossed complex that was stated, but not proved, in a paper by Ronald Brown and Philip J. Higgins on classifying spaces.
Consequences
The Eilenberg–Zilber theorem is a key ingredient in establishing the Künneth theorem, which expresses the homology groups $H_*(X \times Y)$ in terms of $H_*(X)$ and $H_*(Y)$. In light of the Eilenberg–Zilber theorem, the content of the Künneth theorem consists in analysing how the homology of the tensor product complex $C_*(X) \otimes C_*(Y)$ relates to the homologies of the factors.
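For example, when the coefficients are taken in a principal ideal domain $R$ (an assumption beyond what is stated above), this analysis yields the natural short exact sequence
$$0 \to \bigoplus_{i+j=n} H_i(X; R) \otimes_R H_j(Y; R) \to H_n(X \times Y; R) \to \bigoplus_{i+j=n-1} \operatorname{Tor}_1^R\big(H_i(X; R), H_j(Y; R)\big) \to 0,$$
which splits, though not naturally.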
See also
Acyclic model
References
Homological algebra
Theorems in algebraic topology | Eilenberg–Zilber theorem | Mathematics | 897 |
55,782,403 | https://en.wikipedia.org/wiki/Hydnum%20vesterholtii | Hydnum vesterholtii is a species of fungus in the family Hydnaceae, native to southern and central Europe and southwestern China.
References
Fungi described in 2012
Fungi of Europe
vesterholtii
Fungus species | Hydnum vesterholtii | Biology | 49 |