id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
9,551,090 | https://en.wikipedia.org/wiki/Heptaminol | Heptaminol is an amino alcohol which is classified as a cardiac stimulant (positive inotropic action). It also increases coronary blood flow along with mild peripheral vasoconstriction. It is sometimes used in the treatment of low blood pressure, particularly orthostatic hypotension, because of its positive inotropic action (improving cardiac contraction).
Use in doping
Heptaminol is classified by the World Anti-Doping Agency as a doping substance. In 2008, the cyclist Dmitriy Fofonov tested positive for heptaminol at the Tour de France. In June 2010, the swimmer Frédérick Bousquet tested positive. In 2013, the cyclist Sylvain Georges tested positive at the Giro d'Italia. In 2014, the baseball player Joel Piniero tested positive, as did St. Louis Cardinals minor league player Yeison Medina.
On March 22, 2019, Cycling South Africa reported that Ricardo Broxham had been sanctioned for an anti-doping rule violation of Articles 2.1 and 2.2 of the UCI Anti-Doping Rules, after an in-competition test conducted on 18 August 2018 confirmed the presence of heptaminol in his sample.
The UCI Anti-Doping Tribunal imposed a period of ineligibility of 12 months for the violation, applicable as of 22 September 2018 up to and including 22 September 2019, as well as disqualification of all results from the 2018 UCI Junior Track Cycling World Championships.
See also
1,3-Dimethylbutylamine
Iproheptine
Isometheptene
Methylhexanamine
Octodrine
Oenethyl
Tuaminoheptane
References
Vasodilators
Amines
Tertiary alcohols | Heptaminol | [
"Chemistry"
] | 353 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
9,551,741 | https://en.wikipedia.org/wiki/Auto%20insurance%20risk%20selection | Auto insurance risk selection is the process by which vehicle insurers determine whether or not to insure an individual and what insurance premium to charge. Depending on the jurisdiction, the insurance premium can be either mandated by the government or determined by the insurance company in accordance with a framework of regulations set by the government. Often, the insurer will have more freedom to set the price on physical damage coverages than on mandatory liability coverages.
When the premium is not mandated by the government, it is usually derived from the calculations of an actuary based on statistical data. The premium can vary depending on many factors that are believed to affect the expected cost of future claims. Those factors can include the car characteristics, the coverage selected (deductible, limit, covered perils), the profile of the driver (age, gender, driving history) and the usage of the car (commute to work or not, predicted annual distance driven).
History
Conventional methods for determining costs of motor vehicle insurance involve gathering relevant historical data from a personal interview with, or a written application completed by, the applicant for the insurance and by referencing the applicant's public motor vehicle driving record that is maintained by a governmental agency, such as a Bureau of Motor Vehicles. Such data result in the classification of the applicant into a broad actuarial class for which insurance rates are assigned based upon the empirical experience of the insurer. Many factors are deemed relevant to such classification in a particular actuarial class or risk level, such as age, sex, marital status, location of residence and driving record.
The current system of insurance creates groupings of vehicles and drivers (actuarial classes) based on the following types of classifications.
Vehicle: Age; manufacturer, model; and value.
Driver: Age; sex; marital status; driving record (based on government reports); violations (citations); at-fault accidents; and place of residence.
Coverage: Types of losses covered, liability, uninsured or underinsured motorist, comprehensive, and collision; liability limits; and deductibles.
The classifications, such as age, are further broken into actuarial classes, such as 21- to 24-year-olds, to develop a unique vehicle insurance cost based on the specific combination of attributes for a particular risk. For example, the following information would produce a unique vehicle insurance cost:
Vehicle: Age – 7 years old; manufacturer, model – Ford, Explorer XLT; value $18,000
Driver: Age – 38 years old; gender – male; marital status – single; driving record (based on government reports): violations – 1 point (speeding); at-fault accidents – 3 points (one at-fault accident); place of residence – 33619 (ZIP code)
Coverage: Types of losses covered: liability – yes; uninsured or underinsured motorist – no; comprehensive – yes; collision – yes; liability limits – $100,000/$300,000/$50,000; deductibles – $500/$500.
A change to any of this information might result in a different premium being charged if the change resulted in a different actuarial class or risk level for that variable. For instance, a change in the driver's age from 38 to 39 may not result in a different actuarial class because 38- and 39-year-old drivers may be in the same actuarial class. However, a change in driver age from 38 to 45 may result in a different premium because the records of the insurer indicate a difference in risk associated with those ages and, therefore, the age difference results in a change in actuarial class or assigned risk level.
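To make the combination of attributes concrete, here is a minimal Python sketch of class-based rating; the base rate, class boundaries, and relativities below are hypothetical illustrations, not values from any actual rating manual.

```python
# Minimal sketch of class-based rating. All rates and relativities are
# hypothetical; real insurers derive them actuarially from loss data.

BASE_RATE = 500.0  # hypothetical annual base premium, in dollars

# Driver-age actuarial classes as (low, high, rate relativity).
DRIVER_AGE_CLASSES = [
    (21, 24, 1.80),  # 21- to 24-year-olds: higher expected claim cost
    (25, 39, 1.00),  # 25- to 39-year-olds: note 38 and 39 share this class
    (40, 64, 0.90),
]

def age_relativity(age: int) -> float:
    """Return the rate relativity of the actuarial class containing `age`."""
    for low, high, relativity in DRIVER_AGE_CLASSES:
        if low <= age <= high:
            return relativity
    return 1.25  # default class for ages outside the table

def premium(age: int, at_fault_points: int, deductible: float) -> float:
    """Combine class relativities multiplicatively into a unique premium."""
    relativity = age_relativity(age)
    relativity *= 1.0 + 0.15 * at_fault_points        # surcharge per point
    relativity *= 0.95 if deductible >= 500 else 1.0  # higher-deductible credit
    return BASE_RATE * relativity

print(premium(38, 1, 500))  # ~546.25
print(premium(39, 1, 500))  # ~546.25 -- same actuarial class as age 38
print(premium(45, 1, 500))  # ~491.62 -- crossing a class boundary changes it
```

Moving from age 38 to 39 leaves the premium unchanged because both ages fall in the same actuarial class, while moving to 45 crosses a class boundary, mirroring the example above.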
Current insurance rating systems also provide discounts and surcharges for some types of use of the vehicle, equipment on the vehicle and type of driver. Common surcharges and discounts include:
Surcharges: Business use.
Discounts: Safety equipment on the vehicle – airbags and antilock brakes; theft-control devices – passive systems (e.g. The Club) and alarm systems; driver type – good student and safe driver (accident-free); and group – senior drivers and fleet drivers.
Usage-based insurance
Conventional rating systems are primarily based on past realized losses and the past record of other drivers with similar characteristics. More recently, electronic systems have been introduced whereby the actual driving performance of a given driver is monitored and communicated directly to the insurance company. The insurance company then assigns the driver to a risk class based on the monitored driving behavior. An individual, therefore, can be put into different risk classes from month to month depending upon how they drive. For example, a driver who drives long distance at high speed in one month might be placed into a high risk class for that month and pay a large premium. The same driver who drives for short distances at low speed the next month might be placed into a lower risk class and charged a lower premium.
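A minimal sketch of this monthly reclassification logic, in Python, might look like the following; the thresholds, class names, and premiums are hypothetical, since actual insurers' scoring models are proprietary.

```python
# Hypothetical usage-based rating: each month of monitored driving data
# is mapped to a risk class, and the premium follows the class.

from dataclasses import dataclass

@dataclass
class MonthlyTelemetry:
    distance_km: float     # total distance driven during the month
    mean_speed_kmh: float  # average monitored speed

def risk_class(t: MonthlyTelemetry) -> str:
    """Assign a risk class from one month of monitored driving (toy rules)."""
    if t.distance_km > 2000 and t.mean_speed_kmh > 90:
        return "high"      # long distances at high speed
    if t.distance_km < 500 and t.mean_speed_kmh < 50:
        return "low"       # short distances at low speed
    return "standard"

MONTHLY_PREMIUM = {"low": 40.0, "standard": 60.0, "high": 95.0}  # hypothetical

months = {
    "January": MonthlyTelemetry(distance_km=2400, mean_speed_kmh=105),
    "February": MonthlyTelemetry(distance_km=300, mean_speed_kmh=35),
}

for name, telemetry in months.items():
    cls = risk_class(telemetry)
    print(name, cls, MONTHLY_PREMIUM[cls])
# January  -> high, 95.0: the long-distance, high-speed month
# February -> low,  40.0: the same driver, reclassified the next month
```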
References
Actuarial science
Vehicle insurance | Auto insurance risk selection | [
"Mathematics"
] | 1,017 | [
"Applied mathematics",
"Actuarial science"
] |
9,552,096 | https://en.wikipedia.org/wiki/Support%20polygon | For a rigid object in contact with a fixed environment and acted upon by gravity in the vertical direction, its support polygon is a horizontal region over which the center of mass must lie to achieve static stability. For example, for an object resting on a horizontal surface (e.g. a table), the support polygon is the convex hull of its "footprint" on the table.
The support polygon succinctly represents the conditions necessary for an object to be at equilibrium under gravity. That is, if the object's center of mass lies over the support polygon, then there exists a set of forces over the region of contact that exactly counteracts the force of gravity. Note that this is a necessary condition for stability, but not a sufficient one.
Derivation
Let the object be in contact at a finite number of points $p_1, \ldots, p_N$. At each point $p_k$, let $FC_k$ be the set of forces that can be applied on the object at that point. Here, $FC_k$ is known as the friction cone, and for the Coulomb model of friction, it is actually a cone with apex at the origin, extending to infinity in the normal direction of the contact.
Let $f_1, \ldots, f_N$ be the (unspecified) forces at the contact points. To balance the object in static equilibrium, the following Newton-Euler equations must be met on $f_1, \ldots, f_N$:
$$\sum_k f_k + G = 0$$
$$\sum_k p_k \times f_k + c \times G = 0$$
$$f_k \in FC_k \quad \text{for all } k$$
where $G$ is the force of gravity on the object, and $c$ is its center of mass. The first two equations are the Newton-Euler equations, and the third requires all forces to be valid. If there is no set of forces $f_1, \ldots, f_N$ that meet all these conditions, the object will not be in equilibrium.
The second equation has no dependence on the vertical component of the center of mass, and thus if a solution exists for one value of $c$, the same solution works when $c$ is moved vertically. Therefore, the set of all $c$ that have solutions to the above conditions is a set that extends infinitely in the up and down directions. The support polygon is simply the projection of this set on the horizontal plane.
These results can easily be extended to different friction models and an infinite number of contact points (i.e. a region of contact).
Properties
Even though the word "polygon" is used to describe this region, in general it can be any convex shape with curved edges. The support polygon is invariant under translations and rotations about the gravity vector (that is, if the contact points and friction cones are translated and rotated about the gravity vector, the support polygon is simply translated and rotated in the same way).
If the friction cones are convex cones (as they typically are), the support polygon is always a convex region. It is also invariant to the mass of the object (provided it is nonzero).
If all contacts lie on a (not necessarily horizontal) plane, and the friction cones at all contacts contain the negative gravity vector $-G$, then the support polygon is the convex hull of the contact points projected onto the horizontal plane, as the sketch below illustrates.
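As a concrete illustration of this special case, the following Python sketch computes the support polygon as the convex hull of projected contact points and tests the (necessary, not sufficient) stability condition; the contact coordinates are made up for the example, and it assumes numpy and scipy are available.

```python
# Support polygon for the special case above: the friction cones contain -G,
# so the polygon is the convex hull of the contact points projected onto
# the horizontal plane. Contact coordinates here are illustrative.

import numpy as np
from scipy.spatial import ConvexHull, Delaunay

# Contact points (x, y, z) of a table-like object; gravity acts along -z.
contacts = np.array([
    [0.0, 0.0, 0.1],
    [1.0, 0.0, 0.3],
    [1.0, 1.0, 0.2],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 0.1],  # interior contact: does not change the hull
])

footprint = contacts[:, :2]            # project onto the horizontal plane
hull = ConvexHull(footprint)           # convex hull of the footprint
support_polygon = footprint[hull.vertices]

def center_of_mass_supported(com_xy) -> bool:
    """True if the center of mass projects inside the support polygon.

    This is the necessary (but not sufficient) condition for static
    stability discussed above.
    """
    simplex = Delaunay(support_polygon).find_simplex(np.atleast_2d(com_xy))
    return bool(simplex[0] >= 0)

print(center_of_mass_supported([0.5, 0.5]))  # True: inside the polygon
print(center_of_mass_supported([1.5, 0.5]))  # False: outside the polygon
```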
References
Classical mechanics | Support polygon | [
"Physics"
] | 594 | [
"Mechanics",
"Classical mechanics"
] |
9,552,145 | https://en.wikipedia.org/wiki/1%20%2B%202%20%2B%204%20%2B%208%20%2B%20%E2%8B%AF | In mathematics, 1 + 2 + 4 + 8 + ⋯ is the infinite series whose terms are the successive powers of two. As a geometric series, it is characterized by its first term, 1, and its common ratio, 2. As a series of real numbers it diverges to infinity, so the sum of this series is infinity.
However, it can be manipulated to yield a number of mathematically interesting results. Many summation methods are used in mathematics to assign numerical values even to a divergent series. For example, the Ramanujan summation of this series is −1, which is also the limit of the series using the 2-adic metric.
Summation
The partial sums of 1 + 2 + 4 + 8 + ⋯ are 1, 3, 7, 15, …; since these diverge to infinity, so does the series.
It is written as
$$\sum_{n=0}^{\infty} 2^n.$$
Therefore, any totally regular summation method gives a sum of infinity, including the Cesàro sum and Abel sum. On the other hand, there is at least one generally useful method that sums 1 + 2 + 4 + 8 + ⋯ to the finite value of −1. The associated power series
$$f(x) = 1 + 2x + 4x^2 + 8x^3 + \cdots + 2^n x^n + \cdots = \frac{1}{1 - 2x}$$
has a radius of convergence around 0 of only 1/2, so it does not converge at x = 1. Nonetheless, the so-defined function f has a unique analytic continuation to the complex plane with the point x = 1/2 deleted, and it is given by the same rule f(x) = 1/(1 − 2x). Since f(1) = −1, the original series 1 + 2 + 4 + 8 + ⋯ is said to be summable (E) to −1, and −1 is the (E) sum of the series. (The notation is due to G. H. Hardy in reference to Leonhard Euler's approach to divergent series.)
An almost identical approach (the one taken by Euler himself) is to consider the power series whose coefficients are all 1, that is,
$$1 + y + y^2 + y^3 + \cdots = \frac{1}{1 - y}$$
and plugging in y = 2. These two series are related by the substitution y = 2x.
The fact that (E) summation assigns a finite value to 1 + 2 + 4 + 8 + ⋯ shows that the general method is not totally regular. On the other hand, it possesses some other desirable qualities for a summation method, including stability and linearity. These latter two axioms actually force the sum to be −1, since they make the following manipulation valid:
$$s = 1 + 2 + 4 + 8 + \cdots = 1 + 2(1 + 2 + 4 + \cdots) = 1 + 2s,$$
from which s = −1.
In a useful sense, s = ∞ is a root of the equation s = 1 + 2s. (For example, ∞ is one of the two fixed points of the Möbius transformation z ↦ 1 + 2z on the Riemann sphere.) If some summation method is known to return an ordinary number for s, that is, not ∞, then it is easily determined. In this case s may be subtracted from both sides of the equation, yielding 0 = 1 + s, so s = −1.
The above manipulation might be called on to produce −1 outside the context of a sufficiently powerful summation procedure. For the most well-known and straightforward sum concepts, including the fundamental convergent one, it is absurd that a series of positive terms could have a negative value. A similar phenomenon occurs with the divergent geometric series 1 − 1 + 1 − 1 + ⋯ (Grandi's series), where a series of integers appears to have the non-integer sum 1/2. These examples illustrate the potential danger in applying similar arguments to the series implied by such recurring decimals as 0.111… and most notably 0.999…. The arguments are ultimately justified for these convergent series, implying that 0.111… = 1/9 and 0.999… = 1, but the underlying proofs demand careful thinking about the interpretation of endless sums.
It is also possible to view this series as convergent in a number system different from the real numbers, namely, the 2-adic numbers. As a series of 2-adic numbers this series converges to the same sum, −1, as was derived above by analytic continuation.
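A short numerical check of this 2-adic convergence, together with the two's-complement connection mentioned in the See also section, can be written in Python; the helper functions below are defined here for illustration, not taken from a library.

```python
# The partial sums s_n = 2^n - 1 converge to -1 in the 2-adic metric,
# because |s_n - (-1)|_2 = |2^n|_2 = 2^(-n) shrinks to zero.

def two_adic_abs(x: int) -> float:
    """2-adic absolute value: 2^(-v), where 2^v exactly divides x."""
    if x == 0:
        return 0.0
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return 2.0 ** -v

for n in range(1, 8):
    s_n = 2 ** n - 1                         # partial sum 1 + 2 + ... + 2^(n-1)
    print(n, s_n, two_adic_abs(s_n - (-1)))  # 2-adic distance to -1 halves

def to_signed(pattern: int, bits: int) -> int:
    """Interpret an unsigned bit pattern as a two's-complement integer."""
    return pattern - (1 << bits) if pattern >= 1 << (bits - 1) else pattern

# In an N-bit register, the partial sum 1 + 2 + 4 + ... + 2^(N-1) is the
# all-ones pattern, which two's complement reads as -1.
print(to_signed(2 ** 8 - 1, bits=8))  # -1
```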
See also
1 − 1 + 2 − 6 + 24 − 120 + ⋯
1 − 1 + 1 − 1 + ⋯ (Grandi's series)
1 + 1 + 1 + 1 + ⋯
1 − 2 + 3 − 4 + ⋯
1 + 2 + 3 + 4 + ⋯
1 − 2 + 4 − 8 + ⋯
Two's complement, a data convention for representing negative numbers where −1 is represented as if it were 1 + 2 + 4 + ⋯ + 2^(N−1) for an N-bit word.
Notes
References
Further reading
Binary arithmetic
Divergent series
Geometric series
P-adic numbers | 1 + 2 + 4 + 8 + ⋯ | [
"Mathematics"
] | 789 | [
"Binary arithmetic",
"P-adic numbers",
"Arithmetic",
"Number theory"
] |
9,552,389 | https://en.wikipedia.org/wiki/Anna%20Tambour | Anna Tambour is an author of satire, fable and other strange and hard-to-categorize fiction and poetry.
Her novel Crandolin was shortlisted for the 2013 World Fantasy Award. Tambour's collection Monterra's Deliciosa & Other Tales & was published in 2003, and Spotted Lily, a novel, in 2005. Ebook editions of both of these were published by infinity plus in 2011.
Reviews
Locus listed both Tambour's collections and both novels in their Recommended Reading lists. Her 2015 collection The Finest Ass in the Universe was shortlisted for an Aurealis Award for Best Collection. Spotted Lily was shortlisted in 2006 for the William L. Crawford Fantasy Award, and was recommended for a British Fantasy Society Award (Best Novel). In 2008, "The Jeweller of Second-hand Roe" won the Aurealis Award for best horror short story.
Tambour lives in the Australian bush, but has lived all over the world and is, in Tambour's words, "of no fixed nationality". In addition to writing fiction, Tambour also writes about and takes photographs of what she calls "magnificent insignificants".
Bibliography
Novels
Spotted Lily. Canton, OH: Wildside, 2005,
Spotted Lily. Ebook. Wivenhoe, UK: infinity plus ebooks, 2011, ASIN B004JKNQK8
Crandolin. Chomu Press, UK: 2012, 978-1-907681-19-6
Crandolin. Ebook. Cheeky Frawg Books, Tallahassee, FL, USA, 2016, ASIN B01B264SNK
Smoke, Paper, Mirrors. Infinity Plus, UK: 2017,
Short fiction
Collections
Monterra's Deliciosa & Other Tales &. Canton, OH: Prime, 2003,
Monterra's Deliciosa & Other Tales &. Ebook. Wivenhoe, UK: infinity plus ebooks, 2011, ASIN B004JKNQJ4
The Finest Ass in the Universe. Ticonderoga Publications, Greenwood, WA, Australia, 2015,
The Road to Neozon. Obsidian Sky Books, Detroit, MI, USA, 2018,
Selected stories
Bibliography notes
References
External links
Anna Tambour Official website
Anna Tambour's blog Medlar Comfits
Anna Tambour bibliography listing, at ISFDB
Living people
Asimov's Science Fiction people
Australian fantasy writers
Australian women novelists
Australian women poets
Australian women short story writers
Fabulists
Nature writers
Australian women food writers
Women satirists
Australian women science fiction and fantasy writers
Australian science fiction writers
Women science writers
Year of birth missing (living people) | Anna Tambour | [
"Technology"
] | 537 | [
"Women science writers",
"Women in science and technology"
] |
9,552,884 | https://en.wikipedia.org/wiki/Fenobam | Fenobam is an imidazole derivative developed by McNeil Laboratories in the late 1970s as a novel anxiolytic drug with an at-the-time-unidentified molecular target in the brain. Subsequently, it was determined that fenobam acts as a potent and selective negative allosteric modulator of the metabotropic glutamate receptor subtype mGluR5, and it has been used as a lead compound for the development of a range of newer mGluR5 antagonists.
Fenobam has anxiolytic effects comparable to those of benzodiazepine drugs, but was never commercially marketed for the treatment of anxiety due to dose-limiting side effects such as amnesia and psychotomimetic symptoms. Following the discovery of its activity as a potent negative allosteric modulator of mGluR5, fenobam has been re-investigated for many applications, with its profile of combined antidepressant, anxiolytic, analgesic and anti-addictive effects potentially useful given the common co-morbidity of these symptoms. It has also shown promising initial results in the treatment of fragile X syndrome.
Chemistry
Fenobam is known to exist in five crystalline forms, all of them exhibiting a tautomeric structure with the proton attached to the five-membered ring nitrogen.
See also
AZD9272
Basimglurant
MPEP
MTEP
MFZ 10-7
References
Anxiolytics
Imidazolines
Lactams
Ureas
3-Chlorophenyl compounds
Orphan drugs
MGlu5 receptor antagonists
Abandoned drugs
Glutamate receptor negative allosteric modulators | Fenobam | [
"Chemistry"
] | 356 | [
"Organic compounds",
"Ureas",
"Drug safety",
"Abandoned drugs"
] |
9,553,082 | https://en.wikipedia.org/wiki/Fernseh | The Fernseh AG television company was registered in Berlin on July 3, 1929, by John Logie Baird, Robert Bosch, Zeiss Ikon and D.S. Loewe as partners, with an initial capital of 100,000 Reichsmark. John Baird owned Baird Television Ltd. in London; Zeiss Ikon was a camera company in Dresden; D.S. Loewe owned a company in Berlin; and Robert Bosch owned Robert Bosch GmbH in Stuttgart. Fernseh AG did research and manufacturing of television equipment.
Etymology
The company name "Fernseh AG" is a compound of Fernsehen 'television' and Aktiengesellschaft (AG) 'joint-stock company'. The company was mainly known by its German abbreviation "FESE". See the See also section below for other uses of the Fernseh- prefix.
Early years
In 1929 Fernseh AG's original board of directors included: Emanuel Goldberg, Oliver George Hutchinson (for Baird), David Ludwig Loewe, and Erich Carl Rassbach (for Bosch) and Eberhard Falkenstein who did the legal work.
Carl Zeiss's company worked alongside the early Bosch company. Much of the early work was in the area of research and development. Along with early TV sets (DE-6, E1, DE10) Fernseh AG made the first "Remote Truck"/"OB van", an "intermediate-film" mobile television camera in August 1932. This was a film camera that had its film developed in the truck and a "telecine" then transmitted the signal almost "live".
Fernseh GmbH
In 1939 Robert Bosch GmbH took complete ownership of Fernseh AG when Zeiss Ikon AG sold its share of Fernseh AG.
In 1952 Fernseh moved to Darmstadt, Germany, and increased its broadcast product line.
In 1967 Fernseh, by then commonly called "Bosch Fernseh", introduced color TV products. Fernseh offered a full line of video and film equipment: professional video cameras, VTRs and telecine devices. On August 27, 1967, the first color TV program in Germany aired, with a live broadcast from a Bosch Fernseh outside broadcast (OB) van. The networks ZDF, NDR and WDR each acquired a new color OB van from Bosch Fernseh to begin broadcasting in color.
Fernsehanlagen GmbH
In 1972 Robert Bosch renamed its TV division Fernsehanlagen GmbH ("television systems"). The company supplied almost all the studio equipment for the 1972 Summer Olympics in Munich. The Darmstadt HQ had over 2000 employees in 1972. In 1972 Fernseh also started to manufacture SECAM TV studio equipment for Moscow.
Fernseh Inc.
In October 1979 Bell and Howell's TeleMation Inc. Division located in Salt Lake City, Utah, entered a joint venture with Robert Bosch GmbH, Bosch's Fernseh Division. The new joint venture was called Fernseh Inc., Bosch Fernseh Division, located in Darmstadt, Germany.
In April 1982 Bosch fully acquired Fernseh Inc., renaming it "Robert Bosch Corporation, Fernseh Division".
In 1986 Bosch entered into a new joint venture with Philips Broadcast in Breda, Netherlands. This new company was called Broadcast Television Systems or BTS inc. Philips had been in the Broadcast market for many years with a line of PC- and LDK- Norelco professional video cameras and other video products.
In 1995 Philips Electronics North America Corp. fully acquired BTS Inc., renaming it Philips Broadcast-Philips Digital Video Systems. Philips sold many of the Spirit DataCines.
In March 2001 this Philips division was sold to Thomson SA; the division was called Thomson Multimedia. In 2002, the French electronics giant Thomson SA also acquired the Grass Valley Group from a private investor that had acquired it three years earlier from Tektronix in Beaverton, Oregon, USA. The name of this division of Thomson was shortened to Grass Valley. Fernseh's Darmstadt factory, near the Darmstadt Train Station and European Space Operations Centre, was moved a short distance to Weiterstadt, Germany. (Later, Grass Valley was sold to Belden on February 6, 2014. Belden also owned Miranda.)
The Thomson Film Division, located in Weiterstadt and including the product line of the Spirit DataCine 4K, Bones Workstation, Scanity realtime film scanner and LUTher 3D Color Space converter, was sold to Parter Capital Group. The sale was made public on Sept. 9, 2008 and completed on Dec. 1, 2008. The new headquarters was still in Weiterstadt, the former Bosch Fernseh/BTS factory. Parter Capital Group continued to have worldwide offices to support products from Weiterstadt, Germany. The new name of the company is Digital Film Technology. DFT Digital Film Technology became part of a new company, Precision Mechatronics GmbH, in Weiterstadt, Germany. On October 1, 2012 Precision Mechatronics and DFT were acquired by Prasad Group, part of Prasad Studios (2012-2024). In 2013 DFT moved from Weiterstadt to Arheilgen-Darmstadt, Germany.
Products
Home Television sets (later moved to the Blaupunkt Division) (1930- )
Home Radios (later moved to the Blaupunkt Division) (1930- )
Vacuum tube tester
Early mechanical Camera for Mechanical television 1938, "Universal mechanical scanner"
Intermediate film system for Remote Truck (1936)
"Farvimeter" a universal electrical testing device (1947)
"Farvigraph" a universal Oscilloscope (1949)
Slide scanner for station ID and test patterns (1955)
Filmgeber film chain F16LP15 Analog
Fernseh Theater TV system, 1935
TV transmitter 1944
Master control, B&W Video switcher
Sound recorder — player, Diaabtaster DAT15, Fernseh
B & W film chain
OMY Color film chain - Analog
BCM-40 and B&W BM-20 2-inch Quadruplex videotape machines (1966–70s)
B & W cameras like Videokamera s/w K11 VK9 HA
M series monitors
TV Oscilloscope
Video-signal generators
Television standards converter (1970s), analog, Model NC 56 P 40, with a Plumbicon tube camera inside.
Transcoder to convert PAL to SECAM and SECAM to PAL (1972)
BCR pre BCN VTR
BCN series 1 inch type B videotape (1979–1989) Analog VTR
KC series color professional video camera KCU-40, KCR, KCK-40, KCK-R-40, KCP-40, KCP-60, KCA, KCF-1, KCM-125, KCA, KCN92, KCN(1967–1990),
Color film chain, with KCU-40 camera
MC series color and B&W video monitors: MC-37, MC-50, MCH 51, MH 21
OB Van - TV Remote Trucks - and Terminal Rack Equipment
RME series Mixers — Switcher - Vision mixer, Analog
FDL-60 Telecine - The world's first CCD telecine (1979–1989)
FRP-60 Color Corrector-Color grading (1983–1989)
FDL-90 Telecine (1989–1993) (now under BTS)
Noise/Grain Reducer: FDGR, DNR7, MNR9, MNR10, MNR11, VS4, Scream, Scream 4k
KCA-110 ENG Camera
KCF-1 ENG Camera (later Quartercam, not sold)
CCIR 601 Products CD7, DC7, 4X4 Booster, Test Gen. Encoders, Decoders.
DD series CCIR 601-D1 Mixers - Vision mixer DD5, DD10, DD20, DD30
DCR series D1 VTR DCR-100 DCR-300 DCR-500
BCH 1000 HDTV 1" VTR
KCH 1000 HDTV camera (RMH 1000)
FLH 1000 Telecine The world's first HDTV CCD Telecine (1994–1996)
Quadra 4:4:4 Telecine (1993–1998)
D6 HDTV VTR Uncompressed HDTV VTR (VooDoo)-(Gigabit Data Recorder) (2000–2006) (Now under Philips)
Spirit DataCine motion picture film scanner and HDTV Telecine SDC-2000 (1996–2006) also: SDC2001, SDC2002
Phantom Transfer Engine Software for Spirit Datacine Telecine for Virtual telecine(1998-)
Shadow HDTV Telecine STE (2000–2006)
VDC-2000 Specter Virtual telecine (1999–2002)
Specter FS Virtual telecine (2002–2006)
Spirit DataCine 4k Datacine - Telecine (2004-2014) also Spirit Spirit 2k/Spirit-HD (now Under Thomson-Grassvalley)
Bones Linux-Based Software for Spirit Datacine Telecine Transfer Engine Software (2005-2014 )
Bones Dailies (2008-2014)
LUTher 3D LUT Color Space (2005-2013)
Flexxity (2011-2014)
Scanity film scanner (2009- ) (Now under DFT)
Phantom 2: Linux-Based transfer Engine software and workstation for Spirit Datacine (2014-) (Now under DFT)
Polar HQ, a 9.3K native scanner, came out in 2023.
Photo gallery
Offices
Past and current offices in the cities of acquisitions (see History):
Cergy, France (Thomson World Headquarters)
Salt Lake City, Utah, United States - from TeleMation Inc -Bell and Howell
Beaverton, Oregon, United States- from Tektronix
Nevada City, California, United States — from Grassvalley Group
Breda, Netherlands - from Philips - Norelco
Weiterstadt - Darmstadt, Germany from Bosch Fernseh-(DFT),
In 2013 DFT moved from Weiterstadt to Arheilgen-Darmstadt, Germany.
See also
Hans Walz
Post-production
Video camera
Fernseh prefix:
Fernsehturm Berlin Television Tower.
Fernsehen, German word for "television".
Fernseh sprechstellen German Videotelephony.
Fernsehturm Stuttgart telecommunications tower in Stuttgart.
Fernsehsender Paul Nipkow first public television station in the world.
Fernsehturm (disambiguation) German word for television tower.
Fernsehen der DDR state television broadcaster in East Germany.
Fernsehturm Heidelberg Heidelberg transmission tower.
Fernsehturm Dresden-Wachwitz TV tower in Dresden.
Fernsehserien German for TV series which comprises several episodes.
ZDF Fernsehgarten "ZDF Television garden" is a German entertainment show.
Deutscher Fernseh-Rundfunk Early German Television Broadcasting
Fernsehproduktion a television production.
Fernsehnorm TV standard.
Fernsehpitaval Crime TV show from 1958 to 1978 on GDR television.
References and notes
External links
History of Bosch Fernseh
Bosch.com History
Film Products
Early Fernseh TV set
Early Camera
BCN Pictures
Fernseh mechanical TVs
Deutsches Fernsehmuseum - German TV Museum
Mass media companies of Germany
Cameras
Video storage
Film production
Television technology
Video hardware
Technicolor SA | Fernseh | [
"Technology",
"Engineering"
] | 2,318 | [
"Information and communications technology",
"Television technology",
"Electronic engineering",
"Video hardware",
"Cameras",
"Recording devices"
] |
9,553,271 | https://en.wikipedia.org/wiki/Aeroshell | An aeroshell is a rigid heat-shielded shell that helps decelerate a spacecraft and protects it from pressure, heat, and possible debris created by drag during atmospheric entry. Its main components are a heat shield (the forebody) and a back shell. The heat shield absorbs heat caused by air compression in front of the spacecraft during its atmospheric entry. The back shell carries the payload being delivered, along with important components such as a parachute, rocket engines, and monitoring electronics like an inertial measurement unit that tracks the orientation of the shell during parachute-slowed descent.
The aeroshell is used during the EDL, or entry, descent, and landing, phase of a spacecraft's flight. First, the aeroshell decelerates the spacecraft as it penetrates the planet's atmosphere and must dissipate the kinetic energy of the very high orbital speed. The heat shield absorbs some of this energy, while much is also dissipated into the atmospheric gases, mostly by radiation. During the latter stages of descent, a parachute is typically deployed and any heat shield is detached. Rockets may be located at the back shell to assist in control or to retropropulsively slow descent. Airbags may also be inflated to cushion impact with the ground, in which case the spacecraft could bounce on the planet's surface after the first impact. In many cases, communication throughout the process is relayed or recorded for subsequent transfer.
Aeroshells are a key component of space probes that must land intact on the surface of any object with an atmosphere. They have been used on the majority of missions returning payloads to the Earth. They are also used for all landing missions to Mars, Venus, Titan and (in the most extreme case) the Galileo probe to Jupiter. The size and geometry of an aeroshell is driven by the requirements of the EDL phase of its mission, as these parameters heavily influence its performance.
Components
The aeroshell consists of two main components: the heat shield, or forebody, which is located at the front of the aeroshell, and the back shell, which is located at the back of the aeroshell. The heat shield faces the ram direction (forward) during a spacecraft's atmospheric entry, allowing it to absorb the high heat caused by compression of air in front of the craft. The back shell completes the encapsulation of the payload. The back shell typically contains a parachute, pyrotechnic devices along with their electronics and batteries, an inertial measurement unit, and other hardware needed for the specific mission's entry, descent, and landing sequence. The parachute is located at the apex of the back shell and slows the spacecraft during EDL. The pyrotechnic control system releases devices such as nuts, rockets, and the parachute mortar. The inertial measurement unit reports the orientation of the back shell while it is swaying underneath the parachute. Retrorockets, if equipped, can assist in the terminal descent and landing of the spacecraft vehicle; alternatively or additionally, a lander may have retrorockets mounted on its own body for terminal descent and landing use (after the back shell has been jettisoned). Other rockets may be equipped to provide horizontal force to the back shell, helping to orient it to a more vertical position during the main retrorocket burn.
Design factors
A spacecraft's mission objective determines what flight requirements are needed to ensure mission success. These flight requirements are deceleration, heating, and impact and landing accuracy. A spacecraft's maximum deceleration must be low enough to keep the weakest points of the vehicle intact, but high enough to penetrate the atmosphere without rebounding. Spacecraft structure and payload mass affect how much maximum deceleration the vehicle can withstand. This load is expressed in "g's", multiples of Earth's gravitational acceleration. If the structure is well designed and made from robust material (such as steel), then it can withstand a higher number of g's. However, the payload needs to be considered: just because the spacecraft's structure can withstand high g's does not mean its payload can. For example, a payload of astronauts can only withstand approximately 9 g's, or 9 times their weight; values beyond this baseline increase the risk of brain injury or death. The vehicle must also be able to withstand the high temperature caused mainly by compression of the air ahead of it as it enters the atmosphere at hypersonic speed.
Finally, it must be able to penetrate the atmosphere and land on its target terrain accurately, without missing it. A more constricted landing area calls for stricter accuracy. In such cases, a spacecraft will be more streamlined and possess a steeper re-entry trajectory angle. These factors combine to affect the re-entry corridor, the region in which a spacecraft must travel in order to avoid burning up or rebounding out of the atmosphere. All of these requirements are met through the consideration, design, and adjustment of a spacecraft's structure and trajectory. Future missions, however, are making use of atmospheric rebound, allowing re-entry capsules to travel further during their descent and land in more convenient locations.
The overall dynamics of aeroshells are influenced by inertial and drag forces, as captured by the ballistic coefficient β = m/(C_d·A), where m is the mass of the aeroshell and its respective loads, C_d is the drag coefficient, and A is the reference drag area; β is therefore the mass per unit drag area. A higher mass per unit drag area causes aeroshell entry, descent, and landing to happen at lower, denser points of the atmosphere, and also reduces the elevation capability and the timeline margin for landing. This is because a higher-β spacecraft does not generate sufficient drag to slow down early in its descent, relying on the thicker atmosphere found at lower altitudes for the majority of its deceleration. Furthermore, a higher mass per unit drag area means less mass can be allocated to the spacecraft's payload, which has secondary impacts on funding and the mission's science goals. Factors that increase during EDL include heat load and heating rate, which force the system to accommodate the increase in thermal loads. This situation reduces the useful landed-mass capability of entry, descent, and landing, because an increase in thermal load leads to a heavier support structure and thermal protection system (TPS) for the aeroshell. Static stability also needs to be taken into consideration, as it is necessary to maintain a high-drag attitude. This is why a swept aeroshell forebody, as opposed to a blunt one, is required; the former shape ensures stability but also reduces drag area. Thus there is a resulting tradeoff between drag and stability that affects the design of an aeroshell's shape. The lift-to-drag ratio is another factor that needs to be considered; the ideal lift-to-drag ratio is non-zero, since maintaining a non-zero L/D allows for a higher parachute-deployment altitude and reduced loads during deceleration.
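As a rough numerical illustration of the ballistic coefficient tradeoff, consider the following Python sketch; the masses and aerodynamic values are hypothetical and chosen only to show how β scales.

```python
# Ballistic coefficient beta = m / (C_d * A): mass per unit drag area.
# All vehicle parameters below are hypothetical illustrations.

def ballistic_coefficient(mass_kg: float, c_d: float, area_m2: float) -> float:
    """Return beta in kg/m^2 for a vehicle of given mass and drag area."""
    return mass_kg / (c_d * area_m2)

# Two hypothetical entry vehicles sharing the same aeroshell geometry:
light = ballistic_coefficient(mass_kg=600.0, c_d=1.7, area_m2=5.5)
heavy = ballistic_coefficient(mass_kg=2400.0, c_d=1.7, area_m2=5.5)

print(f"light vehicle: beta = {light:.0f} kg/m^2")  # ~64 kg/m^2
print(f"heavy vehicle: beta = {heavy:.0f} kg/m^2")  # ~257 kg/m^2
# The higher-beta vehicle decelerates deeper in the atmosphere, leaving
# less altitude and time margin for parachute deployment and landing.
```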
Planetary Entry Parachute Program
NASA's Planetary Entry Parachute Program (PEPP) aeroshell, tested in 1966, was created to test parachutes for the Voyager Mars landing program. To simulate the thin Martian atmosphere, the parachute needed to be used at an altitude more than above the Earth. A balloon launched from Roswell, New Mexico was used to initially lift the aeroshell. The balloon then drifted west to the White Sands Missile Range, where the vehicle was dropped and the engines beneath the vehicle boosted it to the required altitude, where the parachute was deployed.
The Voyager program was later canceled, replaced by the much smaller Viking program several years later. NASA reused the Voyager name for the Voyager 1 and Voyager 2 probes to the outer planets, which had nothing to do with the Mars Voyager program.
Low-Density Supersonic Decelerator
The Low-Density Supersonic Decelerator or LDSD is a space vehicle designed to create atmospheric drag in order to decelerate during entry through a planet's atmosphere. It is essentially a disc-shaped vehicle containing an inflatable, doughnut-shaped balloon around the outside. The use of this type of system may allow an increase in the payload.
It is intended to be used to help a spacecraft decelerate before landing on Mars. This is done by inflating the balloon around the vehicle to increase the surface area and create atmospheric drag. After sufficient deceleration, a parachute on a long tether deploys to further slow the vehicle.
The vehicle is being developed and tested by NASA's Jet Propulsion Laboratory. Mark Adler is the project manager.
June 2014 test flight
The test flight took place on June 28, 2014, with the test vehicle launching from the United States Navy's Pacific Missile Range Facility in Kauaʻi, Hawaiʻi, at 18:45 UTC (08:45 local). A high-altitude helium balloon, which when fully inflated has a volume of , lifted the vehicle to around . The vehicle detached at 21:05 UTC (11:05 local), and four small, solid-fuel rocket motors spun up the vehicle to provide stability.
A half second after spin-up, the vehicle's Star 48B solid-fuel motor ignited, powering the vehicle to Mach 4 and an altitude of approximately . Immediately after rocket burn-out, four more rocket motors despun the vehicle. Upon slowing to Mach 3.8, the tube-shaped Supersonic Inflatable Aerodynamic Decelerator (SIAD-R configuration) deployed. SIAD is intended to increase atmospheric drag on the vehicle by increasing the surface area of its leading side, thus increasing the rate of deceleration.
Upon slowing to Mach 2.5 (around 107 seconds after SIAD deployment), the Supersonic Disk Sail (SSDS) parachute was deployed to slow the vehicle further. This parachute measures in diameter, nearly twice as large as the one used for the Mars Science Laboratory mission. However, it began tearing apart after deployment, and the vehicle impacted the Pacific Ocean at 21:35 UTC (11:35 local) travelling . All hardware and data recorders were recovered. Despite the parachute incident, the mission was declared a success; the primary goal was proving the flight worthiness of the test vehicle, while SIAD and SSDS were secondary experiments.
2015 test flights
Two more test flights of LDSD took place in mid-2015 at the Pacific Missile Range Facility. They focused on the SIAD-E and SSDS technologies, incorporating lessons learned during the 2014 test. Changes planned for the parachute include a rounder shape and structural reinforcement. Shortly after re-entry, however, the parachute was torn away.
Gallery
References
Space travel guide
Early Reentry Vehicles: Blunt Bodies and Ablatives
Spaceflight
Articles containing video clips | Aeroshell | [
"Astronomy"
] | 2,239 | [
"Spaceflight",
"Outer space"
] |
13,562,287 | https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%201 | Cyclin-dependent kinase 1, also known as CDK1 or cell division cycle protein 2 homolog, is a highly conserved protein that functions as a serine/threonine protein kinase and is a key player in cell cycle regulation. It has been highly studied in the budding yeast S. cerevisiae and the fission yeast S. pombe, where it is encoded by the genes cdc28 and cdc2, respectively. With its cyclin partners, Cdk1 forms complexes that phosphorylate a variety of target substrates (over 75 have been identified in budding yeast); phosphorylation of these proteins leads to cell cycle progression.
Structure
Cdk1 is a small protein (approximately 34 kilodaltons), and is highly conserved. The human homolog of Cdk1, CDK1, shares approximately 63% amino-acid identity with its yeast homolog. Furthermore, human CDK1 is capable of rescuing fission yeast carrying a cdc2 mutation. Cdk1 consists mostly of the bare protein kinase motif, which other protein kinases share. Cdk1, like other kinases, contains a cleft into which ATP fits. Substrates of Cdk1 bind near the mouth of the cleft, and Cdk1 residues catalyze the covalent bonding of the γ-phosphate of ATP to the oxygen of the hydroxyl group of the substrate's serine/threonine.
In addition to this catalytic core, Cdk1, like other cyclin-dependent kinases, contains a T-loop, which, in the absence of an interacting cyclin, prevents substrate binding to the Cdk1 active site. Cdk1 also contains a PSTAIRE helix, which, upon cyclin binding, moves and rearranges the active site, facilitating Cdk1 kinase activities.
Function
When bound to its cyclin partners, Cdk1 phosphorylation leads to cell cycle progression. Cdk1 activity is best understood in S. cerevisiae, so Cdk1 S. cerevisiae activity is described here.
In the budding yeast, initial cell cycle entry is controlled by two regulatory complexes, SBF (SCB-binding factor) and MBF (MCB-binding factor). These two complexes control G1/S gene transcription; however, they are normally inactive. SBF is inhibited by the protein Whi5; however, when phosphorylated by Cln3-Cdk1, Whi5 is ejected from the nucleus, allowing for transcription of the G1/S regulon, which includes the G1/S cyclins Cln1,2. G1/S cyclin-Cdk1 activity leads to preparation for S phase entry (e.g., duplication of centromeres or the spindle pole body), and a rise in the S cyclins (Clb5,6 in S. cerevisiae). Clb5,6-Cdk1 complexes directly lead to replication origin initiation; however, they are inhibited by Sic1, preventing premature S phase initiation.
Cln1,2 and/or Clb5,6-Cdk1 complex activity leads to a sudden drop in Sic1 levels, allowing for coherent S phase entry. Finally, phosphorylation by M cyclins (e.g., Clb1, 2, 3 and 4) in complex with Cdk1 leads to spindle assembly and sister chromatid alignment. Cdk1 phosphorylation also leads to the activation of the ubiquitin-protein ligase APCCdc20, an activation which allows for chromatid segregation and, furthermore, degradation of M-phase cyclins. This destruction of M cyclins leads to the final events of mitosis (e.g., spindle disassembly, mitotic exit).
Regulation
Given its essential role in cell cycle progression, Cdk1 is highly regulated. Most obviously, Cdk1 is regulated by its binding with its cyclin partners. Cyclin binding alters access to the active site of Cdk1, allowing for Cdk1 activity; furthermore, cyclins impart specificity to Cdk1 activity. At least some cyclins contain a hydrophobic patch which may directly interact with substrates, conferring target specificity. Furthermore, cyclins can target Cdk1 to particular subcellular locations.
In addition to regulation by cyclins, Cdk1 is regulated by phosphorylation. Phosphorylation of a conserved tyrosine (Tyr15 in humans) inhibits Cdk1; this phosphorylation is thought to alter ATP orientation, preventing efficient kinase activity. In S. pombe, for example, incomplete DNA synthesis may lead to stabilization of this phosphorylation, preventing mitotic progression. Wee1, conserved among all eukaryotes, phosphorylates Tyr15, whereas members of the Cdc25 family are phosphatases, counteracting this activity. The balance between the two is thought to help govern cell cycle progression. Wee1 is controlled upstream by Cdr1, Cdr2, and Pom1.
Cdk1-cyclin complexes are also governed by direct binding of Cdk inhibitor proteins (CKIs). One such protein, already discussed, is Sic1. Sic1 is a stoichiometric inhibitor that binds directly to Clb5,6-Cdk1 complexes. Multisite phosphorylation of Sic1 by Cdk1-Cln1/2 is thought to time Sic1 ubiquitination and destruction, and by extension, the timing of S-phase entry. Only when Sic1 inhibition is overcome can Clb5,6 activity occur and S-phase initiation begin.
Interactions
Cdk1 has been shown to interact with:
BCL2,
CCNB1,
CCNE1,
CDKN3
DAB2,
FANCC,
GADD45A,
LATS1,
LYN,
P53, and
UBC.
See also
E2F#E2F.2FpRb complexes
Hyperphosphorylation
cdc25
Maturation promoting factor
CDK
cyclin A
cyclin B
cyclin D
cyclin E
Wee (cell cycle)
Mastl
References
Further reading
External links
Cell cycle
Proteins
EC 2.7.11
Cell cycle regulators
"Chemistry",
"Biology"
] | 1,370 | [
"Biomolecules by chemical classification",
"Signal transduction",
"Cellular processes",
"Molecular biology",
"Proteins",
"Cell cycle",
"Cell cycle regulators"
] |
13,562,301 | https://en.wikipedia.org/wiki/CDKN1B | Cyclin-dependent kinase inhibitor 1B (p27Kip1) is an enzyme inhibitor that in humans is encoded by the CDKN1B gene. It encodes a protein which belongs to the Cip/Kip family of cyclin dependent kinase (Cdk) inhibitor proteins. The encoded protein binds to and prevents the activation of cyclin E-CDK2 or cyclin D-CDK4 complexes, and thus controls the cell cycle progression at G1. It is often referred to as a cell cycle inhibitor protein because its major function is to stop or slow down the cell division cycle.
Function
The p27Kip1 gene has a DNA sequence similar to other members of the "Cip/Kip" family which include the p21Cip1/Waf1 and p57Kip2 genes. In addition to this structural similarity the "Cip/Kip" proteins share the functional characteristic of being able to bind several different classes of Cyclin and Cdk molecules. For example, p27Kip1 binds to cyclin D either alone, or when complexed to its catalytic subunit CDK4. In doing so p27Kip1 inhibits the catalytic activity of Cdk4, which means that it prevents Cdk4 from adding phosphate residues to its principal substrate, the retinoblastoma (pRb) protein. Increased levels of the p27Kip1 protein typically cause cells to arrest in the G1 phase of the cell cycle. Likewise, p27Kip1 is able to bind other Cdk proteins when complexed to cyclin subunits such as Cyclin E/Cdk2 and Cyclin A/Cdk2.
Regulation
In general, extracellular growth factors which promote cell division reduce transcription and translation of p27Kip1. Also, increased synthesis of Cdk4,6/cyclin D causes binding of p27 to this complex, sequestering it from binding to the Cdk2/cyclin E complex. Furthermore, an active Cdk2/cyclin E complex will phosphorylate p27 and tag it for ubiquitination. A mutation of this gene may lead to loss of control over the cell cycle, leading to uncontrolled cellular proliferation. Loss of p27 expression has been observed in metastatic canine mammary carcinomas. Decreased TGF-beta signalling has been suggested to cause loss of p27 expression in this tumor type.
A structured cis-regulatory element has been found in the 5' UTR of the P27 mRNA where it is thought to regulate translation relative to cell cycle progression.
P27 regulation is accomplished by two different mechanisms. In the first, its concentration is changed by the individual rates of transcription, translation, and proteolysis. P27 can also be regulated by changing its subcellular location. Both mechanisms act to reduce levels of p27, allowing for the activation of Cdk1 and Cdk2, and for the cell to begin progressing through the cell cycle.
Transcription
Transcription of the CDKN1B gene is activated by Forkhead box class O family (FoxO) proteins, which also act downstream to promote p27 nuclear localization and decrease levels of COP9 subunit 5 (COPS5), which helps in the degradation of p27. Transcription of p27 is activated by FoxO in response to cytokines, promyelocytic leukaemia proteins, and nuclear Akt signaling. P27 transcription has also been linked to another tumor suppressor gene, MEN1, in pancreatic islet cells, where it promotes CDKN1B expression.
Translation
Translation of CDKN1B reaches its maximum during quiescence and early G1. Translation is regulated by polypyrimidine tract-binding protein (PTB), ELAVL1, ELAVL4, and microRNAs. PTB acts by binding the CDKN1B IRES to increase translation; when PTB levels decrease, G1 phase is shortened. ELAVL1 and ELAVL4 also bind the CDKN1B IRES, but they do so to decrease translation, and so depletion of either results in G1 arrest.
Proteolysis
Degradation of the p27 protein occurs as cells exit quiescence and enter G1. Protein levels continue to fall rapidly as the cell continues through G1 and enters S phase. One of the best-understood mechanisms of p27 proteolysis is its polyubiquitylation by the SCF(Skp2) ubiquitin ligase complex, which contains S-phase kinase-associated protein 1 (Skp1) and 2 (Skp2). SCF(Skp2) degrades p27 after it has been phosphorylated at threonine 187 (Thr187) by either active cyclin E- or cyclin A-CDK2. Skp2 is mainly responsible for the degradation of p27 that continues through S phase. However, Skp2 is rarely expressed in early G1, when p27 levels first begin to decrease. During early G1, proteolysis of p27 is instead regulated by the KIP1 ubiquitylation-promoting complex (KPC), which binds to its CDK inhibitory domain. P27 also has three tyrosine phosphorylation sites at residues 74, 88, and 89. Of these, Tyr74 is of special interest because it is specific to p27-type inhibitors.
Nuclear export
Alternatively to the transcription, translation, and proteolytic method of regulation, p27 levels can also be changed by exporting p27 to the cytoplasm. This occurs when p27 is phosphorylated on Ser(10) which allows for CRM1, a nuclear export carrier protein, to bind to and remove p27 from the nucleus. Once p27 is excluded from the nucleus it cannot inhibit the cell's growth. In the cytoplasm it may be degraded entirely or retained. This step occurs very early when the cell is exiting the quiescent phase and thus is independent of Skp2 degradation of p27.
MicroRNA regulation
Because p27 levels can be moderated at the translational level, it has been proposed that p27 may be regulated by miRNAs. Recent research has suggested that both miR-221 and miR-222 control p27 levels although the pathways are not well understood.
Role in cancer
Proliferation
p27 is considered a tumor suppressor because of its function as a regulator of the cell cycle. In cancers it is often inactivated via impaired synthesis, accelerated degradation, or mislocalization. Inactivation of p27 is generally accomplished post-transcriptionally by the oncogenic activation of various pathways, including receptor tyrosine kinases (RTK), phosphatidylinositol 3-kinase (PI3K), SRC, or Ras-mitogen-activated protein kinase (MAPK). These act to accelerate the proteolysis of the p27 protein and allow the cancer cell to undergo rapid division and uncontrolled proliferation. When p27 is phosphorylated by Src at tyrosine 74 or 88, it ceases to inhibit cyclin E-Cdk2. Src has also been shown to reduce the half-life of p27, meaning p27 is degraded faster. Many epithelial cancers are known to overexpress EGFR, which plays a role in the proteolysis of p27 and in Ras-driven proteolysis. Non-epithelial cancers use different pathways to inactivate p27. Many cancer cells also upregulate Skp2, which is known to play an active role in the proteolysis of p27. As a result, Skp2 is inversely related to p27 levels and directly correlates with tumor grade in many malignancies.
Metastasis
In cancer cells, p27 can also be mislocalized to the cytoplasm in order to facilitate metastasis. The mechanisms by which it acts on motility differ between cancers. In hepatocellular carcinoma cells p27 co-localizes with actin fibers to act on GTPase Rac and induce cell migration. In breast cancer cytoplasmic p27 reduced RHOA activity which increased a cell's propensity for motility.
This role for p27 may indicate why cancer cells rarely fully inactivate or delete p27. By retaining p27 in some capacity, it can be exported to the cytoplasm during tumorigenesis and manipulated to aid in metastasis. 70% of metastatic melanomas were shown to exhibit cytoplasmic p27, while in benign melanomas p27 remained localized to the nucleus. P27 is mislocalized to the cytoplasm by the MAP2K, Ras, and Akt pathways, although the mechanisms are not entirely understood. Additionally, phosphorylation of p27 at T198 by RSK1 has been shown to mislocalize p27 to the cytoplasm as well as inhibit the RhoA pathway. Because inhibition of RhoA results in a decrease in both stress fibers and focal adhesions, cell motility is increased. P27 can also be exported to the cytoplasm by oncogenic activation of the PI3K pathway. Thus, mislocalization of p27 to the cytoplasm in cancer cells allows them to proliferate unchecked and provides for increased motility.
In contrast to these results, p27 has also been shown to be an inhibitor of migration in sarcoma cells. In these cells, p27 bound to stathmin which prevents stathmin from binding to tubulin and thus polymerization of microtubules increased and cell motility decreased.
MicroRNA regulation
Studies of various cell lines, including glioblastoma cell lines, three prostate cancer cell lines, and a breast tumor cell line, showed that suppressing miR-221 and miR-222 expression resulted in p27-dependent G1 growth arrest. When p27 was then knocked down, cell growth resumed, indicating a strong role for miRNA-regulated p27. Studies in patients have demonstrated an inverse correlation between miR-221&222 and p27 protein levels. Additionally, nearby healthy tissue showed high expression of the p27 protein while miR-221&222 concentrations were low.
Regulation in specific cancers
In most cancers reduced levels of nuclear p27 are correlated with increased tumor size, increased tumor grade, and a higher propensity for metastasis. However the mechanisms by which levels of p27 are regulated vary between cancers.
Breast
In breast cancer, Src activation has been shown to correlate with low levels of p27. Breast cancers that were estrogen receptor-negative and progesterone receptor-negative were more likely to display low levels of p27 and more likely to have a high tumor grade. Similarly, breast cancer patients with BRCA1/2 mutations were more likely to have low levels of p27.
Prostate
A mutation in the CDKN1B gene has been linked to an increased risk for hereditary prostate cancer in humans.
Multiple Endocrine Neoplasia
Mutations in the CDKN1B gene have been reported in families affected by the development of primary hyperparathyroidism and pituitary adenomas, and this syndrome has been classified as MEN4 (multiple endocrine neoplasia, type 4). Testing for CDKN1B mutations has been recommended in patients with suspected MEN in whom previous testing for the more common MEN1/RET mutations is negative.
Clinical significance
Prognostic value
Several studies have demonstrated that reduced p27 levels indicate a poorer patient prognosis. However, because of the dual, contrasting roles p27 plays in cancer (as an inhibitor of growth and as a mechanism for metastasis) low levels of p27 may demonstrate that a cancer is not aggressive and will remain benign.
In ovarian cancer, p27-negative tumors progressed in 23 months compared to 85 months for p27-positive tumors, and thus p27 could be used as a prognostic marker. Similar studies have correlated low levels of p27 with a worse prognosis in breast cancer. Colorectal carcinomas that lacked p27 were shown to have increased p27-specific proteolysis and a median survival of only 69 months, compared to 151 months for patients with high or normal levels of p27. The authors proposed that clinicians could use patient-specific levels of p27 to determine who would benefit from adjuvant therapy. Similar correlations were observed in patients with non-small cell lung cancer and in those with colon and prostate cancer.
So far studies have only evaluated the prognostic value of p27 retrospectively and a standardized scoring system has not been established. However it has been proposed that clinicians should evaluate a patient's p27 levels in order to determine if they will be responsive to certain chemotoxins which target fast growing tumors where p27 levels are low. Or in contrast, if p27 levels are found to be high in a patient's cancer, their risk for metastasis is higher and the physician can make an informed decision about their treatment plan. Because p27 levels are controlled post-transcriptionally, proteomic surveys can be used to establish and monitor a patient's individual levels which aids in the future of individualized medicine.
The following cancers have been demonstrated to have an inverse correlation with p27 expression and prognosis: oro-pharyngo-laryngeal, oesophageal, gastric, colon, lung, melanoma, glioma, breast cancer, prostate, lymphoma, leukemia.
Correlation to treatment response
P27 may also allow clinicians to better select an appropriate treatment for a patient. For example, patients with non-small cell lung cancer who were treated with platinum based chemotherapy showed reduced survival if they had low levels of p27. Similarly low levels of p27 correlated with poor results from adjuvant chemotherapy in breast cancer patients.
Value as a therapeutic target
P27 has been explored as a potential target for cancer therapy because its levels are highly correlated to patient prognosis. This is true for a wide spectrum of cancers including colon, breast, prostate, lung, liver, stomach, and bladder.
Use of microRNAs for therapy
Because of the role miRNAs play in p27 regulation, research is underway to determine if antagomiRs that block the activity of the miR221&222 and allow for p27 cell grow inhibition to take place could act as therapeutic cancer drugs.
Role in Regeneration
Knockdown of CDKN1B stimulates regeneration of cochlear hair cells in mice. Since CDKN1B prevents cells from entering the cell cycle, inhibition of the protein could cause cell-cycle re-entry and subsequent division. In mammals, where regeneration of cochlear hair cells normally does not occur, this inhibition could help regrow damaged cells that are otherwise incapable of proliferation. In fact, when the CDKN1B gene is disrupted in adult mice, hair cells of the organ of Corti proliferate, while those in control mice do not. Lack of CDKN1B expression appears to release the hair cells from natural cell-cycle arrest. Because hair cell death in the human cochlea is a major cause of hearing loss, the CDKN1B protein could be an important factor in the clinical treatment of deafness.
Interactions
CDKN1B has been shown to interact with:
AKT1,
CKS1B,
Cyclin D3,
Cyclin E1,
Cyclin-dependent kinase 2,
Cyclin-dependent kinase 4,
Grb2,
NUP50
SKP2,
SPDYA, and
XPO1.
See also
Sic1 (homologue in Saccharomyces cerevisiae)
P21waf-1 (another CDK inhibitor)
Hyaluronic acid synthase
Hyaluronidase
References
Further reading
External links
Cell cycle regulators
Tumor suppressor genes | CDKN1B | [
"Chemistry"
] | 3,305 | [
"Cell cycle regulators",
"Signal transduction"
] |
13,562,723 | https://en.wikipedia.org/wiki/RELA | Transcription factor p65 also known as nuclear factor NF-kappa-B p65 subunit is a protein that in humans is encoded by the RELA gene.
RELA, also known as p65, is a REL-associated protein involved in NF-κB heterodimer formation, nuclear translocation and activation. NF-κB is an essential transcription factor complex involved in all types of cellular processes, including cellular metabolism, chemotaxis, etc. Phosphorylation and acetylation of RELA are crucial post-translational modifications required for NF-κB activation. RELA has also been shown to modulate immune responses, and activation of RELA is positively associated with multiple types of cancer.
Gene and expression
RELA, or v-rel avian reticuloendotheliosis viral oncogene homolog A, is also known as p65 or NFKB3. It is located on chromosome 11q13, and its nucleotide sequence is 1,473 nucleotides long. The RELA protein has four isoforms, the longest and predominant one being 551 amino acids. RELA is expressed alongside p50 in various cell types, including epithelial/endothelial cells and neuronal tissues.
Structure
RELA is one member of the NF-κB family, one of the most intensively studied transcription factor complexes. Seven proteins encoded by five genes are involved in the NF-κB complex, namely p105, p100, p50, p52, RELA, c-REL and RELB. Like other proteins in this complex, RELA contains an N-terminal REL-homology domain (RHD) and a C-terminal transactivation domain (TAD). The RHD is involved in DNA binding, dimerization and NF-κB/REL inhibitor interaction. The TAD, on the other hand, is responsible for interacting with the basal transcription complex, including many transcriptional coactivators such as TBP, TFIIB and CREB-CBP. RELA–p50 is the most commonly found complex among NF-κB homodimers and heterodimers, and is the functional component participating in nuclear translocation and activation of NF-κB.
RELA is a 65 kDa protein.
Phosphorylation
Phosphorylation of RELA plays a key role in regulating NF-κB activation and function. Subsequent to NF-κB nuclear translocation, RELA undergoes site-specific post-translational modifications that further enhance NF-κB function as a transcription factor. RELA can be phosphorylated in either the RHD region or the TAD region, attracting different interaction partners. Triggered by lipopolysaccharide (LPS), protein kinase A (PKA) specifically phosphorylates serine 276 in the RHD domain in the cytoplasm, controlling NF-κB DNA binding and oligomerization. On the other hand, mitogen- and stress-activated kinase 1 (MSK1) is also able to phosphorylate RELA at residue 276 under TNFα induction in the nucleus, increasing the NF-κB response at the transcriptional level. Phosphorylation of serine 311 by protein kinase C zeta type (PKCζ) serves the same purpose.
Two residues in the TAD region are targeted by phosphorylation. After IL-1 or TNFα stimulation, serine 529 is phosphorylated by casein kinase II (CKII), while serine 536 is phosphorylated by IκB kinases (IKKs). In response to DNA damage, ribosomal S6 kinase-1 (RSK1) also has the ability to phosphorylate RELA at serine 536 in a p53-dependent manner. Several other kinases are also able to phosphorylate RELA under different conditions, including glycogen synthase kinase-3β (GSK3β), AKT/phosphatidylinositol 3-kinase (PI3K) and NF-κB-activating kinase (NAK, i.e. TANK-binding kinase-1 (TBK1) and TRAF2-associated kinase (T2K)). The fact that RELA can be phosphorylated by a collection of kinases at different sites/regions within the protein under different stimulations might suggest a synergistic effect of these modifications.
Phosphorylation at these sites enhances NF-κB transcriptional response via tightened binding to transcription coactivators. For example, CBP and p300 binding to RELA are enhanced when serine 276 or 311 is phosphorylated.
The status of several phosphorylation sites determines RELA stability, which is mediated by ubiquitin-dependent proteolysis. Cell-type-specific phosphorylation is also observed for RELA. Multiple-site phosphorylation is common in endothelial cells, and different cell types may experience different stimuli, leading to targeted phosphorylation of RELA by different kinases. For instance, IKK2 is found to be mainly responsible for phosphorylating serine 536 in monocytes and macrophages, or upon CD40 receptor binding in hepatic stellate cells. IKK1 functions as the major kinase phosphorylating serine 536 under other stimuli, such as ligand activation of the lymphotoxin-β receptor (LTβR).
Acetylation
In vivo studies revealed that RELA is also under acetylation modification in the nucleus, which is just as important as phosphorylation as a post-translational modification of proteins.
Lysines 218, 221 and 310 are acetylation targets within RELA, and the response to acetylation is site-specific. For instance, lysine 221 acetylation facilitates RELA dissociation from IκBα and enhances its DNA-binding affinity. Lysine 310 acetylation is indispensable for the full transcriptional activity of RELA, but does not affect its DNA-binding ability. One hypothesis about RELA acetylation suggests that acetylation aids its subsequent recognition by transcriptional co-activators with bromodomains, which are specialized in recognizing acetylated lysine residues. Acetylation of lysines 122 and 123 is found to be negatively correlated with RELA transcriptional activation.
Incompletely understood mechanisms mediate the acetylation of RELA, possibly involving the p300/CBP and p300/CBP-associated factor coactivators, under TNFα or phorbol myristate acetate (PMA) stimulation both in vivo and in vitro. RELA is also under the control of deacetylation via HDACs, and HDAC3 is the mediator of this process both in vivo and in vitro.
Methylation
Methylation of lysine 218 and 221 together or lysine 37 alone in the RHD domain of RELA can lead to increased response to cytokines such as IL-1 in mammalian cell culture.
Interactions
As the prototypical heterodimer complex member of NF-κB, together with p50, RELA/p65 interacts with various proteins in both the cytoplasm and the nucleus during classical NF-κB activation and nuclear translocation. In the inactive state, the RELA/p50 complex is mainly sequestered by IκBα in the cytosol. TNFα, LPS and other factors serve as activation inducers, triggering phosphorylation at residues 32 and 36 of IκBα, which leads to rapid degradation of IκBα via the ubiquitin-proteasomal system and subsequent release of the RELA/p50 complex. The RELA nuclear localization signal, previously masked by IκBα, is now exposed, and rapid translocation of NF-κB occurs. In parallel, there is a non-classical NF-κB activation pathway involving the proteolytic cleavage of p100 into p52 instead of p50. This process does not require RELA, hence it will not be discussed in detail here.
After NF-κB nuclear localization due to TNFα stimulation, the p50/RELA heterodimer functions as a transcription factor and binds to a variety of genes involved in many biological processes, such as leukocyte activation/chemotaxis, negative regulation of the TNF/IKK pathway, cellular metabolism and antigen processing, to name a few.
Phosphorylation of RELA at different residues also enables its interaction with CDKs and P-TEFb. Phosphorylation at serine 276 in RELA allows its interaction with P-TEFb containing CDK9 and cyclin T1 subunits, and phospho-ser276 RELA-P-TEFb complex is necessary for IL-8 and Gro-β activation. Another mechanism is involved in the activation of genes preloaded with Pol II in a RELA serine 276 phosphorylation independent manner.
RELA has been shown to interact with:
APBA2,
AHR,
ASCC3,
BRCA1,
BTRC,
c-Fos,
c-Jun,
C22orf25,
CDK9,
CEBPB,
CEBPE,
CREBBP,
CSNK2A1,
CSNK2A2,
DHX9,
EP300,
ETHE1,
FUS,
GCN5,
HDAC1,
HDAC2,
HDAC3,
ING4,
IκBα,
KLF5,
MDM2,
MEN1,
MSK1,
MTPN,
NCF1,
NFKB1,
NFKB2,
NFKBIB,
NFKBIE,
NR3C1,
NCOR2,
PARP1,
PDLIM2,
PIAS3,
PIM1,
PIN1,
PKA,
POU2F1,
PPARG,
PPP1R13L,
PRKCZ,
REL,
RFC1,
RNF25,
SIRT1,
SOCS1,
SP1,
STAT3,
TAF4B,
TBP,
TP53, and
TRIB3.
Role in immune system
Gene knockout of NF-κB genes via homologous recombination in mice has shown the role of these components in innate and adaptive immune responses. RELA knockout is embryonically lethal in mice due to liver apoptosis. Lymphocyte activation failure is also observed, suggesting that RELA is indispensable for the proper development of the immune system. In comparison, deletion of other REL-related genes does not cause embryonic development failure, though different levels of defects are noted. The fact that cytokines such as TNFα and IL-1 can stimulate the activation of RELA also supports its participation in immune responses.
In general, RELA participates in adaptive immunity and responses to invading pathogens via NF-κB activation. Mice lacking individual NF-κB proteins are deficient in B- and T-cell activation and proliferation, cytokine production and isotype switching. Mutations in RELA have also been found responsible for inflammatory bowel disease.
Cancer
NF-κB/RELA activation has been found to be correlated with cancer development, suggesting the potential of RELA as a cancer biomarker. Specific modification patterns of RELA have also been observed in many cancer types.
Prostate
RELA may have a potential role as biomarker for prostate cancer progression and metastases, as suggested by the association found between RELA nuclear localization and prostate cancer aggressiveness and biochemical recurrence.
Thyroid
There is a strong correlation between nuclear localization of RELA and clinicopathological parameters in papillary thyroid carcinoma (PTC), suggesting a role of NF-κB activation in tumor growth and aggressiveness in PTC.
Beyond its use as a biomarker, serine 536 phosphorylation of RELA is also correlated with nuclear translocation and the expression of transactivated genes such as COX-2, IL-8 and GST-pi in follicular thyroid carcinomas, as shown via morphoproteomic analysis.
Leukemia
Mutations in the transactivation domain of RELA can lead to a decrease in transactivating ability, and such mutations can be found in lymphoid neoplasia.
Head and neck
Nuclear localization of NF-κB/RELA is positively correlated with tumor micrometastases into lymph and blood and negatively correlated with patient survival outcome in patients with head and neck squamous cell carcinoma (HNSCC). This suggests a role of NF-κB/RELA as a possible target for targeted-therapy.
Breast
There is both a physical and a functional association between RELA and aryl hydrocarbon receptor (AhR), and the subsequent activation of c-myc gene transcription in breast cancer cells. Another paper reported interactions between estrogen receptor (ER) and NF-κB members, including p50 and RELA. It is shown that ERα interacts with both p50 and RELA in vitro and in vivo, and RELA antibody can reduce ERα:ERE complex formation. The paper claims a mutual repression between ER and NF-κB.
Monogenic Behçet’s Disease-like conditions
Behçet's disease-like conditions are increasingly recognized and to date predominantly involve loss-of-function variants in TNFAIP3. However, a RELA mutation that results in a truncated protein variant has been reported to cause severe autoinflammatory disease due to impaired NF-κB signaling and increased apoptosis. The phenotypes associated with this disease include mucocutaneous ulcerative syndrome and neuromyelitis optica (NMO).
References
Further reading
External links
Transcription factors | RELA | [
"Chemistry",
"Biology"
] | 2,978 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,563,938 | https://en.wikipedia.org/wiki/Universal%20parabolic%20constant | The universal parabolic constant is a mathematical constant.
It is defined as the ratio, for any parabola, of the arc length of the parabolic segment formed by the latus rectum to the focal parameter. The focal parameter is twice the focal length. The ratio is denoted P.
In the diagram, the latus rectum is pictured in blue, the parabolic segment that it forms in red and the focal parameter in green. (The focus of the parabola is the point F and the directrix is the line L.)
The value of P is

\[ P = \ln\left(1+\sqrt{2}\right) + \sqrt{2} \approx 2.29558714939\ldots \]

The circle and parabola are unique among conic sections in that they have such a universal constant; the analogous ratios for ellipses and hyperbolas depend on their eccentricities. This reflects the fact that all circles are similar and all parabolas are similar, whereas ellipses and hyperbolas are not.
Derivation
Take \(y = \frac{x^2}{4f}\) as the equation of the parabola, where f is the focal length. The focal parameter is \(p = 2f\) and the semilatus rectum is \(\ell = 2f\). The latus rectum spans \(-2f \le x \le 2f\), so the ratio of its arc length to the focal parameter is

\[ P = \frac{1}{2f}\int_{-2f}^{2f} \sqrt{1 + \frac{x^2}{4f^2}}\,dx = \sqrt{2} + \ln\left(1+\sqrt{2}\right). \]
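The closed form can be checked numerically. Below is a minimal sketch (assuming SciPy is available) that integrates the arc-length element of \(y = x^2/4f\) over the latus rectum and divides by the focal parameter:

```python
import math
from scipy.integrate import quad

f = 1.0  # focal length; the ratio P is independent of this choice

# Arc-length element of y = x^2/(4f): sqrt(1 + (dy/dx)^2), with dy/dx = x/(2f)
arc_length, _ = quad(lambda x: math.sqrt(1.0 + (x / (2.0 * f)) ** 2), -2.0 * f, 2.0 * f)

P_numeric = arc_length / (2.0 * f)  # divide by the focal parameter 2f
P_closed = math.sqrt(2.0) + math.log(1.0 + math.sqrt(2.0))
print(P_numeric, P_closed)  # both print 2.2955871493926...
```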
Properties
P is a transcendental number.
Proof. Suppose that P is algebraic. Then \(P - \sqrt{2} = \ln(1+\sqrt{2})\) must also be algebraic. However, by the Lindemann–Weierstrass theorem, \(e^{\ln(1+\sqrt{2})} = 1+\sqrt{2}\) would then be transcendental, which is not the case. Hence P is transcendental.
Since P is transcendental, it is also irrational.
Applications
The average distance from a point randomly selected in the unit square to its center is \(P/6 \approx 0.3826\).
Proof. Since the unit square has area 1, the average equals \(\int_{-1/2}^{1/2}\int_{-1/2}^{1/2}\sqrt{x^2+y^2}\,dx\,dy\), which evaluates to \(P/6\).
There is also an interesting geometrical reason why this constant appears in unit squares: the average distance between the center of a unit square and a point on the square's boundary is \(P/4\).
If we uniformly sample every point on the perimeter of the square, draw the line segment from the center to each point, and lay the segments side by side (suitably scaled down), the curve obtained is a parabola.
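The P/6 claim is easy to sanity-check by Monte Carlo sampling; the following sketch uses only the Python standard library:

```python
import math
import random

random.seed(1)
n = 1_000_000
total = 0.0
for _ in range(n):
    # Distance from a uniform random point in the unit square to its center (0.5, 0.5)
    total += math.hypot(random.random() - 0.5, random.random() - 0.5)

P = math.sqrt(2.0) + math.log(1.0 + math.sqrt(2.0))
print(total / n)  # Monte Carlo estimate, ~0.3826
print(P / 6.0)    # exact value, 0.3825978...
```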
References and footnotes
Mathematical constants
Conic sections
Parabolas
Real transcendental numbers | Universal parabolic constant | [
"Mathematics"
] | 396 | [
"Mathematical constants",
"Mathematical objects",
"Numbers",
"nan"
] |
13,564,729 | https://en.wikipedia.org/wiki/P.%20Devadas | Professor P. Devadas (c. 1923 – 18 December 2015) was an amateur astronomer from India.
A fellow of the Royal Astronomical Society and a member of the British Astronomical Association, the Astronomical Society of India, the Planetary Society and the Optical Society of India, Devadas headed the Tamil Nadu Astronomy Association. He taught observational and practical astronomy and astrophysics to school and college students visiting the Tamil Nadu Science and Technology Centre, where he was called upon to give lectures.
A recipient of the Tamil Nadu State Award for popularising science, Devadas manufactured astronomical telescopes and established an engineering firm in Chennai for the production of telescopes in India. The instruments are being supplied to research institutions, university departments, colleges, schools and amateurs.
Devadas died aged 92 at his home in Guindy on 18 December 2015.
References
Amateur astronomers
20th-century Indian astronomers
1920s births
2015 deaths | P. Devadas | [
"Astronomy"
] | 181 | [
"Astronomers",
"Amateur astronomers"
] |
13,564,737 | https://en.wikipedia.org/wiki/Proxy-based%20estimating | Proxy-Based Estimating (PROBE) is an estimating process used in the Personal Software Process (PSP) to estimate size and effort.
Proxy Based Estimating (PROBE) is the estimation method introduced by Watts Humphrey (of the Software Engineering Institute at Carnegie Mellon University) as part of the Personal Software Process (a discipline that helps individual software engineers monitor, test, and improve their own work).
PROBE is based on the idea that if an engineer is building a component similar to one they built previously, then it will take about the same effort as it did in the past.
In the PROBE method, individual engineers use a database to keep track of the size and effort of all of the work that they do, developing a history of the effort they have put into their past projects, broken into individual components. Each component in the database is assigned a type (“calculation,” “data,” “logic,” etc.) and a size (from “very small” to “very large”).
When a new project must be estimated, it is broken down into tasks that correspond to these types and sizes. A formula based on linear regression is used to calculate the estimate for each task, as in the sketch below.
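A minimal sketch of a PROBE-style estimate follows. The history data, the proxy-size unit, and the task size below are hypothetical illustrations, not values from the source; the point is only that the regression coefficients are fitted to the engineer's own past components:

```python
# Hypothetical personal history: (proxy size of component, actual effort in hours)
history = [(120, 9.5), (300, 22.0), (80, 6.0), (450, 34.5), (200, 15.0)]

n = len(history)
sum_x = sum(x for x, _ in history)
sum_y = sum(y for _, y in history)
sum_xy = sum(x * y for x, y in history)
sum_xx = sum(x * x for x, _ in history)

# Ordinary least-squares fit of effort = b0 + b1 * size
b1 = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
b0 = (sum_y - b1 * sum_x) / n

new_task_size = 250  # proxy size of the task being estimated
print(f"estimated effort: {b0 + b1 * new_task_size:.1f} hours")
```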
Additional information on PROBE can be found in A Discipline for Software Engineering by Watts Humphrey (Addison Wesley, 1994).
See also
Estimation in software engineering
External links
Proxy Based Estimating: Concept from chambers.com
Software engineering costs | Proxy-based estimating | [
"Engineering"
] | 289 | [
"Software engineering",
"Software engineering stubs"
] |
13,564,754 | https://en.wikipedia.org/wiki/Veterinary%20virology | Veterinary virology is the study of viruses in non-human animals. It is an important branch of veterinary medicine.
Rhabdoviruses
Rhabdoviruses are a diverse family of single-stranded, negative-sense RNA viruses that infect a wide range of hosts, from plants and insects to fish and mammals. The Rhabdoviridae family consists of six genera, two of which, cytorhabdoviruses and nucleorhabdoviruses, only infect plants. Novirhabdoviruses infect fish, while vesiculoviruses, lyssaviruses and ephemeroviruses infect mammals, fish and invertebrates. The family includes pathogens such as rabies virus, vesicular stomatitis virus and potato yellow dwarf virus that are of public health, veterinary, and agricultural significance.
Foot-and-mouth disease virus
Foot-and-mouth disease virus (FMDV) is a member of the genus Aphthovirus in the family Picornaviridae and is the cause of foot-and-mouth disease in pigs, cattle, sheep and goats. It is a non-enveloped, positive-strand RNA virus. FMDV is highly contagious and enters the body through inhalation.
Pestiviruses
Pestiviruses have single-stranded, positive-sense RNA genomes. They cause classical swine fever (CSF) and bovine viral diarrhea (BVD). Mucosal disease is a distinct, chronic persistent infection, whereas BVD is an acute infection.
Arteriviruses
Arteriviruses are small, enveloped, animal viruses with an icosahedral core containing a positive-sense RNA genome. The family includes equine arteritis virus (EAV), porcine reproductive and respiratory syndrome virus (PRRSV), lactate dehydrogenase elevating virus (LDV) of mice and simian haemorrhagic fever virus (SHFV).
Coronaviruses
Coronaviruses are enveloped viruses with a positive-sense RNA genome and a nucleocapsid of helical symmetry. They infect the upper respiratory and gastrointestinal tract of mammals and birds. They are the cause of a wide range of diseases in cats, dogs, pigs, rodents, cattle and humans. Transmission is by the faecal-oral route.
Toroviruses
Torovirus is a genus of viruses within the family Coronaviridae, subfamily Torovirinae, that primarily infect vertebrates; it includes Berne virus of horses and Breda virus of cattle. Toroviruses cause gastroenteritis in mammals, including, rarely, humans.
Influenza
Influenza is caused by RNA viruses of the family Orthomyxoviridae and affects birds and mammals.
Avian influenza
Wild aquatic birds are the natural hosts for a large variety of influenza A viruses. Occasionally viruses are transmitted from this reservoir to other species and may then cause devastating outbreaks in domestic poultry or give rise to human influenza pandemics.
Swine influenza
Bluetongue virus
Bluetongue virus (BTV), a member of the genus Orbivirus within the family Reoviridae, causes serious disease in livestock (sheep, goats, cattle). It is a non-enveloped, double-stranded RNA virus with a segmented genome.
Circoviruses
Circoviruses are small single-stranded DNA viruses. There are two genera: gyrovirus, with one species called chicken anemia virus; and circovirus, which includes porcine circovirus types 1 and 2, psittacine beak and feather disease virus, pigeon circovirus, canary circovirus, goose circovirus.
Herpesviruses
Herpesviruses are ubiquitous pathogens infecting a variety of animals, including humans. Hosts include many economically important species such as abalone, oysters, salmon, poultry (avian infectious laryngotracheitis, Marek's disease), cattle (bovine malignant catarrhal fever), dogs, goats, horses, cats (feline viral rhinotracheitis), and pigs (pseudorabies). Infections may be severe and may result in fatalities or reduced productivity. Therefore, outbreaks of herpesviruses in livestock cause significant financial losses and are an important area of study in veterinary virology.
African swine fever virus
African swine fever virus (ASFV) is a large double-stranded DNA virus which replicates in the cytoplasm of infected cells and is the only member of the Asfarviridae family. The virus causes a lethal haemorrhagic disease in domestic pigs. Some strains can cause death of animals within as little as a week after infection. In other species, the virus causes no obvious disease. ASFV is endemic to sub-Saharan Africa and exists in the wild through a cycle of infection between ticks and wild pigs, bushpigs and warthogs.
Retroviruses
Retroviruses are established pathogens of veterinary importance. They are generally a cause of cancer or immune deficiency.
Flaviviruses
Flaviviruses constitute a family of linear, single-stranded RNA(+) viruses. Flaviviruses include the West Nile virus, dengue virus, Tick-borne Encephalitis Virus, Yellow Fever Virus, and several other viruses. Many flavivirus species can replicate in both mammalian and insect cells. Most flaviviruses are arthropod borne and multiply in both vertebrates and arthropods. The viruses in this family that are of veterinary importance include Japanese encephalitis virus, St. Louis encephalitis virus, West Nile virus, Israel turkey meningoencephalomyelitis virus, Sitiawan virus, Wesselsbron virus, yellow fever virus and the tick-borne flaviviruses e.g. louping ill virus.
Paramyxoviruses
Paramyxoviruses are a diverse family of non-segmented negative strand RNA viruses that include many highly pathogenic viruses affecting humans, animals, and birds. These include canine distemper virus (dogs), phocine distemper virus (seals), cetacean morbillivirus (dolphins and porpoises) Newcastle disease virus (birds) and rinderpest virus (cattle). Some paramyxoviruses such as the henipaviruses are zoonotic pathogens, occurring primarily in an animal hosts, but also able to infect humans.
Parvovirus
Parvoviruses are linear, non-segmented single-stranded DNA viruses, with an average genome size of 5000 nucleotides. They are classified as group II viruses in Baltimore classification of viruses. Parvoviruses are among the smallest viruses (hence the name, from Latin parvus meaning small) and are 18–28 nm in diameter.
Parvoviruses can cause disease in some animals, including starfish and humans. Because the viruses require actively dividing cells to replicate, the type of tissue infected varies with the age of the animal. The gastrointestinal tract and lymphatic system can be affected at any age, leading to vomiting, diarrhea and immunosuppression but cerebellar hypoplasia is only seen in cats that were infected in the womb or at less than two weeks of age, and disease of the myocardium is seen in puppies infected between the ages of three and eight weeks.
See also
Bat virome
History of virology
Social history of viruses
Virus evolution
References
Molecular biology
Animal viral diseases | Veterinary virology | [
"Chemistry",
"Biology"
] | 1,575 | [
"Biochemistry",
"Molecular biology"
] |
13,565,582 | https://en.wikipedia.org/wiki/Phenindamine | Phenindamine (Nolahist, Thephorin) is an antihistamine and anticholinergic closely related to cyproheptadine. It was developed by Hoffman-La Roche in the late 1940s. It is used to treat symptoms of the common cold and allergies, such as sneezing, itching, rashes, and hives. Its efficacy against some symptoms of opioid withdrawal was researched in the 1950s and 1960s in a number of countries; William S. Burroughs' book Junkie mentions this technique. Like many other first-generation antihistamines, phenindamine has useful potentiating effects on many narcotic analgesics and is even more useful with those opioids which release histamine when in the body.
Nolahist was originally manufactured in the US by Carnrick Laboratories, and later by Amarin Pharmaceuticals. When that company ceased its American operations, its product line was acquired by Valeant, but they declined to resume manufacturing Nolahist. The last produced lot bore an expiration date of 10/2005, and the product is no longer available.
Phenindamine exhibits optical isomerism, as do other chemicals of its general type, ranging from pethidine and alphaprodine to cyproheptadine to certain herbicides and other industrial chemicals. In the example at hand, the corresponding isomer is isophenidamine (CAS 60295-96-7), which is inactive.
See also
Cyproheptadine
References
Abandoned drugs
H1 receptor antagonists
Muscarinic antagonists | Phenindamine | [
"Chemistry"
] | 327 | [
"Drug safety",
"Abandoned drugs"
] |
13,565,605 | https://en.wikipedia.org/wiki/Deptropine | Deptropine (Brontina) also known as dibenzheptropine, is an antihistamine with anticholinergic properties acting at the H1 receptor. It is usually marketed as the citrate salt.
References
H1 receptor antagonists
Tropanes
Dibenzocycloheptenes
Ethers | Deptropine | [
"Chemistry"
] | 69 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
13,565,642 | https://en.wikipedia.org/wiki/Survivable%20Low%20Frequency%20Communications%20System | The AN/FRC-117 Survivable Low Frequency Communications System (SLFCS) was a communications system designed to be able to operate, albeit at low data transfer rates, during and after a nuclear attack.
The system used both very low frequency (VLF), and low frequency (LF) radio bands.
Mission
SLFCS was used for United States nuclear forces' command and control communications for Emergency Action Message dissemination and force direction. Single channel, receive only capability was provided at ICBM launch control centers. The single channel operated between 14 kHz and 60 kHz to receive commands from remotely located Combat Operations Center – Transmit/Receive (T/R) sites; this low frequency range is only slightly affected by nuclear blasts. For example, the Silver Creek site typically transmitted at 34.5 kHz. The transmitter could be tuned to any designated frequency in the above mentioned range. Receivers could receive down to 14.0 kHz.
SLFCS' primary advantage was that it would experience minimal radio signal degradation as a result of nuclear detonations. It would be an alternate means of communication during and after detonations, providing a survivable command and control communications network for the Strategic Air Command (SAC), the Joint Chiefs of Staff (JCS), and North American Aerospace Defense Command (NORAD). SLFCS would also relay signals from the Navy's LF/VLF systems.
Locations
Transmitters
Silver Creek, Nebraska (Detachment 1, 1st Aerospace Communications Group)
Hawes Air Force Station, California (Detachment 2, 33rd Communications Squadron)
PACCS aircraft
NAOC (formerly known as NEACP)
GREEN PINE Stations
The GREEN PINE communication system took messages broadcast over SLFCS and 'upconverted' them to UHF messages for bombers headed north. There were a handful of GREEN PINE stations in the northern portions of Alaska and Canada.
Receive Only
Altus AFB, Oklahoma
Barksdale AFB, Louisiana (8th Air Force Command Post)
Beale AFB, California (9 SRW Command Post)
Blytheville AFB, Arkansas (97 BW Command Post) CLOSED
Carswell AFB, Texas (7 BW Command Post) CLOSED
Castle AFB, California (93 BW Command Post) CLOSED
Davis-Monthan AFB, Arizona - 390th SMW (18 LCCs) CLOSED
Dyess AFB, Texas (96 BW Command Post)
Eielson AFB, Alaska (6 SW Command Post)
Ellsworth AFB, South Dakota - 44th SMW (16 terminals - 15 LCCs and Wing Command Post) CLOSED
Fairchild AFB, Washington
F.E. Warren AFB, Wyoming - 90th Missile Wing, 20th Air Force (21 terminals- 20 LCCs, 1 at 20th AF Missile Operations Center)
Grand Forks AFB, North Dakota - 321st SMW (16 terminals - 15 LCCs and Wing Command Post) CLOSED
Griffiss AFB, New York CLOSED
Grissom AFB, Indiana
K.I. Sawyer AFB, Michigan CLOSED
Little Rock AFB, Arkansas - 308th SMW (18 LCCs) CLOSED
Loring AFB, Maine (42 BW Command Post) CLOSED
Rickenbacker AFB, Ohio
Malmstrom AFB, Montana 341st SMW (20 terminals - 20 LCCs)
Mather AFB, California CLOSED
McConnell AFB, Kansas 22 BW (Command Post), 381st SMW (18 LCCs) CLOSED
Minot AFB, North Dakota (5 BW/91 SMW 15 LCCs)
Pease AFB, New Hampshire (509 BW Command Post) CLOSED
Plattsburgh AFB, New York (380 BW Command Post) CLOSED
Robins AFB, Georgia (19 BW Command Post)
Seymour Johnson AFB, North Carolina (68 BW Command Post)
Travis AFB, California
Vandenberg AFB, California (1 STRAD Command Post)1 LCC O1A
Whiteman AFB, Missouri - 351st SMW (16 terminals - 15 LCCs and Wing Command Post) CLOSED
Wurtsmith AFB, Michigan CLOSED
March AFB, California (15th AF COC)
History
The first program (487L) took six years from the time of the initial requirement to full operation. The second part (616A), which was basically a modification of an already operational system, took 10 years.
Chronology
1961
29 Sep – Headquarters USAF issues Specific Operating Requirement 193, for the Survivable Low Frequency Communications System; system is envisioned to link Alternate Joint Command Center with command centers of SAC, NORAD, SAC numbered air forces with LF radio networks; a total of 18 transmit/receive (T/R) sites and 375 LF-receive only (R/O) in all SAC launch facilities, mobile Minuteman trains, SAC air base control rooms, and SAC UHF positive control stations in the northern tier
1962
12 Mar – Amendment to SOR 193 changes number of transmit T/R sites to 19 (three each at AJCC, SAC, NORAD, two each at 2d Air Force, 15th Air Force and 8th Air Force, one each at Larson AFB, Southern Alaska, Sondrestrom AB, and the United Kingdom; Full Operating Capability was extended from July 1964 to May 1965.
27 Apr – A revised program directive delineated the network; T/R equipment would be installed at HQ SAC, the SAC numbered air force headquarters, and in the ABNCP, Alternate Joint Command Center (AJCC) and NORAD command center. 14 Green Pine stations, missile launch control centers, all SAC bomber wing command posts would have R/O terminals, as would the NORAD regional control centers. Initial Operating Capability (IOC) was placed at 1 Oct 1966.
1968
29 Jul 1968 – Silver Creek site accepted by SAC
19 Aug 1968 – Silver Creek site turned on for continuous operation
5 Sep 1968 – Silver Creek begins operational testing
1971
16 Jun – SLFCS IOC obtained by SAC units
1974
26 Jul – HQ USAF approves Program 616A (Improved SLFCS); system would improve SLFCS by providing anti-jam protection, improved modems, increased range and make it compatible with the Navy LF/VLF system
1978
SAC conducts Initial Operational Test and Evaluation (IOT&E) at Ellsworth AFB, South Dakota for Program 616A; test is successful
1986
30 Sep – deactivation of Hawes Radio Relay Site, Hinkley, California
20 Oct – destruction of Hawes Radio Relay Site by the Army Corps of Engineers
1996
Rapid Execution and Combat Targeting (REACT) upgrade to Minuteman launch control centers complete; advances allow SLFCS messages to be handled automatically by Higher Authority Communications/Rapid Message Processing Element (HAC/RMPE)
2005
11 Nov – last Minuteman Launch Control Center receives Minimum Essential Emergency Communications Network (MEECN) upgrade, rendering SLFCS obsolete.
2010 – The Minuteman Minimum Essential Emergency Communications Network Program (MMP) upgrade is in progress; Advanced EHF capability will be available once the upgrade is complete.
See also
Post Attack Command and Control System (PACCS)
Ground Wave Emergency Network (GWEN)
Minimum Essential Emergency Communications Network (MEECN)
Emergency Rocket Communications System (ERCS)
Hawes Radio Tower – Location of the West Coast SLFCS transmitter until the mid-1980s at Hawes field
Silver Creek Communications Annex – Location of the East Coast SLFCS transmitter until the mid-1990s
References
External links
Mojave Roads: "Hawes Journal"
Military radio systems of the United States
Nuclear warfare
United States Department of Defense
Military communications of the United States
United States nuclear command and control | Survivable Low Frequency Communications System | [
"Chemistry"
] | 1,527 | [
"Radioactivity",
"Nuclear warfare"
] |
13,565,654 | https://en.wikipedia.org/wiki/Bromazine | Bromazine, sold under the brand names Ambodryl, Ambrodil, and Deserol among others, also known as bromodiphenhydramine, is an antihistamine and anticholinergic medication of the ethanolamine class. It is an analogue of diphenhydramine with a bromine substitution on one of the phenyl rings.
Synthesis
Grignard reaction between phenylmagnesium bromide and para-bromobenzaldehyde [1122-91-4] (1) gives p-bromobenzhydrol [29334-16-5] (2). Halogenation with acetyl bromide in benzene solvent gives p-bromobenzhydryl bromide [18066-89-2] (3). Finally, etherification with deanol completes the synthesis of bromazine (4).
Side effects
Continuous and/or cumulative use of anticholinergic medications, including first-generation antihistamines, is associated with higher risk for cognitive decline and dementia in elderly people.
References
4-Bromophenyl compounds
Ethers
Dimethylamino compounds
H1 receptor antagonists | Bromazine | [
"Chemistry"
] | 252 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
13,565,803 | https://en.wikipedia.org/wiki/Baltic%20Naval%20Squadron | The Baltic Naval Squadron (BALTRON) was inaugurated in 1998. The main responsibility of BALTRON is to improve the co-operation between the Baltic states in the areas of naval defence and security. Constant readiness to contribute units to NATO-led operations is assured through BALTRON.
Each Baltic state appoints one or two ships to BALTRON for a certain period and staff members for one year. Service in BALTRON provides both the crew and staff officers with an opportunity to serve in an international environment and acquire valuable experience in mine countermeasures. Estonia provides BALTRON with on-shore facilities for the staff.
Membership
There are currently three countries in BALTRON: Estonia, Latvia and Lithuania.
References
External links
The Baltic Naval Squadron - BALTRON
Military units and formations of NATO
Military projects of the Baltic states
Military of Lithuania
Military of Latvia
Military units and formations of Estonia | Baltic Naval Squadron | [
"Engineering"
] | 169 | [
"Military projects",
"Military projects of the Baltic states"
] |
13,566,263 | https://en.wikipedia.org/wiki/Dukhin%20number | The Dukhin number () is a dimensionless quantity that characterizes the contribution of the surface conductivity to various electrokinetic and electroacoustic effects, as well as to electrical conductivity and permittivity of fluid heterogeneous systems. The number was named after Stanislav and Andrei Dukhin.
Overview
It was introduced by Lyklema in “Fundamentals of Interface and Colloid Science”. A recent IUPAC Technical Report used this term explicitly and detailed several means of measurement in physical systems.
The Dukhin number is the ratio of the surface conductivity \(\kappa^{\sigma}\) to the fluid bulk electrical conductivity \(K_m\) multiplied by the particle size a:

\[ \mathrm{Du} = \frac{\kappa^{\sigma}}{K_m\,a} \]
There is another expression of this number that is valid when the surface conductivity is associated only with the motion of ions above the slipping plane in the double layer. In this case, the value of the surface conductivity depends on the ζ-potential, which leads to the following expression for the Dukhin number for a symmetrical electrolyte with equal ion diffusion coefficients:

\[ \mathrm{Du} = \frac{2\left(1 + \frac{3m}{z^{2}}\right)}{\kappa a}\left(\cosh\frac{zF\zeta}{2RT} - 1\right) \]
where the parameter m characterizes the contribution of electro-osmosis to the motion of ions within the double layer:

\[ m = \frac{2\varepsilon_0\varepsilon_m R^{2}T^{2}}{3\eta F^{2} D} \]
F is Faraday constant
T is absolute temperature
R is gas constant
C is ions concentration in bulk
z is ion valency
ζ is electrokinetic potential
ε0 is vacuum dielectric permittivity
εm is fluid dielectric permittivity
η is dynamic viscosity
D is the diffusion coefficient
κ is the reciprocal of the Debye length
a is the particle radius
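As a worked illustration of the ζ-potential-based expression, the sketch below evaluates Du for a 100 nm particle in a dilute aqueous 1:1 electrolyte. All parameter values are illustrative assumptions, not values from the source:

```python
import math

# Physical constants
F = 96485.33      # Faraday constant, C/mol
R = 8.314         # gas constant, J/(mol K)
eps0 = 8.854e-12  # vacuum dielectric permittivity, F/m

# Assumed conditions (illustrative only)
T = 298.15        # absolute temperature, K
eps_m = 78.5      # relative permittivity of water
eta = 0.89e-3     # dynamic viscosity of water, Pa s
D = 2.0e-9        # ion diffusion coefficient, m^2/s
C = 10.0          # bulk ion concentration, mol/m^3 (10 mM)
z = 1             # ion valency
zeta = 0.05       # electrokinetic (zeta) potential, V
a = 100e-9        # particle radius, m

# Reciprocal Debye length for a symmetrical electrolyte
kappa = math.sqrt(2.0 * z**2 * F**2 * C / (eps0 * eps_m * R * T))

# Electro-osmotic contribution parameter m
m = 2.0 * eps0 * eps_m * (R * T) ** 2 / (3.0 * eta * F**2 * D)

Du = (2.0 * (1.0 + 3.0 * m / z**2) / (kappa * a)) * (math.cosh(z * F * zeta / (2.0 * R * T)) - 1.0)
print(f"Debye length = {1e9 / kappa:.2f} nm, Du = {Du:.3f}")  # ~3 nm, Du ~0.05
```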
References
Chemical mixtures
Colloidal chemistry
Condensed matter physics
Soft matter | Dukhin number | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 302 | [
"Colloidal chemistry",
"Soft matter",
"Phases of matter",
"Materials science",
"Colloids",
"Surface science",
"Chemical mixtures",
"Condensed matter physics",
"nan",
"Matter"
] |
13,566,669 | https://en.wikipedia.org/wiki/Leo%20Cluster | The Leo Cluster (Abell 1367) is a galaxy cluster about 330 million light-years distant (z = 0.022) in the constellation Leo, with at least 70 major galaxies. The galaxy known as NGC 3842 is the brightest member of this cluster. Along with the Coma Cluster, it is one of the two major clusters comprising the Coma Supercluster, which in turn is part of the CfA2 Great Wall, which is hundreds of millions light years long and is one of the largest known structures in the universe.
A team of scientists observed the Leo Cluster with the intention of creating a catalog of extended ionized gas (EIG) clouds. This data also led to the discovery of many star-forming parent galaxies within the cluster. These star-forming galaxies turned out to be very similar to those found in the neighboring Coma Cluster. The EIGs, however, turned out to be longer in the Leo Cluster than in the Coma Cluster. This likely means that the Leo Cluster and its stars are younger than most comparable clusters in the universe and evolve at a different pace.
Most dense galaxy clusters are composed mostly of elliptical galaxies. The Leo Cluster, however, mostly contains spiral galaxies, suggesting that it is much younger than other comparable clusters, such as the Coma Cluster. It is also home to one of the universe's largest known black holes, which lies in the center of NGC 3842. The black hole is 9.7 billion times more massive than the Sun.
It can be very difficult for stars to form within the Leo Cluster. This is because infalling galaxies have a tendency to strip gas away from other stars that are attempting to form. This has led to the creation of a "hot zone" where stars are unable to maintain their gas long enough to properly form.
There appear to be a number of subpopulations within the Leo Cluster. The first consists of elliptical galaxies that seem to be roughly as old as the universe. The second contains red-sequence lenticular (lens-shaped) galaxies whose ages are directly tied to their mass. The third and final subpopulation consists of galaxies in which star formation is still taking place, and these are morphologically distributed.
See also
Abell catalogue
List of Abell clusters
References
External links
The Coma Supercluster with images of A1367
Leo-Galaxienhaufen (Abell 1367)
Abell 1367 - The Leo Galaxy Cluster
http://www.slate.com/blogs/bad_astronomy/2013/07/03/leo_cluster_a_buzzing_hive_of_galaxies.html
http://www.cloudynights.com/page/articles/cat/column/small-wonders/small-wonders-leo-r486
1367
Coma Supercluster
Great Wall filament
Abell richness class 2
Galaxy clusters
Leo (constellation) | Leo Cluster | [
"Astronomy"
] | 598 | [
"Galaxy clusters",
"Astronomical objects",
"Leo (constellation)",
"Constellations"
] |
13,566,675 | https://en.wikipedia.org/wiki/Peroxynitrate | Peroxynitrate (or peroxonitrate) refers to salts of the unstable peroxynitric acid, HNO4. Peroxynitrate is unstable and decomposes to nitrate and dioxygen.
No solid peroxynitrate salts are known. However, there is a report that the chemist Sebastian Moiseevich Tanatar produced sodium peroxynitrate octahydrate (NaNO3·H2O2·8H2O) by evaporating a solution of sodium nitrate and hydrogen peroxide until crystallisation began, then mixing with alcohol to form crystals of the octahydrate.
References
Nitrogen oxyanions | Peroxynitrate | [
"Chemistry"
] | 141 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
13,566,834 | https://en.wikipedia.org/wiki/Ontario%20Institute%20of%20Audio%20Recording%20Technology | The Ontario Institute of Audio Recording Technology (OIART) is a private career college in London, Ontario, Canada. The institute trains audio engineers for a variety of careers in music production, recording arts, audio engineering, sound recording and related fields. Founded in 1983 by engineer and producer Paul Steenhuis, the college focuses solely on sound. Graduates of the OIART program receive a Diploma in Audio Recording Technology.
OIART has been described as "prestigious" and as "the Harvard of Audio Engineering" and attracts students from around the world.
Although not accredited, the school is registered as legally required by the Ontario Ministry of Training, Colleges and Universities. OIART has the largest full-time faculty of any audio school in Canada.
Curriculum
OIART offers an intensive, practical program in audio engineering. The 1300-hour program is taught in the school's eight digital multi-track equipped studios, including two lecture-style theatres. The school features an overall 5:1 student-to-instructor ratio. In lab settings the student-to-instructor ratio is 4:1 and in tutorials the ratio is 1:1. There is one hour of lab for each hour of classroom instruction. All instructors are audio professionals. Graduates typically log more than 600 hours of studio experience during the course.
In 2006 OIART added video game sound production to its curriculum and became the first audio education program in North America to be registered as a game developer with Creative Labs.
Graduation rate and employment prospects
OIART has a completion rate that is significantly higher than the provincial average, with 90 percent of students graduating. This compares with 72.9 percent for the rest of Ontario's private colleges. More than 93 percent of graduates were employed within six months compared with 79.4 percent provincially in 2004-2005.
Graduates have pursued entry-level career opportunities as:
Notable alumni
Dan Brodbeck, Producer
Haluk Kurosman, Co-Founder, GRGDN (Turkish Recording Label)
Siegfried Meier : Recording, Mixing and Mastering engineer, Studio Owner, Musician
Rob Nokes, President & Founder, Sounddogs.com
Mike Tompkins, YouTube personality & Acapella Producer
See also
Audio Engineering Society
Society of Professional Audio Recording Services
References
External links
Private universities and colleges in Ontario
Audio engineering schools
Education in London, Ontario
1983 establishments in Ontario
Universities and colleges established in 1983 | Ontario Institute of Audio Recording Technology | [
"Engineering"
] | 484 | [
"Audio engineering",
"Audio engineering schools"
] |
13,566,984 | https://en.wikipedia.org/wiki/Double%20layer%20%28surface%20science%29 | In surface science, a double layer (DL, also called an electrical double layer, EDL) is a structure that appears on the surface of an object when it is exposed to a fluid. The object might be a solid particle, a gas bubble, a liquid droplet, or a porous body. The DL refers to two parallel layers of charge surrounding the object. The first layer, the surface charge (either positive or negative), consists of ions which are adsorbed onto the object due to chemical interactions. The second layer is composed of ions attracted to the surface charge via the Coulomb force, electrically screening the first layer. This second layer is loosely associated with the object. It is made of free ions that move in the fluid under the influence of electric attraction and thermal motion rather than being firmly anchored. It is thus called the "diffuse layer".
Interfacial DLs are most apparent in systems with a large surface-area-to-volume ratio, such as a colloid or porous bodies with particles or pores (respectively) on the scale of micrometres to nanometres. However, DLs are important to other phenomena, such as the electrochemical behaviour of electrodes.
DLs play a fundamental role in many everyday substances. For instance, homogenized milk exists only because fat droplets are covered with a DL that prevents their coagulation into butter. DLs exist in practically all heterogeneous fluid-based systems, such as blood, paint, ink and ceramic and cement slurry.
The DL is closely related to electrokinetic phenomena and electroacoustic phenomena.
Development of the (interfacial) double layer
Helmholtz
When an electronic conductor is brought in contact with a solid or liquid ionic conductor (electrolyte), a common boundary (interface) among the two phases appears. Hermann von Helmholtz was the first to realize that charged electrodes immersed in electrolyte solutions repel the co-ions of the charge while attracting counterions to their surfaces. Two layers of opposite polarity form at the interface between electrode and electrolyte. In 1853, he showed that an electrical double layer (DL) is essentially a molecular dielectric and stores charge electrostatically. Below the electrolyte's decomposition voltage, the stored charge is linearly dependent on the voltage applied.
This early model predicted a constant differential capacitance, independent of the charge density, that depends on the dielectric constant of the electrolyte solvent and on the thickness of the double layer.
This model, while a good foundation for the description of the interface, does not consider important factors including diffusion/mixing of ions in solution, the possibility of adsorption onto the surface, and the interaction between solvent dipole moments and the electrode.
Gouy–Chapman
Louis Georges Gouy in 1910 and David Leonard Chapman in 1913 both observed that capacitance was not a constant and that it depended on the applied potential and the ionic concentration. The "Gouy–Chapman model" made significant improvements by introducing a diffuse model of the DL. In this model, the charge distribution of ions as a function of distance from the metal surface allows Maxwell–Boltzmann statistics to be applied. Thus the electric potential decreases exponentially away from the surface into the fluid bulk.
Gouy-Chapman layers may bear special relevance in bioelectrochemistry. The observation of long-distance inter-protein electron transfer through the aqueous solution has been attributed to a diffuse region between redox partner proteins (cytochromes c and c1) that is depleted of cations in comparison to the solution bulk, thereby leading to reduced screening, electric fields extending several nanometers, and currents decreasing quasi exponentially with the distance at rate ~1 nm−1. This region is termed "Gouy-Chapman conduit" and is strongly regulated by phosphorylation, which adds one negative charge to the protein surface that disrupts cationic depletion and prevents long-distance charge transport. Similar effects are observed at the redox active site of photosynthetic complexes.
Stern
The Gouy-Chapman model fails for highly charged DLs. In 1924, Otto Stern suggested combining the Helmholtz model with the Gouy-Chapman model: in Stern's model, some ions adhere to the electrode as suggested by Helmholtz, giving an internal Stern layer, while some form a Gouy-Chapman diffuse layer.
The Stern layer accounts for ions' finite size, and consequently an ion's closest approach to the electrode is on the order of the ionic radius. The Stern model has its own limitations, namely that it effectively treats ions as point charges, assumes all significant interactions in the diffuse layer are Coulombic, assumes the dielectric permittivity to be constant throughout the double layer, and assumes that the fluid viscosity is constant above the slipping plane.
Grahame
D. C. Grahame modified the Stern model in 1947. He proposed that some ionic or uncharged species can penetrate the Stern layer, although the closest approach to the electrode is normally occupied by solvent molecules. This could occur if ions lose their solvation shell as they approach the electrode. He called ions in direct contact with the electrode "specifically adsorbed ions". This model proposed the existence of three regions. The inner Helmholtz plane (IHP) passes through the centres of the specifically adsorbed ions. The outer Helmholtz plane (OHP) passes through the centres of solvated ions at the distance of their closest approach to the electrode. Finally the diffuse layer is the region beyond the OHP.
Bockris/Devanathan/Müller (BDM)
In 1963, J. O'M. Bockris, M. A. V. Devanathan and Klaus Müller proposed the BDM model of the double layer that included the action of the solvent at the interface. They suggested that attached molecules of the solvent, such as water, would have a fixed alignment to the electrode surface. This first layer of solvent molecules displays a strong orientation to the electric field depending on the charge. This orientation has great influence on the permittivity of the solvent, which varies with field strength. The IHP passes through the centers of these molecules. Specifically adsorbed, partially solvated ions appear in this layer. The solvated ions of the electrolyte are outside the IHP, and the OHP passes through the centers of these ions. The diffuse layer is the region beyond the OHP.
Trasatti/Buzzanca
Further research with double layers on ruthenium dioxide films in 1971 by Sergio Trasatti and Giovanni Buzzanca demonstrated that the electrochemical behavior of these electrodes at low voltages with specific adsorbed ions was like that of capacitors. The specific adsorption of the ions in this region of potential could also involve a partial charge transfer between the ion and the electrode. It was the first step towards understanding pseudocapacitance.
Conway
Between 1975 and 1980, Brian Evans Conway conducted extensive fundamental and development work on ruthenium oxide electrochemical capacitors. In 1991, he described the difference between 'Supercapacitor' and 'Battery' behavior in electrochemical energy storage. In 1999, he coined the term supercapacitor to explain the increased capacitance by surface redox reactions with faradaic charge transfer between electrodes and ions.
His "supercapacitor" stored electrical charge partially in the Helmholtz double-layer and partially as the result of faradaic reactions with "pseudocapacitance" charge transfer of electrons and protons between electrode and electrolyte. The working mechanisms of pseudocapacitors are redox reactions, intercalation and electrosorption.
Marcus
The physical and mathematical basics of electron charge transfer absent chemical bonds leading to pseudocapacitance was developed by Rudolph A. Marcus. Marcus Theory explains the rates of electron transfer reactions—the rate at which an electron can move from one chemical species to another. It was originally formulated to address outer sphere electron transfer reactions, in which two chemical species change only in their charge, with an electron jumping. For redox reactions without making or breaking bonds, Marcus theory takes the place of Henry Eyring's transition state theory which was derived for reactions with structural changes. Marcus received the Nobel Prize in Chemistry in 1992 for this theory.
Mathematical description
There are detailed descriptions of the interfacial DL in many books on colloid and interface science and microscale fluid transport. There is also a recent IUPAC technical report on the subject of interfacial double layer and related electrokinetic phenomena.
As stated by Lyklema, "...the reason for the formation of a "relaxed" ("equilibrium") double layer is the non-electric affinity of charge-determining ions for a surface..." This process leads to the buildup of an electric surface charge, expressed usually in C/m2. This surface charge creates an electrostatic field that then affects the ions in the bulk of the liquid. This electrostatic field, in combination with the thermal motion of the ions, creates a counter charge, and thus screens the electric surface charge. The net electric charge in this screening diffuse layer is equal in magnitude to the net surface charge, but has the opposite polarity. As a result, the complete structure is electrically neutral.
The diffuse layer, or at least part of it, can move under the influence of tangential stress. There is a conventionally introduced slipping plane that separates mobile fluid from fluid that remains attached to the surface. Electric potential at this plane is called electrokinetic potential or zeta potential (also denoted as ζ-potential).
The electric potential on the external boundary of the Stern layer versus the bulk electrolyte is referred to as Stern potential. Electric potential difference between the fluid bulk and the surface is called the electric surface potential.
Usually zeta potential is used for estimating the degree of DL charge. A characteristic value of this electric potential in the DL is 25 mV with a maximum value around 100 mV (up to several volts on electrodes). The chemical composition of the sample at which the ζ-potential is 0 is called the point of zero charge or the iso-electric point. It is usually determined by the solution pH value, since protons and hydroxyl ions are the charge-determining ions for most surfaces.
Zeta potential can be measured using electrophoresis, electroacoustic phenomena, streaming potential, and electroosmotic flow.
The characteristic thickness of the DL is the Debye length, κ⁻¹. It is reciprocally proportional to the square root of the ion concentration C. In aqueous solutions it is typically on the scale of a few nanometers, and the thickness decreases with increasing concentration of the electrolyte.
The electric field strength inside the DL can be anywhere from zero to over 10⁹ V/m. These steep electric potential gradients are the reason for the importance of the DLs.
The theory for a flat surface and a symmetrical electrolyte is usually referred to as the Gouy-Chapman theory. It yields a simple relationship between the electric charge in the diffuse layer σd and the Stern potential Ψd:

\[ \sigma^{d} = -\sqrt{8\varepsilon_0\varepsilon_m R T C}\,\sinh\frac{zF\Psi^{d}}{2RT} \]
There is no general analytical solution for mixed electrolytes, curved surfaces or even spherical particles. There is an asymptotic solution for spherical particles with low-charged DLs. In the case when the electric potential over the DL is less than 25 mV, the so-called Debye-Hückel approximation holds. It yields the following expression for the electric potential Ψ in the spherical DL as a function of the distance r from the particle center:

\[ \Psi(r) = \Psi^{d}\,\frac{a}{r}\,\exp\bigl(-\kappa(r-a)\bigr) \]

where a is the particle radius.
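A minimal numeric sketch of these relations for an aqueous 1:1 electrolyte follows; the parameter values are illustrative assumptions rather than values from the source:

```python
import math

F = 96485.33      # Faraday constant, C/mol
R = 8.314         # gas constant, J/(mol K)
eps0 = 8.854e-12  # vacuum permittivity, F/m

T = 298.15        # K
eps_m = 78.5      # relative permittivity of water
C = 1.0           # bulk electrolyte concentration, mol/m^3 (1 mM)
z = 1             # ion valency
psi_d = 0.025     # Stern potential, V

# Debye length of a symmetrical electrolyte; it shrinks as sqrt(C) grows
kappa = math.sqrt(2.0 * z**2 * F**2 * C / (eps0 * eps_m * R * T))
print(f"Debye length: {1e9 / kappa:.1f} nm")  # ~9.6 nm at 1 mM

# Gouy-Chapman charge in the diffuse layer for the given Stern potential
sigma_d = -math.sqrt(8.0 * eps0 * eps_m * R * T * C) * math.sinh(z * F * psi_d / (2.0 * R * T))
print(f"diffuse-layer charge: {1e3 * sigma_d:.2f} mC/m^2")
```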
There are several asymptotic models which play important roles in theoretical developments associated with the interfacial DL.
The first one is the "thin DL" model. It assumes that the DL is much thinner than the colloidal particle or capillary radius, which restricts the value of the Debye length and the particle radius as follows:

\[ \kappa a \gg 1 \]
This model offers tremendous simplifications for many subsequent applications. Theory of electrophoresis is just one example. The theory of electroacoustic phenomena is another example.
The thin DL model is valid for most aqueous systems because the Debye length is only a few nanometers in such cases. It breaks down only for nano-colloids in solution with ionic strengths close to water.
The opposing "thick DL" model assumes that the Debye length is larger than the particle radius:

\[ \kappa a < 1 \]
This model can be useful for some nano-colloids and non-polar fluids, where the Debye length is much larger.
The last model introduces "overlapped DLs". This is important in concentrated dispersions and emulsions when distances between particles become comparable with the Debye length.
Electrical double layers
The electrical double layer (EDL) is the result of the variation of electric potential near a surface, and has a significant influence on the behaviour of colloids and other surfaces in contact with solutions or solid-state fast ion conductors.
The primary difference between a double layer on an electrode and one on an interface is the mechanism of surface charge formation. With an electrode, it is possible to regulate the surface charge by applying an external electric potential. This application, however, is impossible in colloidal and porous double layers, because for colloidal particles, one does not have access to the interior of the particle to apply a potential difference.
EDLs are analogous to the double layer in plasma.
Differential capacitance
EDLs have an additional parameter defining their characterization: differential capacitance. Differential capacitance, denoted as C, is defined as the derivative of the surface charge with respect to the surface potential:

C = dσ/dΨ,

where σ is the surface charge and Ψ is the electric surface potential.
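For the Gouy-Chapman diffuse layer specifically, differentiating the charge–potential relationship given above yields a closed-form capacitance. The LaTeX block below states this standard textbook result, with σ, Ψd, z, F, R and T as defined earlier and κ the reciprocal Debye length:

```latex
% Differential capacitance of the Gouy-Chapman diffuse layer
% (standard textbook result; symbols as defined in the surrounding text).
C \;=\; \frac{\mathrm{d}\sigma}{\mathrm{d}\Psi^{d}}
  \;=\; \varepsilon_{0}\,\varepsilon_{m}\,\kappa\,
        \cosh\!\left(\frac{z F \Psi^{d}}{2 R T}\right)
```

In the low-potential (Debye-Hückel) limit the cosh term tends to 1, and the diffuse layer behaves as a parallel-plate capacitor of thickness κ⁻¹.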
Electron transfer in electrical double layer
The formation of the electrical double layer (EDL) has traditionally been assumed to be entirely dominated by ion adsorption and redistribution. Considering that contact electrification between solids is dominated by electron transfer, Wang suggested that the EDL is formed in a two-step process. In the first step, when the molecules in the solution first approach a virgin surface that has no pre-existing surface charges, the atoms/molecules in the solution may interact directly with the atoms on the solid surface to form a strong overlap of electron clouds. Electron transfer occurs first, making the "neutral" atoms on the solid surface charged, i.e., forming ions. In the second step, if ions exist in the liquid, such as H+ and OH−, the loosely distributed negative ions in the solution are attracted toward the surface-bonded ions by electrostatic interactions, forming an EDL. Both electron transfer and ion transfer co-exist at the liquid-solid interface.
See also
Depletion region (structure of semiconductor junction)
DLVO theory
Electroosmotic pump
Interface and colloid science
Nanofluidics
Poisson-Boltzmann equation
Supercapacitor
References
Further reading
External links
The Electrical Double Layer
Chemical mixtures
Colloidal chemistry
Surface science
Electrochemistry
Matter
Soft matter | Double layer (surface science) | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,100 | [
"Colloidal chemistry",
"Soft matter",
"Surface science",
"Colloids",
"Electrochemistry",
"Chemical mixtures",
"Condensed matter physics",
"nan",
"Matter"
] |
13,567,369 | https://en.wikipedia.org/wiki/TRPC4AP | Trpc4-associated protein is a protein that in humans is encoded by the TRPC4AP gene.
Interactions
TRPC4AP has been shown to interact with TNFRSF1A.
See also
TRPC
References
Ion channels
Genes mutated in mice | TRPC4AP | [
"Chemistry"
] | 53 | [
"Neurochemistry",
"Ion channels"
] |
13,567,422 | https://en.wikipedia.org/wiki/Mark%20Jaccard | Mark Kenneth Jaccard (born April 12, 1955) is a Canadian energy economist and author. He develops and applies models that assess sustainability policies for energy and material. Jaccard is a professor of sustainable energy in the School of Resource and Environmental Management (REM) at Simon Fraser University.
Biography
Jaccard was born in Vancouver, British Columbia. His PhD is from the Energy Economics and Policy Institute at the University of Grenoble (now called Université Grenoble Alpes). Jaccard has been a professor at Simon Fraser University since 1986, where he teaches courses in environment and resource economics, sustainable energy and materials, and energy and materials economic and policy modeling. His research focuses on the development and application of energy-economy-emissions models that simulate the likely effects of sustainable energy policies. He has over 100 academic publications. He advises governments, industry and non-government organizations around the world.
Jaccard served as Chair and CEO of the B.C. Utilities Commission (1992 to 1997), on the Intergovernmental Panel on Climate Change (1993–96, 2008–09), and on the China Council for International Cooperation on Environment and Development (1996-2001, 2008–09). In his latest work for the China Council, he was co-chair of a task force on sustainable use of coal, reporting to the Premier of China. In 2007-12 he served as convening lead author for energy policy in the production of the Global Energy Assessment. He served on Canada’s National Roundtable on the Environment and the Economy (2006–09) and since 2006 has been a research fellow at the C.D. Howe Institute. In 2005, his book, Sustainable Fossil Fuels won the Donner Prize for top policy book in Canada. In 2008, he was named Academic of the Year by the Association of British Columbia faculty members. In 2009, he was named a Fellow of the Royal Society of Canada for his lifetime research. In 2012, Jaccard was a recipient of Canada's Clean16 and Clean 50 awards, in recognition of his contributions to sustainability education in Canada. In 2014, he received Simon Fraser University’s first Professor Award for Sustainability, and in that year was also appointed to a distinguished chair as SFU University Professor.
Education
Ph.D.: University of Grenoble, Department of Economics / Institute of Energy Economics and Policy, 1987.
Masters of Natural Resources Management: Simon Fraser University, 1983.
Bachelor of Arts: Simon Fraser University, 1977.
Publications
Books
Mark Jaccard. The Citizen's Guide to Climate Success, 2020. Cambridge University Press.
Jeffrey Simpson, Mark Jaccard and Nic Rivers. Hot Air: Meeting Canada's Climate Change Challenge, 2007. McClelland and Stewart.
Mark Jaccard. Sustainable Fossil Fuels: An Unusual Suspect in the Quest for Clean and Enduring Energy, 2005. Cambridge University Press.
Mark Jaccard, John Nyboer and Bryn Sadownik. The Cost of Climate Policy, 2002. UBC Press. Winner of the Best Policy Book in Canada award of the National Policy Research Initiative, and shortlisted for the Donner award for best policy book in Canada.
Recent publications
Bataille, C., Melton, N. and M. Jaccard, 2014, "Policy uncertainty and diffusion of carbon capture and storage in an optimal region," Climate Policy.
Rhodes, K., Axsen, J. and M. Jaccard, 2014, "Does climate policy require well-informed citizen support?" Global Environmental Change.
Palen, W. et al., 2014, "Consider the global impacts of oil pipelines," Nature, V.510, 465-467.
Liu, Z., Mao, X., Tu, J. and M. Jaccard, 2014, "A comparative assessment of economic-incentive and command-and-control instruments for air pollution and CO2 control in China’s iron and steel sector," Journal of Environmental Management.
Jaccard, M. 2013, "The Accidental Activist", The Walrus.
Jaccard, M. and S. Goldberg, 2013, "Technology assumptions and climate policy: The interrelated effects of US electricity-transport policy in EMF 24 using CIMS-US." The Energy Journal.
Rhodes, K. and M. Jaccard, 2013, "A tale of two climate policies: Political-economy of British Columbia’s carbon tax and clean electricity standard." Canadian Public Policy.
Jaccard, M., 2012, "The political acceptability of carbon taxes: lessons from British Columbia," In J. Milne and M. Andersen, Handbook of Research on Environmental Taxation Elsevier.
Johansson, T. (et al., including M. Jaccard), 2012, "Global Energy Assessment – Summary for Policy Makers," In Johansson, T., Patwardhan, A., Nakicenovic, N. and L. Gomez-Echeverri (eds.) The Global Energy Assessment: Towards a Sustainable Future, Cambridge: Cambridge University Press.
Jaccard, M. (convening lead author) et al., 2012, "Energy Policies: Objectives and Instruments," Chapter 22 in Johansson, T., Patwardhan, A., Nakicenovic, N. and L. Gomez-Echeverri (eds.) The Global Energy Assessment: Towards a Sustainable Future, Cambridge: Cambridge University Press.
Beugin, D. and M. Jaccard, 2012, "Statistical simulation to estimate uncertain behavioral parameters of hybrid energy-economy models," Environmental Modeling and Assessment.
Jaccard, M. and J. Tu, 2011, "Show some enthusiasm, but not too much: carbon capture and storage development prospects in China," Global Environmental Change, V21, 402-412.
Rivers, N. and M. Jaccard, 2011, "Retrospective evaluation of electric utility demand-side management programs in Canada," The Energy Journal, V.32(4-5), 93-116.
Rivers, N. and M. Jaccard, 2011, "Intensity-based climate change policies in Canada," Canadian Public Policy, V36(4), 409-428.
Murphy, R. and M. Jaccard, 2011, "Energy efficiency and the cost of GHG abatement: A comparison of bottom-up and hybrid models for the US," Energy Policy.
Mitchell, C. (et al., including M. Jaccard), 2011, "Policy, Financing and Implementation," In Edenhofer et al., IPCC Special Report on Renewable Energy Sources and Climate Change Mitigation Cambridge: Cambridge University Press.
Murphy, R. and M. Jaccard, 2011, "Modeling efficiency standards and a carbon tax: Simulations for the US using a hybrid approach," The Energy Journal.
Jaccard, M., Melton, N. and J. Nyboer, 2011, "Institutions and processes for scaling up renewables: Run-of-river hydropower in British Columbia," Energy Policy.
Peters, J., Bataille, C., Rivers, N. and M. Jaccard, 2010, "Taxing emissions, not income: How to moderate the regional impact of federal climate policy in Canada," CD Howe Institute, No.314, 24 pages
Sharp, J., Jaccard, M. and D. Keith, 2009, "Anticipating public attitudes to underground CO2 storage," International Journal of Greenhouse Gas Control.
Axsen, J., Mountain, D. and M. Jaccard, 2009, "Combining stated and revealed choice research to simulate preference dynamics: the case of hybrid-electric vehicles," Resource and Energy Economics.
Jaccard, M. 2009 "Peak oil and market feedbacks: Chicken Little versus Dr. Pangloss," in T. Homer-Dixon (ed.) Carbon Shift, Random House.
Mau, P., Eyzaguirre J., Jaccard, M., Collins-Dodd, C. and K. Tiedemann, 2008, "The neighbor effect: simulating dynamics in consumer preferences for new vehicle technologies," Ecological Economics.
Jaccard, M., 2007, Designing Canada’s Low-Carb Diet: Options for Effective Climate Policy, C.D. Howe Institute Benefactors’ Lecture.
Jaccard, M. and N. Rivers, 2006, "Heterogeneous Capital Stocks and Optimal Timing for CO2 Abatement," Resource and Energy Economics.
Rivers, N. and M. Jaccard, 2006, "Choice of Environmental Policy in the Presence of Learning by Doing," Energy Economics.
Rivers, N. and M. Jaccard, 2006, "Useful Models for Simulating Policies to Induce Technological Change." Energy Policy.
References
External links
Official website
SFU Energy and Materials Research Group
Canadian Industrial Energy End-Use Data and Analysis Centre
Video of his presentation at Simon Fraser University: Global Warming – Global Justice: Trade-Off or Win-Win?
1955 births
Living people
Canadian economists
Writers from Vancouver
Simon Fraser University alumni
Academic staff of Simon Fraser University
Fellows of the Royal Society of Canada
Energy economists
Intergovernmental Panel on Climate Change contributing authors
Climate change mitigation researchers | Mark Jaccard | [
"Engineering"
] | 1,920 | [
"Geoengineering",
"Climate change mitigation researchers"
] |
13,567,507 | https://en.wikipedia.org/wiki/Saxon%20Shore | The Saxon Shore () was a military command of the Late Roman Empire, consisting of a series of fortifications on both sides of the Channel. It was established in the late 3rd century and was led by the "Count of the Saxon Shore". In the late 4th century, his functions were limited to Britain, while the fortifications in Gaul were established as separate commands. Several well-preserved Saxon Shore forts survive in east and south-east England.
Background
During the latter half of the 3rd century, the Roman Empire faced a grave crisis: Weakened by civil wars, the rapid succession of short-lived emperors, and secession in the provinces, the Romans now faced new waves of attacks by barbarian tribes. Most of Britain had been part of the empire since the mid-1st century. It was protected from raids in the north by the Hadrianic and Antonine Walls, while a fleet of some size was also available.
However, as the frontiers came under increasing external pressure, fortifications were built throughout the Empire in order to protect cities and guard strategically important locations. It is in this context that the forts of the Saxon Shore were constructed. Already in the 230s, under Severus Alexander, several units had been withdrawn from the northern frontier and garrisoned at locations in the south, and had built new forts at Brancaster and Caister-on-Sea in Norfolk and Reculver in Kent. Dover was already fortified in the early 2nd century, and the other forts in this group were constructed in the period between the 270s and 290s.
Meaning of the term and role
The only contemporary reference we possess that mentions the name "Saxon Shore" comes in the late 4th-century Notitia Dignitatum, which lists its commander, the Comes Litoris Saxonici per Britanniam ("Count of the Saxon Shore in Britain"), and gives the names of the sites under his command and their respective complements of military personnel. However, due to the absence of further evidence, theories have varied among scholars as to the exact meaning of the name, and also the nature and purpose of the chain of forts it refers to.
Two interpretations were put forward as to the meaning of the adjective "Saxon": either a shore attacked by Saxons, or a shore settled by Saxons. Some argue that the latter hypothesis is supported by Eutropius, who states that during the 280s the sea along the coasts of Belgica and Armorica was "infested with Franks and Saxons", and that this was why Carausius was first put in charge of the fleet there. It also receives support from archaeological finds, as artefacts of a Germanic style have been found in burials, while there is evidence of the presence of Saxons in southern England and the northern coasts of Gaul around Boulogne-sur-Mer and Bayeux from the middle of the 5th century onwards. This, in turn, could mirror a well documented practice of deliberately settling Germanic tribes (Franks became foederati in 358 AD under Emperor Julian) to strengthen Roman defences. Nevertheless, the evidence for extensive Saxon settlement in Britain typically dates to the 5th century, later than the channel defences of the late 3rd and 4th century associated with the Saxon Shore.
The other interpretation holds that the forts fulfilled a coastal defence role against seaborne invaders, mostly Saxons and Franks, and acted as bases for the naval units operating against them. This view is reinforced by the parallel chain of fortifications across the Channel on the northern coasts of Gaul, which complemented the British forts, suggesting a unified defensive system, although this could also be accounted for by Saxons having been settled on both sides of the Channel, as the archaeological evidence presented earlier suggests.
Other scholars like John Cotterill however consider the threat posed by Germanic raiders, at least in the 3rd and early 4th centuries, to be exaggerated. They interpret the construction of the forts at Brancaster, Caister-on-Sea and Reculver in the early 3rd century and their location at the estuaries of navigable rivers as pointing to a different role: fortified points for transport and supply between Britain and Gaul, without any relation (at least at that time) to countering seaborne piracy. This view is supported by contemporary references to the supplying of the army of Julian (then Caesar) with grain from Britain during his campaign in Gaul in 359, and their use as secure landing places by Count Theodosius during the suppression of the Great Conspiracy a few years later.
Another theory, proposed by D.A. White, was that the extended system of large stone forts was disproportionate to any threat by seaborne Germanic raiders, and that it was actually conceived and constructed during the secession of Carausius and Allectus (the Carausian Revolt) in 289–296, and with an entirely different enemy in mind: they were to guard against an attempt at reconquest by the Empire. This view, although widely disputed, has found recent support from archaeological evidence at Pevensey, which dates the fort's construction to the early 290s.
Whatever their original purpose, it is virtually certain that in the late 4th century the forts and their garrisons were employed in operations against Frankish and Saxon pirates. Britain was abandoned by Rome in 410, with Armorica following soon after. The forts on both sides continued to be inhabited in the following centuries, and in Britain in particular several continued in use well into the Anglo-Saxon period.
The forts
In Britain
The nine forts mentioned in the Notitia Dignitatum for Britain are listed here, from north to south, with their garrisons.
Branodunum (Brancaster, Norfolk). One of the earliest forts, dated to the 230s. It was built to guard the Wash approaches and is of a typical rectangular castrum layout. It was garrisoned by the Equites Dalmatae Brandodunenses, although evidence exists suggesting that its original garrison was the cohors I Aquitanorum.
Gariannonum (Burgh Castle, Norfolk). Established between 260 and the mid-270s to guard the River Yare (Gariannus Fluvius), it was garrisoned by the Equites Stablesiani Gariannoneses. There is some discussion as to whether Gariannonum is in fact the fort at Caister-on-Sea, which lies on the opposite bank of the same estuary as Burgh Castle.
Othona (Bradwell-on-Sea, Essex). Garrisoned by the Numerus Fortensium.
Regulbium (Reculver, Kent). Together with Brancaster one of the earliest forts, built in the 210s to guard the Thames estuary, it is likewise a castrum. It was garrisoned by the cohors I Baetasiorum since the 3rd century.
Rutupiae (Richborough, Kent), garrisoned by parts of the Legio II Augusta.
Dubris (Dover Castle, Kent), garrisoned by the Milites Tungrecani.
Portus Lemanis (Lympne, Kent), garrisoned by the Numerus Turnacensium.
Anderitum (Pevensey Castle, East Sussex), garrisoned by the Numerus Abulcorum.
Portus Adurni (Portchester Castle, Hampshire), garrisoned by a Numerus Exploratorum.
There are a few other sites that clearly belonged to the system of the British branch of the Saxon Shore (the so-called "Wash-Solent limes"), although they are not included in the Notitia, such as the forts at Walton Castle, Suffolk, which has by now sunk into the sea due to erosion, and at Caister-on-Sea in Norfolk. In the south, Carisbrooke Castle on the Isle of Wight and Clausentum (Bitterne, in modern Southampton) are also regarded as westward extensions of the fortification chain. Other sites probably connected to the Saxon Shore system are the sunken fort at Skegness, and the remains of possible signal stations at Thornham in Norfolk, Corton in Suffolk and Hadleigh in Essex.
Further north on the coast, the precautions took the form of central depots at Lindum (Lincoln) and Malton with roads radiating to coastal signal stations. When an alert was relayed to the base, troops could be dispatched along the road. Further up the coast in North Yorkshire, a series of coastal watchtowers (at Huntcliff, Filey, Ravenscar, Goldsborough, and Scarborough) was constructed, linking the southern defences to the northern military zone of the Wall. Similar coastal fortifications are also found in Wales, at Cardiff and Caer Gybi. The only fort in this style in the northern military zone is Lancaster, Lancashire, built sometime in the mid-late 3rd century replacing an earlier fort and extramural community, which may reflect the extent of coastal protection on the north-west coast from invading tribes from Ireland.
In Gaul
The Notitia also includes two separate commands for the northern coast of Gaul, both of which belonged to the Saxon Shore system. However, when the list was compiled, Britain had already been abandoned by Roman forces. The first command controlled the shores of the province Belgica Secunda (roughly between the estuaries of the Scheldt and the Somme), under the dux Belgicae Secundae with headquarters at Portus Aepatiaci:
Marcae (unidentified location near Calais, possibly Marquise or Marck), garrisoned by the Equites Dalmatae. In the Notitia, together with Grannona, it is the only site on the Gallic shore to be explicitly referred to as lying in litore Saxonico.
Locus Quartensis sive Hornensis (probably at the mouth of the Somme), the port of the classis Sambrica ("Fleet of the Somme")
Portus Aepatiaci (possibly Étaples), garrisoned by the milites Nervii.
Although not mentioned in the Notitia, the port of Gesoriacum or Bononia (Boulogne-sur-Mer), which until 296 was the main base of the Classis Britannica, would also have come under the dux Belgicae Secundae.
To this group also belongs the Roman fort at Oudenburg in Belgium.
Further west, under the dux tractus Armoricani et Nervicani, were mainly the coasts of Armorica, nowadays Normandy and Brittany. The Notitia lists the following sites:
Grannona (disputed location, either at the mouths of the Seine or at Port-en-Bessin), the seat of the dux, garrisoned by the cohors prima nova Armoricana. In the Notitia, it is explicitly mentioned as lying in litore Saxonico.
Rotomagus (Rouen), garrisoned by the milites Ursariensii
Constantia (Coutances), garrisoned by the legio I Flavia Gallicana Constantia
Abricantis (Avranches), garrisoned by the milites Dalmati
Grannona (uncertain whether this is a different location than the first Grannona, perhaps Granville), garrisoned by the milites Grannonensii
Aleto or Aletum (Aleth, near Saint-Malo), garrisoned by the milites Martensii
Osismis (Brest), garrisoned by the milites Mauri Osismiaci
Blabia (perhaps Hennebont), garrisoned by the milites Carronensii
Benetis (possibly Vannes), garrisoned by the milites Mauri Beneti
Manatias (Nantes), garrisoned by the milites superventores
In addition, there are several other sites where a Roman military presence has been suggested. At Alderney, the fort known as "The Nunnery" is known to date to Roman times, and the settlement at Longy Common has been cited as evidence of a Roman military establishment, though the archaeological evidence there is, at best, scant.
In popular culture
In 1888, Alfred Church wrote a historical novel entitled The Count of the Saxon Shore. It is available online.
The American band Saxon Shore takes its name from the region.
The Saxon Shore is the fourth book in Jack Whyte's Camulod Chronicles.
Since 1980, the "Saxon Shore Way" exists, a coastal footpath in Kent which passes by many of the forts.
David Rudkin's play The Saxon Shore takes place near Hadrian's Wall as the Romans are withdrawing from Britain.
References
Notes
Sources
Cottrell, Leonard (1964). The Roman Forts of the Saxon Shore, London: HMSO.
Myers John N.L. (1986) The English Settlements, Oxford University Press
Strugnell, Kenneth Wenham (1973). Seagates to the Saxon Shore, Terence Dalton Ltd.
External links
The Saxon Shore forts on "Roman Britain"
Sites of the Litus Saxonicum forts on Google Maps
History of Pevensey Castle
Fortifications in France
Fortification lines
4th century in Roman Gaul
Roman Britain
Roman fortifications in England
Roman fortifications in France
Military history of the English Channel | Saxon Shore | [
"Engineering"
] | 2,703 | [
"Fortification lines",
"Saxon Shore"
] |
13,567,555 | https://en.wikipedia.org/wiki/RAR-related%20orphan%20receptor%20alpha | RAR-related orphan receptor alpha (RORα), also known as NR1F1 (nuclear receptor subfamily 1, group F, member 1) is a nuclear receptor that in humans is encoded by the RORA gene. RORα participates in the transcriptional regulation of some genes involved in circadian rhythm. In mice, RORα is essential for development of cerebellum through direct regulation of genes expressed in Purkinje cells. It also plays an essential role in the development of type 2 innate lymphoid cells (ILC2) and mutant animals are ILC2 deficient. In addition, although present in normal numbers, the ILC3 and Th17 cells from RORα deficient mice are defective for cytokine production.
Discovery
The first three human isoforms of RORα were cloned and characterized as nuclear receptors in 1994 by Giguère and colleagues, who provided the first description of their structure and function.
In the early 2000s, various studies demonstrated that RORα displays rhythmic patterns of expression in a circadian cycle in the liver, kidney, retina, and lung. Of interest, it was around this time that RORα abundance was found to be circadian in the mammalian suprachiasmatic nucleus. RORα is necessary for normal circadian rhythms in mice, demonstrating its importance in chronobiology.
Structure
The protein encoded by this gene is a member of the NR1 subfamily of nuclear hormone receptors. In humans, 4 isoforms of RORα have been identified, which are generated via alternative splicing and promoter usage, and exhibit differential tissue-specific expression. The protein structure of RORα consists of four canonical functional groups: an N-terminal (A/B) domain, a DNA-binding domain containing two zinc fingers, a hinge domain, and a C-terminal ligand-binding domain. Within the ROR family, the DNA-binding domain is highly conserved, and the ligand-binding domain is only moderately conserved. Different isoforms of RORα have different binding specificities and strengths of transcriptional activity.
Regulation of circadian rhythm
The core mammalian circadian clock is a negative feedback loop which consists of Per1/Per2, Cry1/Cry2, Bmal1, and Clock. This feedback loop is stabilized through another loop involving the transcriptional regulation of Bmal1. Transactivation of Bmal1 is regulated through the upstream ROR/REV-ERB Response Element (RRE) in the Bmal1 promoter, to which RORα and REV-ERBα bind. This stabilizing regulatory loop itself is induced by the Bmal1/Clock heterodimer, which induces transcription of RORα and REV-ERBα. RORα, which activates transcription of Bmal1, and REV-ERBα, which represses transcription of Bmal1, compete to bind to the RRE. This feedback loop regulating the expression of Bmal1 is thought to stabilize the core clock mechanism, helping to buffer it against changes in the environment.
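As a toy illustration of this competition (a deliberately simplified sketch, not a validated kinetic model; the dissociation constants k_ror and k_rev below are hypothetical), single-site competitive binding at the RRE can be written as follows:

```python
# Toy illustration only (hypothetical parameters, not a validated model):
# RORalpha (activator) and REV-ERBalpha (repressor) competing for the single
# shared RRE site upstream of Bmal1; only ROR-bound promoters transcribe.

def bmal1_activity(ror, reverb, k_ror=1.0, k_rev=0.2, v_max=1.0):
    """Relative Bmal1 transcription rate under single-site competition.

    ror, reverb : relative nuclear concentrations of the two receptors
    k_ror, k_rev: hypothetical dissociation constants (arbitrary units)
    """
    occupancy_ror = (ror / k_ror) / (1.0 + ror / k_ror + reverb / k_rev)
    return v_max * occupancy_ror

# With equal receptor levels, the (assumed) tighter-binding repressor wins:
for rev in (0.0, 0.5, 1.0, 2.0):
    print(f"[REV-ERB] = {rev:.1f}  ->  relative Bmal1 activity "
          f"{bmal1_activity(ror=1.0, reverb=rev):.2f}")
```

The push-pull behaviour — high Bmal1 output when RORα dominates the RRE, low output when REV-ERBα does — is the stabilizing effect described above.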
Mechanism
Specific association with ROR elements (RORE) in regulatory regions is necessary for RORα's function as a transcriptional activator. RORα achieves this by specific binding to a consensus core motif in RORE, RGGTCA. This interaction is possible through the association of RORα's first zinc finger with the core motif in the major groove, the P-box, and the association of its C-terminal extension with the AT-rich region in the 5’ region of RORE.
Homology
RORα, RORβ, and RORγ are all transcriptional activators recognizing ROR-response elements. RORα is expressed in a variety of cell types and is involved in regulating several aspects of development, inflammatory responses, and lymphocyte development. The RORα isoforms (RORα1 through RORα3) arise via alternative RNA processing, with RORα2 and RORα3 sharing an amino-terminal region different from RORα1. In contrast to RORα, RORβ is expressed in central nervous system (CNS) tissues involved in processing sensory information and in generating circadian rhythms, while RORγ is critical in lymph node organogenesis and thymopoiesis.
The DNA-binding domains of the DHR3 orphan receptor in Drosophila shows especially close homology within amino and carboxy regions adjacent to the second zinc finger region in RORα, suggesting that this group of residues is important for the proteins' functionalities.
PDP1 and VRI in Drosophila regulate circadian rhythms by competing for the same binding site, the VP box, similarly to how ROR and REV-ERB competitively bind to the RRE. PDP1 and VRI constitute a feedback loop and are functional homologs of ROR and REV-ERB in mammals.
Direct orthologs of this gene have been identified in mice and humans.
Human cytochrome c pseudogene HC2 and RORα share overlapping genomic organization with the HC2 pseudogene located within the RORα2 transcription unit. The nucleotide and deduced amino acid sequences of cytochrome c-processed pseudogene are on the sense strand while those of the RORα2 amino-terminal exon are on the antisense strand.
Interactions
DNA: RORα binds to the P-box of the RORE.
Co-activators:
SRC-1, CBP, p300, TRIP-1, TRIP-230, transcription intermediary protein-1 (TIF-1), peroxisome proliferator-binding protein (PBP), and GRIP-1 physically interact with RORα.
LXXLL motif: ROR interacts with SRC-1, GRIP-1, CBP, and p300 via the LXXLL (L=leucine, X=any amino acid) motifs on these proteins.
Ubiquitination: RORα is targeted for the proteasome by ubiquitination. A co-repressor, Hairless, stabilizes RORα by protecting it from this process, which also represses RORα.
Sumoylation: UBE21/UBC9: Ubiquitin-conjugating enzyme I interacts with RORs, but its effect is not yet known.
Phosphorylation:
Phosphorylation of RORα1, which inhibits its transcriptional activity, is induced by Protein Kinase C.
ERK2: Extracellular signal-regulated kinase-2 also phosphorylates RORα.
ATXN1: ATXN1 and RORα form part of a protein complex in Purkinje cells.
FOXP3: FOXP3 directly represses the transcriptional activity of RORs.
NME1: ROR has been shown to specifically interact with NME1.
NM23-2: NM23-2 is a nucleoside diphosphate kinase involved in organogenesis and differentiation.
NM23-1: NM23-1 is the product of a tumor metastasis suppressor candidate gene.
As a drug target
Because RORα and REV-ERBα are nuclear receptors that share the same target genes and are involved in processes that regulate metabolism, development, immunity, and circadian rhythm, they show potential as drug targets. Synthetic ligands have a variety of potential therapeutic uses, and can be used to treat diseases such as diabetes, atherosclerosis, autoimmunity, and cancer. T0901317 and SR1001, two synthetic ligands, have been found to be RORα and RORγ inverse agonists that suppress reporter activity and have been shown to delay onset and clinical severity of multiple sclerosis and other Th17 cell-mediated autoimmune diseases. SR1078 has been discovered as a RORα and RORγ agonist that increases the expression of G6PC and FGF21, yielding the therapeutic potential to treat obesity and diabetes as well as cancer of the breast, ovaries, and prostate. SR3335 has also been discovered as a RORα inverse agonist.
CGP 52608
See also
RAR-related orphan receptor
REV-ERBα
Aromatase deficiency
References
Further reading
External links
Intracellular receptors
Transcription factors | RAR-related orphan receptor alpha | [
"Chemistry",
"Biology"
] | 1,761 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,567,558 | https://en.wikipedia.org/wiki/General%20communication%20channel | The general communication channel (GCC) was defined by G.709 is an in-band side channel used to carry transmission management and signaling information within optical transport network elements.
Two types of GCC are available:
GCC0 – two bytes within OTUk overhead. GCC0 is terminated at every 3R (re-shaping, re-timing, re-amplification) point and used to carry GMPLS signaling protocol and/or management information.
GCC1/2 – four bytes (two bytes each) within ODUk overhead. These bytes carry client end-to-end information and should not be modified by intermediate OTN equipment.
In contrast to SONET/SDH where the data communication channel (DCC) has a constant data rate, GCC data rate depends on the OTN line rate. For example, GCC0 data rate in the case of OTU1 is ~333 kbit/s, and for OTU2 its data rate is ~1.3 Mbit/s.
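These figures can be checked with back-of-envelope arithmetic. The sketch below assumes the standard 4 × 4080-byte OTUk frame with two GCC0 bytes per frame, and nominal OTU1/OTU2 line rates of 255/238 × 2.48832 Gbit/s and 255/237 × 9.95328 Gbit/s; none of these values are stated explicitly above:

```python
# Back-of-envelope check of the quoted GCC0 data rates.
FRAME_BITS = 4 * 4080 * 8            # bits per OTUk frame (4 rows x 4080 bytes)
GCC0_BITS_PER_FRAME = 2 * 8          # two GCC0 overhead bytes per frame

for name, line_rate_bps in (("OTU1", 255 / 238 * 2.48832e9),
                            ("OTU2", 255 / 237 * 9.95328e9)):
    frames_per_second = line_rate_bps / FRAME_BITS
    gcc0_bps = frames_per_second * GCC0_BITS_PER_FRAME
    print(f"{name}: GCC0 ~ {gcc0_bps / 1e3:.0f} kbit/s")
# -> about 327 kbit/s for OTU1 and about 1312 kbit/s (~1.3 Mbit/s) for OTU2,
#    consistent with the figures quoted above.
```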
Computer networking
Optical Transport Network | General communication channel | [
"Technology",
"Engineering"
] | 221 | [
"Computer networking",
"Computer engineering",
"Computer network stubs",
"Computer science",
"Computing stubs"
] |
13,567,584 | https://en.wikipedia.org/wiki/Energy%20efficient%20transformer | In a typical power distribution grid, electric transformer power loss typically contributes to about 40-50% of the total transmission and distribution loss. Energy efficient transformers are therefore an important means to reduce transmission and distribution loss. With the improvement of electrical steel (silicon steel) properties, the losses of a transformer in 2010 can be half that of a similar transformer in the 1970s. With new magnetic materials, it is possible to achieve even higher efficiency. The amorphous metal transformer is a modern example.
References
External links
World's largest Amorphous Metal Power Transformer: 99.31% Efficiency
Amorphous Metals in Electric-Power Distribution Applications
Australian MandatoryEfficiency Requirements for Distribution Transformers
Electronic engineering
Energy conservation
Electric transformers | Energy efficient transformer | [
"Technology",
"Engineering"
] | 149 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering"
] |
13,568,246 | https://en.wikipedia.org/wiki/Bureau%20of%20Steam%20Engineering | The Bureau of Steam Engineering was a bureau of the United States Navy, created by the act of 5 July 1862, receiving some of the duties of the former Bureau of Construction, Equipment and Repair. It became, by the Naval Appropriation Act of 4 June 1920, the Bureau of Engineering (BuEng). In 1940 it combined with the Bureau of Construction and Repair (BuC&R) and became the Bureau of Ships (BuShips).
Historical background
"Engineering, both in operating the shipboard machinery and in the design and construction of ships, became critically important with the outbreak of the Civil War. The Navy had to blockade a ‘coastline stretching over 3,000 miles from the Potomac to the Mexican border. It had to support the Army on the rivers; it had to search out and destroy Confederate raiders. For all these purposes, the steam engine and the engineer were indispensable. On the day of battle, steam engines drove the Monitor and the Merrimack, the Kearsarge and the Alabama, as well as the gunboats which supported Grant before Fort Donelson and Vicksburg. In 1862, Congress recognized the importance of engineering by creating the Bureau of Steam Engineering.
"When Lee surrendered, the United States Navy was the most effective sea power in the world. That position depended upon engineering which, in turn, was based on the skill of Benjamin F. Isherwood, first Chief of the Bureau of Steam Engineering. He designed and built engines rugged enough to withstand the shock of combat, as well as ill-treatment by poorly trained operating engineers. He also designed and constructed a well-armed cruiser which was faster than any abroad. In addition, American naval leadership rested upon ingenious civilian engineers and inventors such as John Ericsson, who designed and built the Monitor."
The Navy's first marine engineer was a civilian appointment in 1836. Congress authorized the establishment of an Engineer Corps in 1842. The 1862 reorganization gave officers of the Engineer Corps their own bureau with dedicated billets to avoid competition from Construction Corps officers (naval architects) in the separated Bureau of Construction and Repair. In 1864 Congress authorized establishment of a separate United States Naval Academy curriculum for naval constructors and steam engineers; and the academy offered parallel tracks for cadet-midshipmen and cadet-engineers. Shipboard commanding officers became uncomfortable with their increasing dependency on the skills and advice of subordinates trained in matters unfamiliar to them; so a common naval academy curriculum was re-instituted in 1882, and Engineer Corps officers were merged into the unrestricted line in 1899. Junior Engineer Corps officers qualified for general line duties at sea, and senior Engineer Corps officers were restricted to shore assignments in their specialties. The restricted line officer concept of "engineering duty only" (EDO) was revived in 1916 when the Engineer Corps officers proved inadequately prepared for the expanded shipbuilding programs of World War I. The EDO designation expanded to include naval architects of the former Construction Corps when the two Corps were merged into the Bureau of Ships in 1940.
The consolidation of BuEng into BuShips had its origins when the first of the Sims-class destroyers to be delivered was found to be heavier than designed and dangerously top-heavy in early 1939. It was determined that an underestimate by BuEng of the weight of a new machinery design was responsible, and that BuC&R did not have sufficient authority to detect or correct the error during the design process. Initially, Acting Secretary of the Navy Charles Edison proposed consolidation of the design divisions of the two bureaus. When the bureau chiefs could not agree on how to do this, he replaced both chiefs in September 1939. The consolidation was finally effected by a law passed by Congress on 20 June 1940.
Commanding officers
Commanding and senior officers of the bureau were:
1862–1869: Benjamin Franklin Isherwood, engineer-in-chief
1869–1873: James Wilson King, engineer-in-chief
1873–1877: William Willis Wood, engineer-in-chief
1877–1883: William Henry Shock, commodore
1883–1887: Charles Harding Loring, commodore
1887–1903: George Wallace Melville, rear admiral
1903–1908: Charles Whiteside Rae, rear admiral
1908: John Kennedy Barton, rear admiral
1909–1913: Hutch Ingham Cone, rear admiral
1913–1921: Robert Stanislaus Griffin, rear admiral
1921–1925: John Keeler Robison, rear admiral
1925–1928: John Halligan Jr., rear admiral
1928–1931: Harry Ervin Yarnell, rear admiral
1931–1935: Samuel Murray Robinson, rear admiral
1935–1939: Harold Gardiner Bowen Sr., rear admiral
1939–1940: Samuel Murray Robinson, rear admiral
See also
Board of Navy Commissioners
Bureau of Ships
References
Citations
Bibliography
Snyder, Philip W., RADM USN, (February 1979) "Bring Back the Corps", Proceedings of the United States Naval Institute
External links
1862 establishments in the United States
1940 disestablishments in the United States
Steam Engineering
Engineering units and formations of the United States military
Marine engineering organizations
Military units and formations established in 1862
Military units and formations disestablished in 1940 | Bureau of Steam Engineering | [
"Engineering"
] | 1,033 | [
"Marine engineering organizations",
"Marine engineering"
] |
13,568,942 | https://en.wikipedia.org/wiki/Prometon | Prometon is a herbicide for annual and perennial broad-leaf weed, brush and grass control mainly in non-cropping situations.
References
Prometon Risk Assessments; Notice of Availability, and Risk Reduction Options. Federal Register: November 7, 2007 (Volume 72, Number 215)
Scorecard – CHEMICAL PROFILES – Chemical Profile – Chemical: PROMETON – CAS Number: 1610-18-0
External links
Herbicides
Triazines
Isopropylamino compounds | Prometon | [
"Biology"
] | 96 | [
"Herbicides",
"Biocides"
] |
13,569,019 | https://en.wikipedia.org/wiki/Dithiopyr | Dithiopyr is a preemergent herbicide for crabgrass control in turf and ornamental grasses. It is effective on 45 grassy and broadleaf weeds. Dithiopyr inhibits root growth of susceptible weeds as well as turf grass and thus should be used only on established turf with a well-developed root system. Its duration of efficacy is approximately 4 months, so lawns should not be reseeded during this time frame following application of the chemical. Dithiopyr acts primarily as a preemergent herbicide but can also be used in early postemergent control of crabgrass.
It is an ingredient in many products including Dimension from Dow AgroSciences.
Mode of action
Dithiopyr acts as a root growth inhibitor, causing cessation of root elongation and inhibition of mitotic cell division. It inhibits formation of microtubules and spindle organizing centers. Dithiopyr may alter microtubule polymerization and stability by "interacting with microtubule associated proteins or microtubule organizing centers rather than interaction directly with tubulin." Mitotic cells are arrested in late prometaphase. Cell entry into mitosis is unaffected.
Synthesis
Dithiopyr can be obtained through a multi-step process starting from ethyl trifluoroacetoacetate and isovaleraldehyde.
References
Preemergent herbicides
Pyridines
Organofluorides
Trifluoromethyl compounds
Microtubule inhibitors
Thioesters | Dithiopyr | [
"Chemistry"
] | 309 | [
"Thioesters",
"Functional groups"
] |
13,569,440 | https://en.wikipedia.org/wiki/Algiers%20Metro | The Algiers Metro () is a rapid transit system that serves Algiers, the capital of Algeria. Originally designed in the 1970s, it opened in 2011 after decades of delays due to financial difficulties and security issues. The Algiers Metro was the second metro system to open in Africa, after the Cairo Metro.
The first phase of Line 1, "Haï el Badr"–"Tafourah-Central Post Office", comprising 10 stations, opened for public service on 1 November 2011. An extension from "Haï el Badr" to "El Harrach Centre" opened for commercial service on 4 July 2015 after test runs in June.
History
During the 1970s, the promoters of the Algiers rapid transit subway project envisioned a network. The project was officially inaugurated in 1982, with technical studies completed in 1985. Authorities retained a German company and a Japanese specialist for building the network. The collapse of oil prices in the 1980s considerably affected the Algerian state's ability to continue funding the project. Authorities discussed the possibility of folding the subway development programme into other mass-transit projects but eventually decided to continue with the original Metro program, albeit slowly.
In 1988–89, Algeria awarded construction contracts to two national companies: COSIDER and GENISIDER. Neither was experienced in running large urban transit development projects. Construction encountered financial and political difficulties, with only four stations being constructed in 15 years. Moreover, the Algiers soil is difficult to dig in, and the city's topography is irregular. Work did not advance significantly for many years.
In 1994, the first long section, called Emir-Abdelkader, was completed. Another section, connecting the Central Post Office to Khélifa-Boukhalfa, was completed soon after. In 1999, the Metro of Algiers Company (EMA) invited international companies to participate in a tender offering, resulting in two new contractors being added to the project: the French Systra-Sgte for project management, and the Algerian-German consortium GAAMA for construction and completion, within 38 months, of the civil engineering tasks and earthworks.
In 2003, benefiting from the return of economic stability and improved security, the government increased funding and introduced a new organisational and operational structure.
In January 2006, further changes were introduced to the project, with integrated systems development handed to Siemens Transportation Systems. This included the installation of fixed material, signals and electrification. Vinci was responsible for civil engineering, and the Spanish company Construcciones y Auxiliar de Ferrocarriles (CAF) was to deliver a new set of rolling stock, including 14 trains of 6 cars each. The network would use the Trainguard MT CBTC technology, which had already been implemented on line 1 and line 14 of the Paris Métro.
System
The first section of Line 1 to open included ten stations, connecting Tafourah–Grande Poste to Haï El Badr. Nine of the ten stations are underground with two central tracks flanked by two long side platforms. Only the Haï El Badr terminus station is on the surface and it has three tracks and two island platforms.
The El Hamma - Haï El Badr section, with its 4 stations and 17 other works for ventilation and cables, was carried out within 38 months. Civil engineering work and rail laying were officially completed on 30 June 2007.
The installation and welding of the tracks were started in April 2007 by the French company Travaux du Sud-Ouest (TSO), with the first metro car delivered to Algiers by December 2007.
In July 2015, this was supplemented by the opening of the four-station expansion from "Haï el Badr" to "El Harrach Centre". The system now serves 14 stations.
Stations
Operations
The total cost of the first phase of line 1 rose to 77 billion DZD (900 million euros), consisting of DZD 30 billion for civil engineering and DZD 47 billion for the equipment.
14 six-car trains are being used. Each train is 108m in length with 208 seats and can transport 1,216 people.
The metro line can move 41,000 passengers per hour, the equivalent of 150 million passengers per year, with a headway of under 2 minutes. The line is open from 5 a.m. to 11 p.m.
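The quoted capacity and headway are mutually consistent, as a quick illustrative calculation shows:

```python
# Illustrative arithmetic only: the headway implied by the quoted capacity.
passengers_per_hour = 41_000
passengers_per_train = 1_216     # six-car train, as described above

trains_per_hour = passengers_per_hour / passengers_per_train   # ~33.7
headway_seconds = 3600 / trains_per_hour                       # ~107 s
print(f"{trains_per_hour:.1f} trains/hour -> headway ~ {headway_seconds:.0f} s")
# ~107 s, i.e. just under the 2-minute headway stated above.
```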
The metro's daily operation is the responsibility of the RATP Group, which was awarded the contract in August 2007.
Extensions
Invitations to tender were launched for the construction of a section between Bachdjarrah and El Harrach composed of 4 stations and one viaduct above the access road to the Ouchaïah Wadi motorway. It opened for public service on 4 July 2015.
The GAAMA group, which carried out the first section, quoted 250 million euros including the construction of a multimodal station (subway/train/taxis) at the El Harrach railway station.
Two other extensions to Line 1 had a planned public opening in 2017:
a branch line from Haï El Badr to Aïn Naâdja.
an extension north from Tafourah Grande Poste to Place des Martyrs
Network map
See also
Algeria
Algiers Tram
List of metro systems
Transport in Algeria
References
External links
Interactive Algiers Metro Map
Algiers Metro
L'Etablissement de Transport Urbain et Suburbain d’Alger (ETUSA)
Siemens Transportation Systems - Algiers Metro
Subways.net Algiers Metro
UrbanRail.Net – descriptions of all metro systems in the world, each with a schematic map showing all stations.
Electric railways in Algeria
Rail transport in Algeria
Railway lines opened in 2011
Rail transport articles in need of updating
RATP Group
Siemens Mobility projects
Metro
Underground rapid transit in Algeria
2011 establishments in Algeria
Standard gauge railways in Algeria | Algiers Metro | [
"Technology",
"Engineering"
] | 1,175 | [
"Siemens Mobility projects",
"Transport systems"
] |
13,569,582 | https://en.wikipedia.org/wiki/Intelligent%20pump | An intelligent pump is a pump that has the ability to regulate and control flow or pressure. Typical advantages are energy savings, lifetime improvements and system cost reductions. Intelligent pumps are used in boilers and systems, temperature control, water treatment, industrial water supply, wash and clean, machining and desalination.
References
Further reading
External links
Market overview intelligent pumps
Intelligent pump system control market
Manufacturers
Pump control system
Chemical pump
Intelligent micro pump
Residential water pump
Pumps | Intelligent pump | [
"Physics",
"Chemistry",
"Engineering"
] | 90 | [
"Pumps",
"Turbomachinery",
"Physical systems",
"Hydraulics",
"Mechanical engineering",
"Mechanical engineering stubs"
] |
13,569,791 | https://en.wikipedia.org/wiki/Autonomous%20telepresence | Autonomous telepresence is a method of offering remote healthcare in a patient's home using robots and videoconferencing systems to provide a consumer-based mobile platform. At present the existing systems have little or no autonomy and rely on remote operators.
See also
Telepresence
Open-source robotics
References
Sparky Jr. Project
Hybridity
Telepresence
Computer-mediated communication | Autonomous telepresence | [
"Technology"
] | 81 | [
"Computer-mediated communication",
"Information systems",
"Computing and society"
] |
13,570,029 | https://en.wikipedia.org/wiki/Nathaniel%20Stern | Nathaniel Stern (born 1977) is an American/South African interdisciplinary artist who works in a variety of media, including photography, interactive art, public art interventions, installation, video art, net.art and printmaking. He is currently a Professor of Art and Mechanical Engineering at the University of Wisconsin–Milwaukee.
Career
Stern graduated with a degree in Textiles and Apparel Design from Cornell University in Ithaca, New York in 1999, and went on to study at the Interactive Telecommunications Program at New York University, graduating in 2001. He later taught digital art at the University of the Witwatersrand, while also practicing as an artist, in Johannesburg, South Africa from 2001 - 2006. He holds a PhD from Trinity College in Dublin, Ireland, where he wrote a dissertation on interactive art and embodiment.
Stern's early study of fashion design, slam poetry and music led to his interest in the relationships between the body and text, and eventually to that of broadly interpreted notions of performance and performativity, especially in relation to the body and embodiment. He states that his interactive work treats "the body and art as cooperative sites of potential resistance," seeing them as mutable entities that "per-form" themselves in relation to their environments, rather than being extant and "pre-formed." Stern's other work often attempts to bring his ideas around performance, embodiment, time and interactivity to more traditional forms like sculpture, drawing and print by combining them with digital and networked media.
Work
Stern's interactive art is centered around bodily provocations, often asking his viewers to "perform" — whether publicly or privately. His pieces attempt to "get people to move in ways they normally wouldn't," and accent said movements in relation to their surroundings. His installation, enter:hektor, for example, asks participants to chase projected words with their arms and bodies in order to trigger spoken word in the space, and his subsequent piece, stuttering, floods the interaction area with too many trigger points, pushing its viewers "not to interact."
His early (2005) and ongoing "Compressionist" prints, conversely, ask Stern himself to move in ways he normally wouldn't. These images are made through performances with scanners, Stern often tying or rigging a flat-bed to his own body, then traversing the landscape — avowedly referencing Impressionism and Abstract Expressionism. These are colored in Photoshop, then printed on metallic paper and/or transformed into hand-made prints, using more traditional techniques.
Stern's video art tends to be in a performative writing style, where he often plays out characters he has created, or uses found footage from films, to explore the fragility of language. He has also worked on collaborative multimedia performances that explore similar issues, usually with more explicit political messages, such as challenging the discourse surrounding HIV/AIDS in South Africa.
Stern's interventionist projects are always site-specific, playful and political. In his Wireframe Series, for example, he has volunteers from the streets of Dubrovnik, Croatia or Johannesburg, South Africa, physically erect temporary, 3D architectural structures made of rope. Each of these custom "public spaces" are "activated" only through their "contact with people," and take on different meanings in each context. And in Doin' my Part to Lighten the Load, Stern challenges power relations between artists and critics, and black and white — as well as electrical "power structures" — by convincing South African arts writer, and editor of Art South Africa magazine, Sean O'Toole, to give up electricity for 24 hours; Stern hired workers off the street to follow O'Toole around his apartment with hand-crank generators and bulbs for the evening.
Stern's "Distill Life" works, multimedia collaborations with Milwaukee artist Jessica Meuninck-Ganger that began in 2009, combine various forms of traditional printmaking with video and machinima. Here the artists "mount translucent prints and drawings on top of video monitors, which appear to bring moving images to life on paper." According to Chris Roper of South Africa's Mail and Guardian, "The work is funny, pretty and accessible, but it’s also complicated, surprising, exceedingly well crafted and rewards a long-term relationship." The works pay homage to and cite a number of artists, including Diego Velázquez, Katsushika Hokusai, Eadweard Muybridge, Claude Monet, Jan van Eyck, William Kentridge, Utagawa Hiroshige and others.
And Stern's Second Life-based mixed reality art work explores "the juxtaposition of old and new media and illuminates the possibilities and limitations of both." "Given Time," for example, enacts a permanent "connection between two simulated people staring wordlessly at each other across real space." According to John Mitchell, Stern uses "Second Life as a medium much like oil paint or marble, hand-drawing two Second Life avatars and pulling them from out of their universe and into ours. In the gallery, they exist on two large screens facing each other, and the viewer may only encounter them by walking between the screens." The piece directly references both the book "Given Time," by Jacques Derrida, and the art work "Untitled (Perfect Lovers)," by Félix González-Torres.
Stern and Creative Commons
Stern is an advocate of Creative Commons (CC), with his blog, and many of his pieces, under CC or GPL. He has been a contributing member of iCommons since its inception, and was an artist in residence with them in 2006 and 2007, the second year of which he ran the program. He believes that "as many people as possible need to see art and talk about it" because it "always brings... value" to "the cultural sphere"; he uses CC as a "tactic for the most effective art work, and with the recognition that this will only bring more value — both culturally and monetarily — to [his] work more generally, whether it's for sale or not."
Wikipedia Art
On February 14, 2009, Stern and Scott Kildall created a Wikipedia article called "Wikipedia Art", which sought to "invite performative utterances in order to change" what content was acceptable to include in the article itself. The article was simultaneously a self-referential performance art piece called Wikipedia Art. Although the creators encouraged editors to strictly follow Wikipedia guidelines in editing the page, Wikipedia editors determined its intent was nonetheless in violation of site rules, and it was deleted within 15 hours of its initial posting. The resulting controversy received national coverage, including an article in The Wall Street Journal. The Wikimedia Foundation later claimed Stern and Kildall had infringed on the Wikipedia trademark with their own website, wikipediaart.org. The artists publicly released a letter they received in March 2009 from a law firm requesting that they turn over their domain name to Wikipedia. Mike Godwin, then the foundation's legal counsel, later stated that they would not pursue any further legal action. Mary Louise Schumacher of The Milwaukee Journal Sentinel compared the incident to the "outrage inspired by Marcel Duchamp's urinal or Andy Warhol's Brillo Boxes." Yale research fellow Claire Gordon called the article an example of the "feedback loop" of "Wikipedia’s totalizing claims to knowledge" in a 2011 Huffington Post report.
Wikipedia Art has since been included in the Internet Pavilion of the Venice Biennale for 2009. In 2011 it appeared in a revised form at the Transmediale festival in Berlin, where it was an award finalist.
Tweets in Space
In 2012, Stern and Kildall again partnered on a project called "Tweets In Space", inviting participants to submit tweets to be transmitted to the planet GJ 667Cc, whose conditions scientists believe may be capable of supporting life. The transmission is scheduled to take place in September 2012 at the International Symposium on Electronic Art in Albuquerque, New Mexico. Stern and Kildall used RocketHub to fundraise the money needed to access a transmitter capable of reaching the planet. In addition, code developed for the project is planned for release to open source. According to Stern and Kildall, the goal of "Tweets In Space" is to activate "a potent conversation about communication and life that traverses beyond our borders or understanding."
Exhibitions
In 2003 and 2004, Stern exhibited at the now defunct Brett Kebble Art Awards, South Africa's largest contemporary art competition at the time. Both years, he won prizes for interactive installations, and is credited with being "at least partly responsible for opening space within that national art event for interactive or New Media work generally." From December 2004 to January 2005, Stern held his first major solo exhibition, The Storytellers, at the Johannesburg Art Gallery, South Africa's largest, internationally recognized publicly funded museum.
In 2006 and 2007, Stern held two solo exhibitions of his "Compressionist" prints in South Africa. The series premiered in "Time and Seeing" at Pretoria's experimental Outlet Gallery, and a much larger body of work, including digital and traditional prints, opened as 'Call and Response' at Johannesburg's Art on Paper Gallery (now called Gallery AOP). On January 31, 2008, Ten Cubed Gallery in Second Life was launched. For its inaugural exhibition, Crossing the Void II, owner and curator Haydn Shaughnessy selected five artists working in and with modern technologies: Stern along with Chris Ashley from Oakland, CA, Jon Coffelt from New York, NY, Claire Keating from Cork, Ireland and Scott Kildall from San Francisco, CA.
In 2010, Stern's Distill Life collaborations with Jessica Meuninck-Ganger were presented as part of the "Passing Between" solo exhibition at Gallery AOP in Johannesburg, as well as solo exhibitions at the Museum of Wisconsin Art and Elaine Erickson Gallery in Wisconsin. Given Time launched as part of a solo exhibition at Greylock Arts in March 2010. The show, which also exhibited "Distill Life" works, was curated by Jo-Anne Green of Turbulence.org. "Falling Still", a collaboration with Yevgeniya Kaganovich, premiered in the Art History Gallery at the University of Wisconsin in December 2010.
In June 2011 Stern again worked with Kaganovich on "Strange Vegetation", a project involving "breathing" latex sculptures that react to changes in light and temperature. The installation was displayed at the Villa Terrace Decorative Arts Museum in Milwaukee, Wisconsin. In August of the same year Stern's exhibit "Giverny of the Midwest" opened at Gallery AOP in Johannesburg. The exhibit, inspired by Claude Monet's series Water Lilies, featured images of a pond in Indiana taken with a HP scanner while Stern walked through the pond. In October 2011, Stern and Jessica Meuninck-Ganger's collaborative project "13 Views of a Journey" opened at the Haggerty Museum of Art on the Marquette University campus featuring a blend of print and video imagery.
Throughout his career, Stern's work has also been shown at the Venice Biennale, Museum of Contemporary Art, Sydney, South African National Gallery, Johannesburg Art Gallery, International Print Center New York, Herbert F. Johnson Museum of Art, Pretoria Art Museum, John Michael Kohler Arts Center, Milwaukee Art Museum, Museum of Wisconsin Art, Inter-Society for the Electronic Arts and in festivals and performances all over the world.
Notes
External links
Nathaniel Stern's web site
MacKenzie, Duncan. (2010) Bad at Sports Episode 244: Interview with Nathaniel Stern (audio interview for streaming, download or podcast)
Sherwin, Brian. (2008) Art Space Talk: Nathaniel Stern (interview)
Ridgway, Nicole. NYArts Magazine: "Between Text and Flesh" (bio on Stern's work, 2006)
Johnson, Paddy. iCommons: Art Intercom, featuring artist Nathaniel Stern (interview, 2007). Part I, Part II
Borland, Ralph. A R T T H R O B _ A R T B I O: Nathaniel Stern (South African feature, 2006)
Nathaniel Stern's blog
Wikipedia Art web site
Gallery AOP (Johannesburg, South Africa)
1977 births
American installation artists
American interdisciplinary artists
American multimedia artists
American new media artists
American video artists
Cornell University alumni
Living people
Modern printmakers
Net.artists
South African artists
University of Wisconsin–Milwaukee faculty | Nathaniel Stern | [
"Technology"
] | 2,561 | [
"Multimedia",
"Net.artists"
] |
13,570,238 | https://en.wikipedia.org/wiki/Period%20circadian%20protein%20homolog%201 | Period circadian protein homolog 1 is a protein in humans that is encoded by the PER1 gene.
Function
The PER1 protein is important to the maintenance of circadian rhythms in cells, and may also play a role in the development of cancer. This gene is a member of the period family of genes. It is expressed with a daily oscillating circadian rhythm, or an oscillation that cycles with a period of approximately 24 hours. PER1 is most notably expressed in the region of the brain called the suprachiasmatic nucleus (SCN), which is the primary circadian pacemaker in the mammalian brain. PER1 is also expressed throughout mammalian peripheral tissues. Genes in this family encode components of the circadian rhythms of locomotor activity, metabolism, and behavior. Circadian expression of PER1 in the suprachiasmatic nucleus will free-run in constant darkness, meaning that the 24-hour period of the cycle will persist without the aid of external light cues. Subsequently, a shift in the light/dark cycle evokes a proportional shift of gene expression in the suprachiasmatic nucleus. The time of gene expression is sensitive to light, as light during a mammal's subjective night results in a sudden increase in per expression and thus a shift in phase in the suprachiasmatic nucleus. Alternative splicing has been observed in this gene; however, these variants have not been fully described. There is some disagreement between experts over the occurrence of polymorphisms with functional significance. Many scientists state that there are no known polymorphisms of the human PER1 gene with significance at a population level that results in measurable behavioral or physiological changes. Still, some believe that even silent mutations can cause significant behavioral phenotypes, and result in major phase changes.
Functional conservation of the PER gene is shown in a study by Shigeyoshi et al. 2002. In this study, mouse mPer1 and mPer2 genes were driven by Drosophila timeless promoter in Drosophila melanogaster. They found that both mPer constructs could restore rhythm to arrhythmic flies (per01 flies). Thus mPer1 and mPer2 can function as clock components in flies and may have implications concerning the homology of per genes.
Role in chronobiology
The PER1 gene, also called rigui, is a characteristic circadian oscillator. PER1 is rhythmically transcribed in the SCN, keeping a period of approximately 24 hours. This rhythm is sustained in constant darkness, and can also be entrained to changing light cycles. PER1 is involved in generating circadian rhythms in the SCN, and also has an effect on other oscillations throughout the body. For example, PER1 knockouts affect food entrainable oscillators and methamphetamine-sensitive circadian oscillators, whose periods are altered in the absence of PER1. In addition, mice with knockouts in both the PER1 and PER2 genes show no circadian rhythmicity. Phase shifts in PER1 neurons can be induced by a strong, brief light stimulus to the SCN of rats. This light exposure causes increases in PER1 mRNA, suggesting that the PER1 gene plays an important role in entrainment of the mammalian biological clock to the light-dark cycle.
Feedback mechanism
The PER1 mRNA is expressed in all cells, acting as a part of a transcription-translation negative feedback mechanism, which creates a cell autonomous molecular clock. PER1 transcription is regulated by protein interactions with its five E-box and one D-box elements in its promoter region. The heterodimer CLOCK-BMAL1 activates E-box elements present in the PER1 promoter, as well as activating the E-box promoters of other components of the molecular clock such as PER2, CRY1, and CRY2. The phase of PER1 mRNA expression varies between tissues. The transcript leaves the nucleus and is translated into a protein with PAS domains, which enable protein-protein interactions. PER1 and PER2 are phosphorylated by CK1ε, which leads to increased ubiquitylation and degradation. This phosphorylation is counteracted by PP1 phosphatase, resulting in a more gradual increase in phosphorylated PER, and an additional control over the period of the molecular clock. Phosphorylation of PER1 can also lead to masking of its leucine-rich nuclear localization sequence and thus impeded heterodimer import.
PER interacts with other PER proteins as well as the E-box regulated, clock controlled proteins CRY1 and CRY2 to create a heterodimer which translocates into the nucleus. There it inhibits CLOCK-BMAL1 activation. PER1 is not necessary for the creation of circadian rhythms, but homozygous PER1 mutants display a shortened period of mRNA expression. While PER1 must be mutated in conjunction with PER2 to result in arrhythmicity, the two translated PER proteins have been shown to have slightly different roles, as PER1 acts preferentially through interaction with other clock proteins.
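The essence of this mechanism — a gene product that, after delays for translation and nuclear import, represses its own transcription — is sufficient to generate self-sustained oscillations. The following sketch is a minimal Goodwin-style toy model in Python; the rate constants, Hill coefficient and time units are illustrative assumptions, not measured PER1 parameters.

```python
# Minimal Goodwin-style negative feedback oscillator (illustrative only).
def simulate(hours=120.0, dt=0.01):
    m, p, r = 1.0, 0.0, 0.0   # mRNA, cytoplasmic protein, nuclear repressor
    n, K = 10, 1.0            # Hill coefficient and repression threshold (assumed)
    trace = []
    for i in range(int(hours / dt)):
        dm = 1.0 / (1.0 + (r / K) ** n) - 0.2 * m  # transcription, repressed by r
        dp = 0.5 * m - 0.2 * p                     # translation and turnover
        dr = 0.5 * p - 0.2 * r                     # nuclear entry and degradation
        m, p, r = m + dm * dt, p + dp * dt, r + dr * dt
        trace.append((i * dt, m))
    return trace

trace = simulate()
times, vals = [t for t, _ in trace], [m for _, m in trace]
# crude peak detection to estimate the free-running period of the toy model
peaks = [times[i] for i in range(1, len(vals) - 1)
         if vals[i - 1] < vals[i] > vals[i + 1]]
if len(peaks) >= 2:
    print("approximate period (h):", round(peaks[-1] - peaks[-2], 1))
```

The point is qualitative: without the nonlinearity of the Hill term and the delay introduced by the intermediate steps, the loop settles to a steady state instead of cycling, which mirrors why phosphorylation-controlled degradation and nuclear import timing matter for the period of the real clock.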
Clinical significance
PER1 expression may have significant effects on the cell cycle. Cancer is often a result of unregulated cell growth and division, which can be controlled by circadian mechanisms. Therefore, a cell's circadian clock may play a large role in its likelihood of developing into a cancer cell. PER1 is a gene that plays an important role in such a circadian mechanism. Its overexpression, in particular, causes DNA-damage induced apoptosis. In addition, down-regulation of PER1 can enhance tumor growth in mammals. PER1 also interacts with proteins ATM and Chk2. These proteins are key checkpoint proteins in the cell cycle. Cancer patients have a lowered expression of PER1. Gery et al. suggest that regulation of PER1 expression may be useful for cancer treatment in the future.
Gene
Orthologs
The following is a list of some orthologs of the PER1 gene in other species:
PER1 (Rattus norvegicus)
PER1 (Mus musculus)
per1a (Danio rerio)
PER1 (Homo sapiens)
lin-42 (Caenorhabditis elegans)
PER1 (Bos taurus)
per1b (Danio rerio)
PER (Drosophila melanogaster)
PER1 (Xenopus tropicalis)
PER1 (Equus caballus)
PER1 (Macaca mulatta)
PER1 (Sus scrofa)
Paralogs
PER2
PER3
Location
The human PER1 gene is located on chromosome 17 at the following location:
Start: 8,140,470
Finish: 8,156,405
Length: 15,936
Exons: 24
PER1 has 19 transcripts (splice variants).
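As a quick arithmetic check on the figures above, the listed length corresponds to an inclusive start–end span (end − start + 1):

```python
start, end = 8_140_470, 8_156_405  # PER1 coordinates listed above
print(end - start + 1)             # 15936, matching the stated length
```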
Discovery
The Drosophila ortholog of PER1, period (per), was first discovered by Ronald Konopka and Seymour Benzer in 1971. In 1997, the mouse Period 1 (mPer1) and Period 2 (mPer2) genes were discovered through homology screens with Drosophila per (Sun et al., 1997; Albrecht et al., 1997). mPer1 was independently identified by Sun et al. (1997), who named it RIGUI, and by Tei et al. (1997), who named it hPER because of the protein sequence similarity with Drosophila per. They found that the mouse homolog had the properties of a circadian regulator: circadian expression in the suprachiasmatic nucleus (SCN), self-sustained oscillation, and entrainment of circadian expression by external light cues.
References
External links
More information on PER1 introns and exons
Splice variants of PER1
Transcription factors
PAS-domain-containing proteins | Period circadian protein homolog 1 | [
"Chemistry",
"Biology"
] | 1,633 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,570,959 | https://en.wikipedia.org/wiki/Methanediol | Methanediol, also known as formaldehyde monohydrate or methylene glycol, is an organic compound with chemical formula . It is the simplest geminal diol. In aqueous solutions it coexists with oligomers (short polymers). The compound is closely related and convertible to the industrially significant derivatives paraformaldehyde (), formaldehyde (), and 1,3,5-trioxane ().
Methanediol is a product of the hydration of formaldehyde. The equilibrium constant for hydration is estimated to be 10^3, so the diol predominates in dilute (<0.1%) solution. In more concentrated solutions, it oligomerizes to poly(oxymethylene) glycols, HO(CH2O)nH.
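To put the equilibrium constant in perspective, treat the hydration as CH2O + H2O ⇌ CH2(OH)2 with the activity of water taken as ~1 (a simplifying assumption); K ≈ 10^3 then implies that about 99.9% of dissolved formaldehyde exists as the diol:

```python
# Fraction of dissolved formaldehyde present as the diol, with
# K = [CH2(OH)2]/[CH2O] = 1e3 and unit water activity (assumed).
K = 1e3
fraction_diol = K / (1 + K)
print(f"{fraction_diol:.4%}")  # ~99.90% methanediol in dilute solution
```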
Occurrence
The dianion, methanediolate, is believed to be an intermediate in the crossed Cannizzaro reaction.
Gaseous methanediol can be generated by electron irradiation and sublimation of a mixture of methanol and oxygen ices.
Methanediol is believed to occur as an intermediate in the decomposition of carbonyl compounds in the atmosphere, and as a product of ozonolysis on these compounds.
Safety
Methanediol, rather than formaldehyde, is listed as one of the main ingredients of "Brazilian blowout", a hair-straightening formula marketed in the United States. The equilibrium with formaldehyde has caused concern since formaldehyde in hair straighteners is a health hazard. Research funded by the Professional Keratin Smoothing Council (PKSC), an industry association that represents selected manufacturers of professional-use only keratin smoothing products, has disputed the risk.
See also
Orthoformic acid (methanetriol)
Orthocarbonic acid (methanetetrol)
References
Hydrates
Carbohydrates
Geminal diols
Sugar alcohols | Methanediol | [
"Chemistry"
] | 376 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Sugar alcohols",
"Hydrates",
"Organic compounds",
"Carbohydrate chemistry"
] |
13,571,105 | https://en.wikipedia.org/wiki/TRPM7 | Transient receptor potential cation channel, subfamily M, member 7, also known as TRPM7, is a human gene encoding a protein of the same name.
Function
TRPs, mammalian homologs of the Drosophila transient receptor potential (trp) protein, are ion channels that are thought to mediate capacitative calcium entry into the cell. TRPM7 is a protein that is both an ion channel and a kinase. As a channel, it conducts calcium and monovalent cations to depolarize cells and increase intracellular calcium. As a kinase, it is capable of phosphorylating itself and other substrates. The kinase activity is necessary for channel function, as shown by its dependence on intracellular ATP and by studies of kinase mutants.
Interactions
TRPM7 has been shown to interact with PLCB1 and PLCB2.
Clinical relevance
Patients with pathogenic variants in the TRPM7 gene suffer from hypomagnesemia, seizures and developmental delay.
Defects in this gene have been associated with magnesium deficiency in human microvascular endothelial cells.
See also
TRPM
References
Further reading
External links
Ion channels | TRPM7 | [
"Chemistry"
] | 232 | [
"Neurochemistry",
"Ion channels"
] |
13,571,230 | https://en.wikipedia.org/wiki/List%20of%20Japanese%20ingredients | The following is a list of ingredients used in Japanese cuisine.
Plant sources
Cereal grain
Rice
Short or medium grain white rice. Regular (non-sticky) rice is called uruchimai.
Mochi rice (glutinous rice)-sticky rice, sweet rice
Genmai (brown rice)
Rice bran (nuka) – not usually eaten itself, but used for pickling, and also added to boiling water to parboil tart vegetables
– toasted brown rice grains in and
Kōji – Aspergillus cultures
()
Mugi (barley)
Flour
starch – an alternative ingredient for potato starch
– soybean flour/meal
– (millet) flour
– starch powder
starch
Rice flour ()
– semi-cooked rice dried and coarsely pulverized; used as alternate breading in deep-fried dish, also used in Kansai-style confection. Medium fine ground types are called and used as breaded crust or for confection. Fine ground are
, – powdery starch made from sticky rice.
flour
Soba flour
starch – substitutes are sold under this name, though authentic starch derives from fern roots. See
Wheat flour
Tempura flour
, , – descending grades of protein content; all purpose, udon flour, cake flour
– name for the starch of rice or wheat, apparently used to some extent. In Chinese cuisine, it is used to make the translucent skin of shrimp dumplings (har gow).
Noodles
Soba
Ramen
Udon
noodles
Vegetables
Botanic fruits as vegetables
Cucumber (kyūri)
Eggplant (nasu)
– mild peppers
– The leaves of the made into are .
Kabocha – pumpkins, squash
– type of squash/melon.
Cabbage family
Komatsuna – (B. rapa var. perviridis)
Mizuna - (B. rapa var. nipposinica)
Napa cabbage (hakusai) – (B. rapa var. glabra)
Takana – (Brassica juncea var. integrifolia or var. of mustard)
– (cultivar of B. rapa var. )
Nanohana (rapeseed or coleseed flowering-stalks, used like broccoli rabe)
Other leafy vegetables
Spinach (hōrensō)
Onion family
Vegetables in the onion family are called negi in Japanese.
Asatsuki – type of chives
Nira – Chinese chives or garlic chive
Wakegi – formerly thought a variety of scallion, but geneticists discovered it to be a cross with the bulb onion (A. × wakegi).
Green onions or scallions
Fukaya negi – Often used to denote the types as thick as leeks used in the Kantō region, but this is not a proper name of a cultivar, merely taken from the production area of Fukaya, Saitama. In the east, the white part of the onion near the base tends to be used.
("multipurpose scallion") – young plants.
Kujō negi – Kyoto cultivar of green onion.
Shimonita negi – Cultivar named after Shimonita, Gunma.
Other varieties with articles are (Hiroshima), (Fukui), (Gifu)
Nobiru – Allium macrostemon, collected from the wild much like field garlic.
Gyōja ninniku – Allium victorialis, much like ramps.
Root vegetables
Chōrogi – Chinese artichoke, Stachys affinis
Daikon – Japanese radish
Gobō – Arctium lappa
Lotus root (renkon)
Potato (jagaimo)
Sweet potato (satsumaimo)
Taro (satoimo) and stalk (zuiki)
– Kyoto variety
– stems available fresh or dried; their tartness must be boiled off before use.
Takenoko – bamboo shoots
, , – Slender bamboo shoots of (Sasa kurilensis), so-called "baby bamboo shoots".
– vital condiment to ramen, made from the Taiwanese giant bamboo (Dendrocalamus latiflorus) and not from the typical bamboo shoot.
Yamaimo – vague name that can denote either Dioscorea spp. (Japanese yam or Chinese yam) below. The root is often grated into a sort of starchy puree. The correct way is to grate the yam against the grooves of the suribachi. Also the tubercle (mukago) is used whole.
or (Dioscorea japonica) – considered the true Japanese yam. The name refers to roots dug from the wild.
Nagaimo (D. opposita) – In a strict sense, refers to the long truncheon-like form.
Ichōimo (D. opposita) – A fan-shaped (ginkgo leaf shaped) variety, more viscous than the long form.
(D. polystachya var.) – A round variety even more viscous and highly prized.
Mukago – edible tubercles
Yurine – lily bulbs
Sprouts
Kaiware – radish sprouts
Moyashi – mung sprouts
Soybean sprouts ()
Specialty vegetables
Aralia cordata – "Japanese spikenard"
Fuki – a type of butterbur, both stalk and young flower shoots
Kanpyō – dried gourd strips
–
Sansai – a term for wild-picked vegetables in general, including fernbrake, bamboo shoots, tree shoots
Pickled vegetables
Tsukemono – term for Japanese pickles.
Nuts
Ginkgo nuts
Azuki bean
Kuri – chestnuts
– Japanese walnut (Juglans ailantifolia)
– a type of buckeye or horse chestnut (Aesculus turbinata)
– acorns of Castanopsis spp.
Seeds
Sesame seeds
Black sesame seeds
White sesame seeds
seeds
Wild sesame seeds ()
Hemp seeds (asanomi) – mixed in with shichimi tōgarashi
Karashi – usually powdered mustard, or in paste tubes
Sanshō – Zanthoxylum piperitum
Mushrooms
Shiitake
Wood ear (kikurage)
Rhizopogon roseolus (shōro)
Seaweed
– Campylaephora hypnaeoides
– Petalonia binghamiae
– kombu, kelp
Oboro konbu or tororo konbu – thin shavings of kelp
– a thin sheet of kelp created as a byproduct
– the thick, pleated portion near the attached base of the seaweed
Nori
– refers to seaweed harvested from sea-rock.
Suizenji nori – Aphanothece sacrum, a Kyushu specialty
Kanten – also known as agar-agar; agar
Fruits
Citrus
– a new hybrid
Yuzu
Other
Akebia (sausage fruit)
Loquat
– a traditional type of melon
Nashi pear
Persimmon
Yamamomo – Myrica rubra
Soy products
Soy sauce (light, dark, tamari)
– soy sprouts
– soy meal
– dry-roasted soy beans and black soy beans (used in , etc.)
Vegetable proteins
– wheat gluten
– fresh usually sold in sticks (long bars)
Dry – variously shaped and colored. is one variety
– somewhat more doughy (still has starches left)
Tofu
Soft: kinugoshi (silken)
Firm: momen (cotton)
Freeze-dried: kōya-dōfu
Fried: aburaage, atsuage, ganmodoki
Residue: okara
Soy milk
Animal sources
Eggs
Chicken
Quail egg
Terrapin eggs, sea-turtle eggs
Meats
Beef
Kobe beef
Matsusaka beef
Mishima beef
Beef tongue, heart, liver, tripe, rumen (), omasum (), abomasum ()
Chicken – called kashiwa in Western parts (Kansai). There are various heritage breeds called jidori
Nagoya cochin
Shamo – fighting cock
– × Rhode Island red
Unlaid egg yolk ()
Pork
Kurobuta (Berkshire pig)
or , extinct but reconstructed heritage hog of Okinawa
– a domestic pig × wild boar crossbreed
Boar meat – the nabe (hotpot) dish is called botan nabe ("peony")
Whey – marketed by
Horse meat, sometimes called sakura – a delicacy. Raw sliced horsemeat is called basashi; the fatty neck portion from where the mane grows is known as tategami.
Finned fish
Marine fishes
(red-fleshed fish or akami zakana)
skipjack tuna (katsuo) - made into tataki, namaribushi, and processed into katsuobushi
tuna (maguro)
Japanese amberjack (buri / hamachi)
Spanish mackerel (sawara)
Blue-backed fish
These fish are collectively called ao zakana in Japanese.
Japanese jack mackerel (aji)
pacific saury (sanma)
sardine (iwashi)
Niboshi or iriko is dried sardine, important for fish stock and other uses.
mackerel (saba)
Konoshiro or kohada (Konosirus punctatus)
herring (nishin)
aji (Japanese horse mackerel and similar fish) - typical fish for hiraki, or fish that is gutted, butterflied, and half-dried in shade.
White-fleshed fish
These fish are collectively called shiromi zakana in Japanese.
flatfish (karei / hirame) - ribbons of flesh around the fins called engawa are also used. Roe is often stewed.
pike conger (hamo) - in Kyoto-style cuisine, also as high-end surimi.
pufferfish (fugu) - flesh, skin, soft roe eaten as sashimi and hot pot (tecchiri); organs, etc. poisonous; roe also contain tetrodotoxin but a regional specialty food cures it in nuka until safe to eat.
tilefish (amadai) - in a Kyoto-style preparation, it is roasted to be eaten scales and all; used in high-end surimi.
red sea bream (madai) - used widely. the head stewed as kabuto-ni.
Freshwater fish
ayu - the shiokara made from this fish is called uruka.
Japanese eel (unagi)
- refers regionally to different fish, but often the goby type, some are high-end fish.
salmon (sake) - shiojake or salted salmon are often very salty fillets, so lighter salted amajio types may be sought. is salt-cured whole fish. uses snout cartilage.
suzuki
Shirauo (Family Salangidae)
nigoro buna (Carassius auratus grandoculis) - a vital source of funazushi for the people of Shiga Prefecture
Marine mammals
baleen whale (kujira)
dolphin (iruka)
Mollusks
Squid and cuttlefish
These fish are collectively called ika in Japanese.
(aori ika)
(surume ika)
(kensaki ika)
(yari ika)
(hotaru ika)
(kō ika)
Octopus
Octopus is called tako in Japanese.
Common octopus (madako)
Giant Pacific octopus (mizudako)
Amphioctopus fangsiao (iidako)
Bivalves
scallop (hotate-gai)
littleneck clam (asari)
freshwater clam (shijimi)
oyster (kaki)
iwagaki (Crassostrea nippona), available during summer months.
clam (hamaguri)
(akagai)
(aoyagi)
Geoduck (mirugai)
(torigai)
Single shelled gastropods and conches
horned turban (sazae)
abalone
Crustaceans
These foods are collectively called ebikani-rui or kokaku rui in Japanese.
Crab
Crab is called kani in Japanese.
snow crab (zuwaigani)
horsehair crab (kegani)
king crab (tarabagani; hanasaki gani=Paralithodes brevipes)
horse crab (gazami)
Kona crab (asahi-gani)
Lobsters, shrimps, and prawns
These shellfish are collectively called ebi in Japanese.
spiny lobster (ise-ebi)
Kuruma prawn (kuruma ebi)
humpback shrimp (botan ebi; Pandalus hypsinotus)
mantis shrimp - (shako)
barnacle
(Palaemon paucidens) - freshwater
Echinoderms
Sea cucumbers (namako) - body, intestines (konowata), ovaries (kuchiko, konoko)
Sea urchin (uni), ovaries
Tunicates
Sea pineapple (hoya)
Roe
salmon roe (ikura)
herring roe (kazunoko)
mullet roe (karasumi) - similar to botargo
pollock roe (tarako (food))
capelin roe (masago)
flying fish roe (tobiko)
crustacean eggs
Liver
ankimo, or monkfish liver.
(Thread-sail filefish) and abalone livers are used as is, or as kimo-ae, i.e., blended with the fish flesh or other ingredients as a type of aemono.
squid and katsuo (skipjack) livers and guts, used to make shiokara.
Processed seafood
anchovy (katakuchi-iwashi), dried to make Niboshi. The larvae are shirasu and made into Tatami iwashi
chikuwa
himono (non-salted dried fish) - some products are bone dry and stiff, incl. ei-hire (skate fins), surume (dried squid), but often refer to fish still supple and succulent.
kamaboko, satsuma age, etc., comprise a class of food called nerimono, and are listed under surimi products.
niboshi
shiokara of various kinds, made from the guts and other portions.
Insects
Some insects have been considered regional delicacies, though often categorized as or bizarre food.
Hachinoko, larvae and pupae of kurosuzumebachi or yellowjacket spp.
Inago, tsukudani made from locusts that infest rice fields; it was once common wherever rice was grown.
Zazamushi, tsukudani made from stonefly and caddisfly larvae in streams (a specialty of the Ina, Nagano area).
See also
List of Japanese cooking utensils
List of Japanese dishes
List of Japanese condiments
List of sushi and sashimi ingredients
Sansai
Ingredients
Japanese | List of Japanese ingredients | [
"Technology"
] | 2,857 | [
"Food ingredients",
"Components"
] |
13,571,276 | https://en.wikipedia.org/wiki/TRPV5 | Transient receptor potential cation channel subfamily V member 5 is a calcium channel protein that in humans is encoded by the TRPV5 gene.
Function
The TRPV5 gene is a member of the transient receptor family and the TRPV subfamily. The calcium-selective channel, TRPV5, encoded by this gene has 6 transmembrane-spanning domains, multiple potential phosphorylation sites, an N-linked glycosylation site, and 5 ANK repeats. This protein forms homotetramers or heterotetramers and is activated by a low internal calcium level.
Both TRPV5 and TRPV6 are expressed in kidney and intestinal epithelial cells. TRPV5 is mainly expressed in kidney epithelial cells, where it plays an important role in the reabsorption of Ca2+, whereas TRPV6 is mainly expressed in the intestine. The enzyme α-klotho increases kidney calcium reabsorption by stabilizing TRPV5. Klotho is a beta-glucuronidase-like enzyme that activates TRPV5 by removal of sialic acid.
Clinical significance
Normally, about 95% to 98% of Ca2+ filtered from the blood by the kidney is reabsorbed by the kidney's renal tubule, mediated by TRPV5. Genetic deletion of TRPV5 in mice leads to Ca2+ loss in the urine, and consequential hyperparathyroidism, and bone loss.
Autosomal recessive hypercalciuria has been described in a family with a missense, inactivating genetic variant in TRPV5. This variant, known as p.(Val598Met), affects the TRP helix region of TRPV5, which is thought to control channel pore gating, assembly and protein folding.
Inhibitors
Econazole is a weak inhibitor of both TRPV5 and TRPV6, with an IC50 in the micromolar range.
ZINC17988990 is a potent and selective inhibitor of TRPV5, with an IC50 of 177 nM and good selectivity over TRPV6 and the other TRPV channel subtypes.
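For orientation, an IC50 translates into an expected fractional block at a given concentration via the standard Hill equation; the sketch below assumes a Hill coefficient of 1, which the studies cited here do not report:

```python
def fraction_blocked(conc_nM, ic50_nM, hill=1.0):
    """Fractional inhibition under a simple Hill model (coefficient assumed)."""
    return conc_nM ** hill / (conc_nM ** hill + ic50_nM ** hill)

# ZINC17988990 against TRPV5, IC50 = 177 nM (value from the text above)
for c in (50, 177, 500, 1770):
    print(f"{c:>5} nM -> {fraction_blocked(c, 177):.2f} blocked")
```

By construction the block is 50% at 177 nM; the other rows are only as trustworthy as the assumed Hill coefficient.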
Interactions
TRPV5 has been shown to interact with S100A10.
See also
TRPV
References
Further reading
External links
Ion channels | TRPV5 | [
"Chemistry"
] | 503 | [
"Neurochemistry",
"Ion channels"
] |
13,571,349 | https://en.wikipedia.org/wiki/Rev-ErbA%20beta | Rev-Erb beta (Rev-Erbβ), also known as nuclear receptor subfamily 1 group D member 2 (NR1D2), is a member of the Rev-Erb protein family. Rev-Erbβ, like Rev-Erbα, belongs to the nuclear receptor superfamily of transcription factors and can modulate gene expression through binding to gene promoters. Together with Rev-Erbα, Rev-Erbβ functions as a major regulator of the circadian clock. These two proteins are partially redundant. Current research suggests that Rev-Erbβ is less important in maintaining the circadian clock than Rev-Erbα; knock-out studies of Rev-Erbα result in significant circadian disruption but the same has not been found with Rev-Erbβ. Rev-Erbβ compensation for Rev-Erbα varies across tissues, and further research is needed to elucidate the separate role of Rev-Erbβ.
This gene is expressed in the central and peripheral nervous system, spleen, mandibular maxillary processes, and blood islands. Rev-Erbβ plays a major role in the conduction of inductive signals to aid in controlling differentiating neurons.
Discovery
Rev-Erbβ was discovered in 1994, when B. Dumas et al. isolated its cDNA, naming the new receptor BD73. The name Rev-Erbβ was coined a few months later in a paper by Eva Enmark, Tommi Kainu, Markku Tapio Pelto-Huikko, and Jan-Åke Gustafsson, in which they isolated Rev-Erbβ cDNA from rat brain.
A new isoform of Rev-Erbβ, named Rev-Erbβ 2, was discovered using rat cDNA a few months later in 1995 by N. Giambiagi and colleagues. They found it to be identical to Rev-Erbβ 1, except that the Rev-Erbβ 1 protein is 195 amino acids longer than Rev-Erbβ 2. However, further research has indicated that the discovered Rev-Erbβ 2 cDNA was likely a splice variant of the Nr1d2 gene that arose through alternative splicing and the use of a different polyadenylation site.
Genetics and Evolution
In mammals, the NR1D2 (nuclear receptor subfamily 1 group D member 2) gene encodes the protein Rev-Erbβ. Unlike NR1D1, the strand opposite NR1D2 does not have any significant reading frames, and the gene is located on the forward strand of chromosome 3. Despite their different locations, the NR1D1 and NR1D2 genes are highly homologous and are paralogs within the genome. In humans, the NR1D2 gene itself contains 10 exons which form 5 splice variants (NR1D2-201 - NR1D2-205), ranging from 5231 base pairs (NR1D2-201) to 600 base pairs (NR1D2-204). However, only NR1D2-201 produces a functional protein. In mammals, NR1D2 (Rev-Erbβ) is expressed throughout the body and with high expression in several tissues, including the brain, liver, skeletal muscle, and adipose tissue.
Comparison of the human NR1D2 sequence with other species indicates a high level of conservation across animals, with 472 discovered orthologs, including in mice, chickens, lizards, and zebrafish. Similarly to NR1D1, this suggests NR1D2 was present in the most recent common animal ancestor. NR1D2 has only one paralog in humans, the NR1D1 gene, which is located on chromosome 17, but it is closely related to other members of the nuclear receptor family and is functionally related to other nuclear receptor genes, such as thyroid hormone receptor beta (THRB), peroxisome proliferator activated receptor delta (PPARD), and retinoic acid receptor beta (RARB). Linkage analysis reveals that NR1D2 and THRB are highly linked due to proximity on chromosome 3, and that they are both linked to RARB. Combined with the linkage between the NR1D1/THRA locus and the RARA gene, this suggests that these two gene clusters arose from a duplication event.
Structure
The human NR1D2 gene produces a protein product (REV-ERBβ) of 579 amino acids. Rev-Erbβ is similar to Rev-Erbα in both its structure and mechanism of transcriptional repression. Like Rev-Erbα, Rev-Erbβ has 3 major functional domains which are common to nuclear receptor proteins, including a DNA-binding domain (DBD) and a ligand-binding domain (LBD) at the C-terminus, which are highly conserved in Rev-Erb orthologs, and a N-terminus domain which allows for activity modulation.
Much like Rev-Erbα, Rev-Erbβ can bind to two classes of DNA response elements via its DBD, which contains two C4-type zinc fingers. These two classes include a DNA sequence commonly referred to as RORE due to its interaction with the transcriptional activator Retinoic Acid Receptor-related Orphan Receptor (ROR) and a direct repeat 2 element of RORE known as RevDR2. The Rev-Erb proteins are unique from other nuclear receptors in that they do not have a helix in the C-terminal that is necessary for coactivator recruitment and activation by nuclear receptors via their LBD. Instead, the Rev-Erbs can repress transcription as a monomer through competitive binding at single RORE elements by preventing the binding of the constitutive transcription activator ROR, or as a homodimer through binding to RevDR2 sites. The Rev-Erb homodimer is required for its interaction with Nuclear Receptor Co-Repressor (NCoR), or more weakly, with Silencing Mediator of Retinoid and Thyroid Receptors (SMRT). The interaction with NCoR is stabilized by heme, which binds to the Rev-Erb ligand-binding pocket. Rev-Erbβ undergoes a conformational change when complexed with heme, as its structure shows that helices 3, 7, and 11 move to enlarge the ligand binding pocket in order to accommodate heme. The repression by Rev-Erb proteins also requires interaction of class I histone deacetylase 3 (HDAC3) with NCoR, which results in gene repression via histone deacetylation.
Function
Circadian oscillator
Rev-Erbβ binds to genomic Rev-Erbα-binding sites that have a diurnal profile identical or similar to that of Rev-Erbα. This protein also helps maintain clock and metabolic gene regulation and protects system functioning when Rev-Erbα is missing. Rev-Erbβ compensates for the loss of function under metabolic distress when Rev-Erbα is lost; the liver and its metabolic processes can still run when Rev-Erbα is missing and Rev-Erbβ is present. Losing both Rev-Erbα and Rev-Erbβ causes cells to become arrhythmic.
When Rev-Erbβ is missing, metabolic activity can change significantly, with drastic effects. For example:
Rev-Erbβ deficiency causes a drastic difference in the coupled formation of circadian networks of gene expression, while core clock gene expression remains oscillating.
Having neither Rev-Erbα or Rev-Erbβ does not affect expression rhythms of core clock genes but affects other rhythmically-expressed output genes.
Rev-Erbβ deficiency does not change circadian expression rhythms of PER2.
Metabolism
Rev-Erbβ plays a role in blocking the trans-activation of retinoic acid-related orphan receptor-α (RORα). RORα is involved in the regulation of lipoprotein cholesterol, lipid homeostasis, and inflammation. Rev-Erbβ and RORα are both expressed in similar tissues, such as skeletal muscle. They have similar expression patterns, target genes, and cognate sequences within the skeletal muscle. Rev-Erbβ decreases the expression of several genes assisting in lipid absorption, and controls lipid and energy homeostasis in skeletal muscle. Rev-Erbβ may be useful in therapeutic treatments of dyslipidemia and in regulating muscle growth.
Rev-Erbβ is also a circadian-regulated gene; its mRNA displays rhythmic expression in vivo and in serum-synchronized cell cultures. It is currently unknown, however, to what extent Rev-Erbβ contributes to oscillations of the core circadian clock. It has been shown that heme suppresses hepatic gluconeogenic gene expression and glucose output through the related Rev-Erbα receptor, which mediates gene repression. Hence, the Rev-Erbα receptor detects heme and thereby coordinates the cellular clock, glucose homeostasis, and energy metabolism.
Rev-Erbβ plays a role in skeletal muscle mitochondrial biogenesis. Originally Rev-Erbβ was thought to be functionally redundant with Rev-Erbα, but recent findings show that there are subtle differences. Rev-Erbβ ligands may be used in the treatment of metabolic disorders, like metabolic syndrome, since the receptor's control of skeletal muscle metabolism and energy can be beneficial in treatment.
Studies using specific knockout (KO) mutants have shown that the Rev-Erbβ gene contributes to the downstream regulation of clock output genes. It is still unknown which functions Rev-Erbβ has in the core circadian clock and exactly how it differs from Rev-Erbα.
References
Further reading
External links
Intracellular receptors
Transcription factors | Rev-ErbA beta | [
"Chemistry",
"Biology"
] | 2,049 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,571,938 | https://en.wikipedia.org/wiki/Plant%20evolutionary%20developmental%20biology | Evolutionary developmental biology (evo-devo) is the study of developmental programs and patterns from an evolutionary perspective. It seeks to understand the various influences shaping the form and nature of life on the planet. Evo-devo arose as a separate branch of science rather recently. An early sign of this occurred in 1999.
Most of the synthesis in evo-devo has been in the field of animal evolution, one reason being the presence of model systems like Drosophila melanogaster, C. elegans, zebrafish and Xenopus laevis. However, since 1980, a wealth of information on plant morphology, coupled with modern molecular techniques has helped shed light on the conserved and unique developmental patterns in the plant kingdom also.
Historical perspective
Before 1900
The origin of the term "morphology" is generally attributed to Johann Wolfgang von Goethe (1749–1832). He was of the opinion that there is an underlying fundamental organisation (the Urpflanze, or archetypal plant) in the diversity of flowering plants. In his book The Metamorphosis of Plants, he proposed that the Urpflanze enabled us to predict the forms of plants that had not yet been discovered. Goethe was the first to make the perceptive suggestion that flowers consist of modified leaves. He also entertained different complementary interpretations.
From the seventeenth to the nineteenth century, several basic foundations of our current understanding of plant morphology were laid down. Nehemiah Grew, Marcello Malpighi, Robert Hooke, Antonie van Leeuwenhoek and Carl Wilhelm von Nägeli were just some of the people who helped build knowledge of plant morphology at various levels of organisation. It was the taxonomical classification of Carl Linnaeus in the eighteenth century, though, that generated a firm base for the knowledge to stand on and expand. The introduction of the concept of Darwinism in contemporary scientific discourse also had an effect on the thinking on plant forms and their evolution.
Wilhelm Hofmeister, one of the most brilliant botanists of his time, was the one to diverge from the idealist way of pursuing botany. Over the course of his life, he brought an interdisciplinary outlook into botanical thinking. He came up with biophysical explanations of phenomena like phototropism and gravitropism (geotropism), and also discovered the alternation of generations in the plant life cycle.
1900 to the present
The past century witnessed a rapid progress in the study of plant anatomy. The focus shifted from the population level to more reductionist levels. While the first half of the century saw expansion in developmental knowledge at the tissue and the organ level, in the latter half, especially since the 1990s, there has also been a strong impetus on gaining molecular information.
Edward Charles Jeffrey was one of the early evo-devo researchers of the 20th century. He performed comparative analyses of the vasculatures of living and fossil gymnosperms and came to the conclusion that the storage parenchyma has been derived from tracheids. His research focussed primarily on plant anatomy in the context of phylogeny. This tradition of evolutionary analyses of plant architectures was further advanced by Katherine Esau, best known for her book Plant Anatomy. Her work focussed on the origin and development of various tissues in different plants. Working with Vernon Cheadle, she also explained the evolutionary specialization of the phloem tissue with respect to its function.
In 1959 Walter Zimmermann published a revised edition of Die Phylogenie der Pflanzen. This very comprehensive work, which has not been translated into English, has no equal in the literature. It presents plant evolution as the evolution of plant development (hologeny). In this sense it is plant evolutionary developmental biology (plant evo-devo). According to Zimmermann, diversity in plant evolution occurs through various developmental processes. Three very basic processes are heterochrony (changes in the timing of developmental processes), heterotopy (changes in the relative positioning of processes), and heteromorphy (changes in form processes).
In the meantime, by the beginning of the latter half of the 1900s, Arabidopsis thaliana had begun to be used in some developmental studies. The first collection of Arabidopsis thaliana mutants was made around 1945. However, it formally became established as a model organism only in 1998.
The recent spurt in information on various plant-related processes has largely been a result of the revolution in molecular biology. Powerful techniques like mutagenesis and complementation were made possible in Arabidopsis thaliana via generation of T-DNA containing mutant lines, recombinant plasmids, techniques like transposon tagging etc. Availability of complete physical and genetic maps, RNAi vectors, and rapid transformation protocols are some of the technologies that have significantly altered the scope of the field. Recently, there has also been a massive increase in the genome and EST sequences of various non-model species, which, coupled with the bioinformatics tools existing today, generate opportunities in the field of plant evo-devo research.
Gérard Cusset provided a detailed in-depth analysis of the history of plant morphology, including plant development and evolution, from its beginnings to the end of the 20th century. Rolf Sattler discussed fundamental principles of plant morphology and plant evo-devo. Rolf Rutishauser surveyed the past and future of plant evo-devo with regard to continuum and process morphology.
Organisms, databases and tools
The most important model systems in plant development have been arabidopsis and maize. Maize has traditionally been the favorite of plant geneticists, while extensive resources in almost every area of plant physiology and development are available for Arabidopsis thaliana. Apart from these, rice, Antirrhinum majus, Brassica, and tomato are also being used in a variety of studies. The genomes of Arabidopsis thaliana and rice have been completely sequenced, while the others are in progress. It must be emphasized here that the information from these "model" organisms forms the basis of our developmental knowledge. While Brassica has been used primarily because of its convenient location in the phylogenetic tree in the mustard family, Antirrhinum majus is a convenient system for studying leaf architecture. Rice has been traditionally used for studying responses to hormones like abscisic acid and gibberellin as well as responses to stress. However, recently, not just the domesticated rice strain, but also the wild strains have been studied for their underlying genetic architectures.
Some have objected to extending the results of model organisms to the plant world. One argument is that the effect of gene knockouts in lab conditions would not truly reflect even the same plant's response in the natural world. Also, these supposedly crucial genes might not be responsible for the evolutionary origin of that character. For these reasons, a comparative study of plant traits has been proposed as the way forward.
Since the past few years, researchers have indeed begun looking at non-model, "non-conventional" organisms using modern genetic tools. One example of this is the Floral Genome Project, which envisages to study the evolution of the current patterns in the genetic architecture of the flower through comparative genetic analyses, with a focus on EST sequences. Like the FGP, there are several such ongoing projects that aim to find out conserved and diverse patterns in evolution of the plant shape. Expressed sequence tag (EST) sequences of quite a few non-model plants like sugarcane, apple, barley, cycas, coffee, to name a few, are available freely online. The Cycad Genomics Project, for example, aims to understand the differences in structure and function of genes between gymnosperms and angiosperms through sampling in the order Cycadales. In the process, it intends to make available information for the study of evolution of seeds, cones and evolution of life cycle patterns. Presently the most important sequenced genomes from an evo-devo point of view include those of A. thaliana (a flowering plant), poplar (a woody plant), Physcomitrella patens (a bryophyte), Maize (extensive genetic information), and Chlamydomonas reinhardtii (a green alga). The impact of such a vast amount of information on understanding common underlying developmental mechanisms can easily be realised.
Apart from EST and genome sequences, several other tools like PCR, yeast two-hybrid system, microarrays, RNA Interference, SAGE, QTL mapping etc. permit the rapid study of plant developmental patterns. Recently, cross-species hybridization has begun to be employed on microarray chips, to study the conservation and divergence in mRNA expression patterns between closely related species. Techniques for analyzing this kind of data have also progressed over the past decade. We now have better models for molecular evolution, more refined analysis algorithms and better computing power as a result of advances in computer sciences.
Evolution of plant morphology
Overview of plant evolution
Evidence suggests that an algal scum formed on the land 1,200 million years ago, but it was not until the Ordovician period, around 450 million years ago, that land plants appeared. These began to diversify in the late Silurian period, around 420 million years ago, and the fruits of their diversification are displayed in remarkable detail in an early Devonian fossil assemblage known as the Rhynie chert. This chert preserved early plants in cellular detail, petrified in volcanic springs. By the middle of the Devonian period most of the features recognised in plants today are present, including roots and leaves. By the late Devonian, plants had reached a degree of sophistication that allowed them to form forests of tall trees. Evolutionary innovation continued after the Devonian period. Most plant groups were relatively unscathed by the Permo-Triassic extinction event, although the structures of communities changed. This may have set the scene for the evolution of flowering plants in the Triassic (~200 million years ago), which exploded in the Cretaceous and Tertiary. The latest major group of plants to evolve were the grasses, which became important in the mid Tertiary, from around 40 million years ago. The grasses, as well as many other groups, evolved new mechanisms of metabolism to survive the low carbon dioxide and warm, dry conditions of the tropics over the last 10 million years. Although animals and plants evolved their body plans independently, they both express a developmental constraint during mid-embryogenesis that limits their morphological diversification.
Meristems
The meristem architectures differ between angiosperms, gymnosperms and pteridophytes. The gymnosperm vegetative meristem lacks organization into distinct tunica and corpus layers. They possess large cells called central mother cells. In angiosperms, the outermost layer of cells divides anticlinally to generate the new cells, while in gymnosperms, the plane of division in the meristem differs for different cells. However, the apical cells do contain organelles like large vacuoles and starch grains, like the angiosperm meristematic cells.
Pteridophytes, like fern, on the other hand, do not possess a multicellular apical meristem. They possess a tetrahedral apical cell, which goes on to form the plant body. Any somatic mutation in this cell can lead to hereditary transmission of that mutation. The earliest meristem-like organization is seen in an algal organism from group Charales that has a single dividing cell at the tip, much like the pteridophytes, yet simpler.
One can thus see a clear pattern in evolution of the meristematic tissue, from pteridophytes to angiosperms: Pteridophytes, with a single meristematic cell; gymnosperms with a multicellular, but less defined organization; and finally, angiosperms, with the highest degree of organization.
Evolution of plant transcriptional regulation
Transcription factors and transcriptional regulatory networks play key roles in plant development and stress responses, as well as their evolution. During plant landing, many novel transcription factor families emerged and are preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to more complex morphogenesis of land plants.
Evolution of leaves
Origins of the leaf
Leaves are the primary photosynthetic organs of a plant. Based on their structure, they are classified into two types: microphylls, which lack complex venation patterns, and megaphylls, which are large and have complex venation. It has been proposed that these structures arose independently. Megaphylls, according to the telome theory, have evolved from plants that showed a three-dimensional branching architecture, through three transformations: planation, which involved formation of a planar architecture; webbing, or formation of the outgrowths between the planar branches; and fusion, where these webbed outgrowths fused to form a proper leaf lamina.
Studies have revealed that these three steps happened multiple times in the evolution of today's leaves.
Contrary to the telome theory, developmental studies of compound leaves have shown that, unlike simple leaves, compound leaves branch in three dimensions. Consequently, they appear partially homologous with shoots as postulated by Agnes Arber in her partial-shoot theory of the leaf. They appear to be part of a continuum between morphological categories, especially those of leaf and shoot. Molecular genetics confirmed these conclusions (see below).
It has been proposed that before the evolution of leaves, plants had the photosynthetic apparatus on the stems. Today's megaphyll leaves probably became commonplace some 360 million years ago, about 40 million years after the simple leafless plants had colonized the land in the early Devonian period. This spread has been linked to the fall in atmospheric carbon dioxide concentrations in the late Paleozoic era, associated with a rise in density of stomata on the leaf surface. This must have allowed for better transpiration rates and gas exchange. Large leaves with fewer stomata would have heated up in the sun's rays, but an increased stomatal density allowed for a better-cooled leaf, thus making its spread feasible.
Factors influencing leaf architectures
Various physical and physiological forces like light intensity, humidity, temperature, wind speeds etc. are thought to have influenced evolution of leaf shape and size. It is observed that high trees rarely have large leaves, owing to the obstruction they generate for winds. This obstruction can eventually lead to the tearing of leaves, if they are large. Similarly, trees that grow in temperate or taiga regions have pointed leaves, presumably to prevent nucleation of ice onto the leaf surface and reduce water loss due to transpiration. Herbivory, not only by large mammals, but also small insects has been implicated as a driving force in leaf evolution, an example being plants of the genus Aciphylla, that are commonly found in New Zealand. The now-extinct moas (birds) fed upon these plants, and the spines on the leaves probably discouraged the moas from feeding on them. Other members of Aciphylla that did not co-exist with the moas were spineless.
Genetic evidences for leaf evolution
At the genetic level, developmental studies have shown that repression of the KNOX genes is required for initiation of the leaf primordium. This is brought about by ARP genes, which encode transcription factors. Genes of this type have been found in many plants studied till now, and the mechanism, i.e. repression of KNOX genes in leaf primordia, seems to be quite conserved. Expression of KNOX genes in leaves produces complex leaves. It is speculated that the ARP function arose quite early in vascular plant evolution, because members of the primitive group lycophytes also have a functionally similar gene. Other players that have a conserved role in defining leaf primordia are the phytohormones auxin, gibberellin and cytokinin.
One feature of a plant is its phyllotaxy. The arrangement of leaves on the plant body is such that the plant can maximally harvest light under the given constraints, and hence one might expect the trait to be genetically robust. However, it may not be so. In maize, a mutation in only one gene called abphyl (abnormal phyllotaxy) was enough to change the phyllotaxy of the leaves. It implies that sometimes mutational tweaking of a single locus on the genome is enough to generate diversity. The abphyl gene was later shown to encode a cytokinin response regulator protein.
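The light-harvesting geometry mentioned above can be illustrated with Vogel's classic model of spiral phyllotaxis, in which successive primordia are placed at the golden angle. This is a purely geometric sketch and says nothing about the abphyl gene itself; the constant c scaling the spacing is arbitrary.

```python
import math

GOLDEN_ANGLE = math.radians(137.507764)  # commonly cited divergence angle

def primordia(n, c=1.0):
    """Planar positions of successive primordia under Vogel's spiral model:
    r = c * sqrt(k), theta = k * golden angle."""
    return [(c * math.sqrt(k) * math.cos(k * GOLDEN_ANGLE),
             c * math.sqrt(k) * math.sin(k * GOLDEN_ANGLE)) for k in range(n)]

for x, y in primordia(5):
    print(f"{x:6.2f} {y:6.2f}")
```

Plotting a few hundred such points reproduces the familiar, densely packed sunflower-like arrangement.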
Once the leaf primordial cells are established from the SAM cells, the new axes for leaf growth are defined, one important (and more studied) among them being the abaxial-adaxial (lower-upper surface) axis. The genes involved in defining this, and the other axes seem to be more or less conserved among higher plants. Proteins of the HD-ZIPIII family have been implicated in defining the adaxial identity. These proteins deviate some cells in the leaf primordium from the default abaxial state, and make them adaxial. It is believed that in early plants with leaves, the leaves just had one type of surface - the abaxial one. This is the underside of today's leaves. The definition of the adaxial identity occurred some 200 million years after the abaxial identity was established. One can thus imagine the early leaves as an intermediate stage in evolution of today's leaves, having just arisen from spiny stem-like outgrowths of their leafless ancestors, covered with stomata all over, and not optimized as much for light harvesting.
How the infinite variety of plant leaves is generated is a subject of intense research. Some common themes have emerged. One of the most significant is the involvement of KNOX genes in generating compound leaves, as in tomato (see above). But this again is not universal. For example, pea uses a different mechanism for doing the same thing. Mutations in genes affecting leaf curvature can also change leaf form, by changing the leaf from flat, to a crinkly shape, like the shape of cabbage leaves. There also exist different morphogen gradients in a developing leaf which define the leaf's axis. Changes in these morphogen gradients may also affect the leaf form. Another very important class of regulators of leaf development are the microRNAs, whose role in this process has just begun to be documented. The coming years should see a rapid development in comparative studies on leaf development, with many EST sequences involved in the process coming online.
Molecular genetics has also shed light on the relation between radial symmetry (characteristic of stems) and dorsiventral symmetry (typical for leaves). James (2009) stated that "it is now widely accepted that... radiality [characteristic of most shoots] and dorsiventrality [characteristic of leaves] are but extremes of a continuous spectrum. In fact, it is simply the timing of the KNOX gene expression!" In fact there is evidence for this continuum already at the beginning of land plant evolution. Furthermore, studies in molecular genetics confirmed that compound leaves are intermediate between simple leaves and shoots, that is, they are partially homologous with simple leaves and shoots, since "it is now generally accepted that compound leaves express both leaf and shoot properties”. This conclusion was reached by several authors on purely morphological grounds.
Evolution of flowers
Flower-like structures first appear in the fossil records some ~130 mya, in the Cretaceous era.
The flowering plants have long been assumed to have evolved from within the gymnosperms; according to the traditional morphological view, they are closely allied to the gnetales. However, recent molecular evidence is at odds with this hypothesis, and further suggests that gnetales are more closely related to some gymnosperm groups than to angiosperms, and that gymnosperms form a clade distinct from the angiosperms. Molecular clock analysis places the divergence of flowering plants (anthophytes) and gymnosperms at roughly 300 million years ago.
The main function of a flower is reproduction, which, before the evolution of the flower and angiosperms, was the job of microsporophylls and megasporophylls. A flower can be considered a powerful evolutionary innovation, because its presence allowed the plant world to access new means and mechanisms for reproduction.
Origins of the flower
It seems that at the level of the organ, the leaf may be the ancestor of the flower, or at least some floral organs. When crucial genes involved in flower development are mutated, clusters of leaf-like structures arise in their place. Thus, sometime in history, the developmental program leading to formation of a leaf must have been altered to generate a flower. There probably also exists an overall robust framework within which the floral diversity has been generated. An example of that is a gene called LEAFY (LFY), which is involved in flower development in Arabidopsis thaliana. The homologs of this gene are found in angiosperms as diverse as tomato, snapdragon, pea, maize and even gymnosperms. Expression of Arabidopsis thaliana LFY in distant plants like poplar and citrus also results in flower production in these plants. The LFY gene regulates the expression of some genes belonging to the MADS-box family. These genes, in turn, act as direct controllers of flower development.
Evolution of the MADS-box family
The members of the MADS-box family of transcription factors play a very important and evolutionarily conserved role in flower development. According to the ABC model of flower development, three zones - A, B and C - are generated within the developing flower primordium by the action of transcription factors that are members of the MADS-box family. Among these, the functions of the B and C domain genes have been evolutionarily more conserved than those of the A domain gene. Many of these genes have arisen through gene duplications of ancestral members of this family. Quite a few of them show redundant functions.
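The combinatorial logic of the textbook ABC model fits in a few lines of code. The whorl-to-organ mapping below is the canonical one (A alone → sepals, A+B → petals, B+C → stamens, C alone → carpels); as discussed further below, real flowers can deviate from these rigid domains.

```python
# Schematic of the canonical ABC combinatorial code (textbook version).
ABC_CODE = {
    frozenset("A"): "sepal",
    frozenset("AB"): "petal",
    frozenset("BC"): "stamen",
    frozenset("C"): "carpel",
}

def organ_identity(domains):
    # Loss of all ABC activity yields leaf-like organs, consistent with the
    # leaf-to-flower relationship described earlier in this article.
    return ABC_CODE.get(frozenset(domains), "leaf-like/undefined")

for whorl, domains in enumerate(["A", "AB", "BC", "C"], start=1):
    print(f"whorl {whorl}: {organ_identity(domains)}")
print("no ABC activity:", organ_identity(""))
```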
The evolution of the MADS-box family has been extensively studied. These genes are present even in pteridophytes, but the spread and diversity is many times greater in angiosperms. There appears to be a clear pattern in how this family has evolved. Consider the evolution of the C-region gene AGAMOUS (AG). It is expressed in today's flowers in the stamens and the carpel, which are reproductive organs. Its ancestor in gymnosperms has the same expression pattern: it is expressed in the strobili, organs that produce pollen or ovules. Similarly, the ancestors of the B-genes (AP3 and PI) are expressed only in the male organs in gymnosperms, and their descendants in the modern angiosperms are likewise expressed only in the stamens, the male reproductive organs. Thus the same, then-existing components were used by plants in a novel manner to generate the first flower. This is a recurring pattern in evolution.
Factors influencing floral diversity
How is the enormous diversity in the shape, color and size of flowers established? There is enormous variation in the developmental program in different plants. For example, monocots possess structures like lodicules and paleas, which were believed to be analogous to the dicot petals and carpels respectively. It turns out that this is true, and the variation is due to slight changes in the MADS-box genes and their expression patterns in the monocots. Another example is that of the toad-flax, Linaria vulgaris, which has two kinds of flower symmetry: radial and bilateral. These symmetries are due to changes in the copy number, timing and location of expression of CYCLOIDEA, which is related to TCP1 in Arabidopsis.
Arabidopsis thaliana has a gene called AGAMOUS that plays an important role in defining how many petals, sepals and other organs are generated. Mutations in this gene give the floral meristem an indeterminate fate, and floral organs keep on being produced. Flowers such as roses, carnations and morning glories, for example, have very dense floral organs, and have long been selected by horticulturists for an increased number of petals. Researchers have found that the morphology of these flowers results from strong mutations in the AGAMOUS homolog in these plants, which leads them to make a large number of petals and sepals. Several studies on diverse plants like petunia, tomato, Impatiens and maize have suggested that the enormous diversity of flowers is a result of small changes in the genes controlling their development.
Some of these changes also cause changes in expression patterns of the developmental genes, resulting in different phenotypes. The Floral Genome Project looked at EST data from various tissues of many flowering plants. The researchers confirmed that the ABC model of flower development is not conserved across all angiosperms. Sometimes expression domains change, as in the case of many monocots and also in some basal angiosperms like Amborella. Different models of flower development, like the fading-boundaries model or the overlapping-boundaries model, which propose non-rigid domains of expression, may explain these architectures. There is a possibility that from the basal to the modern angiosperms, the domains of floral architecture have become more and more fixed through evolution.
Flowering time
Another floral feature that has been a subject of natural selection is flowering time. Some plants flower early in their life cycle, while others require a period of vernalization before flowering. This decision is based on factors like temperature, light intensity, the presence of pollinators and other environmental signals. In Arabidopsis thaliana it is known that genes like CONSTANS (CO), FRIGIDA, Flowering Locus C (FLC) and FLOWERING LOCUS T (FT) integrate the environmental signals and initiate the flower development pathway. Allelic variation in these loci has been associated with flowering-time variation between plants. For example, Arabidopsis thaliana ecotypes that grow in cold temperate regions require prolonged vernalization before they flower, while tropical varieties and common lab strains do not. Much of this variation is due to mutations in the FLC and FRIGIDA genes, rendering them non-functional.
Many genes in the flowering-time pathway are conserved across all plants studied to date. However, this does not mean that the mechanism of action is similarly conserved. For example, the monocot rice accelerates its flowering in short-day conditions, while Arabidopsis thaliana, a eudicot, responds to long-day conditions. In both plants the proteins CO and FT are present, but in Arabidopsis thaliana CO enhances FT production, while in rice the CO homolog represses FT production, resulting in completely opposite downstream effects.
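The opposite wiring of the same two components can be captured in a toy Boolean model, sketched in Python below. The 12-hour day-length threshold and the reduction of the pathway to a single switch are simplifications for illustration; the rice homologs of CO and FT are known as Hd1 and Hd3a, but all other detail of real photoperiod signalling is abstracted away.

    def flowering_promoted(species, day_length_hours):
        co_active = day_length_hours > 12  # crude stand-in for long-day CO accumulation
        if species == "Arabidopsis thaliana":
            return co_active       # CO activates FT: long days promote flowering
        if species == "rice":
            return not co_active   # the CO homolog (Hd1) represses the FT homolog (Hd3a)
        raise ValueError("species not modelled")

    print(flowering_promoted("Arabidopsis thaliana", 16))  # True: long-day plant
    print(flowering_promoted("rice", 16))                  # False: short-day plant

The point of the sketch is that identical parts wired with one inverted edge yield opposite physiology, which is the relationship the rice and Arabidopsis data describe.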
Theories of flower evolution
There are many theories that propose how flowers evolved. Some of them are described below.
The Anthophyte Theory was based on the observation that the gymnosperm family Gnetaceae has flower-like ovules. These have partially developed vessels, as found in the angiosperms, and the megasporangium is covered by three envelopes, like the ovary structure of angiosperm flowers. However, many other lines of evidence show that gnetophytes are not related to angiosperms.
The Mostly Male Theory has a more genetic basis. Proponents of this theory point out that gymnosperms have two very similar copies of the gene LFY, while angiosperms have only one. Molecular clock analysis has shown that the other LFY paralog was lost in angiosperms around the same time as flower fossils became abundant, suggesting that this event might have led to floral evolution. According to this theory, the loss of one LFY paralog led to flowers that were more male, with the ovules being expressed ectopically. These ovules initially performed the function of attracting pollinators, but sometime later may have been integrated into the core flower.
Adaptive function of flowers
In 1878 Charles Darwin published a book "The Effects of Cross and Self-Fertilization in the Vegetable Kingdom" and in the initial paragraph of chapter XII noted "The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented." Flowers likely emerged in plant evolution as an adaptation to facilitate cross-fertilisation (outcrossing), a process that allows the masking of recessive deleterious mutations in the genome of progeny. This masking effect is referred to as genetic complementation. This beneficial effect of cross-fertilisation on progeny is also considered to be the basis of hybrid vigor or heterosis. Once flowers became established in a lineage with the adaptive function of promoting cross-fertilization, subsequent switching to inbreeding usually then becomes disadvantageous, in large part because it allows expression of the previously masked deleterious recessive mutations, i.e. inbreeding depression. Also, meiosis, the process in flowering plants by which seed progeny are produced, provides a direct mechanism for repairing germ-line DNA through genetic recombination. Thus, in flowering plants, the two fundamental aspects of sexual reproduction are cross-fertilization (outcrossing) and meiosis, and these appear to be maintained respectively by the advantages of genetic complementation and recombinational repair of germline DNA.
Evolution of secondary metabolism
Plant secondary metabolites are low-molecular-weight compounds, sometimes with complex structures, that have no essential role in primary metabolism. They function in processes such as anti-herbivory, pollinator attraction, communication between plants, allelopathy, maintenance of symbiotic associations with soil flora and enhancement of the rate of fertilization. Secondary metabolites have great structural and functional diversity, and many thousands of enzymes may be involved in their synthesis, coded for by as much as 15–25% of the genome. Many plant secondary metabolites, such as the colour and flavor components of saffron and the chemotherapeutic drug taxol, are of culinary and medical significance to humans and are therefore of commercial importance. In plants they seem to have diversified using mechanisms such as gene duplication, the evolution of novel genes and the development of novel biosynthetic pathways. Studies have shown that diversity in some of these compounds may be positively selected for. Cyanogenic glycosides have been proposed to have evolved multiple times in different plant lineages, and there are several other instances of convergent evolution. For example, the enzymes for synthesis of limonene – a terpene – are more similar between angiosperms and gymnosperms than to the other terpene synthesis enzymes of their own lineages. This suggests independent evolution of the limonene biosynthetic pathway in these two lineages.
Mechanisms and players in evolution
While environmental factors are significantly responsible for evolutionary change, they act merely as agents for natural selection. Some of the changes develop through interactions with pathogens. Change is inherently brought about via phenomena at the genetic level – mutations, chromosomal rearrangements and epigenetic changes. While the general types of mutations hold true across the living world, in plants, some other mechanisms have been implicated as highly significant.
Polyploidy is a very common feature in plants. It is believed that at least half of all plants are or have been polyploid. Polyploidy leads to genome doubling, thus generating functional redundancy in most genes. The duplicated genes may attain new functions, either through changes in expression pattern or changes in activity. Polyploidy and gene duplication are believed to be among the most powerful forces in the evolution of plant form. It is not known, though, why genome doubling is such a frequent process in plants. One possible reason is the production of large amounts of secondary metabolites in plant cells: some of them might interfere with the normal process of chromosomal segregation, leading to polyploidy.
In recent times, plants have been shown to possess significant microRNA families, which are conserved across many plant lineages. In comparison to animals, the number of plant miRNA families is smaller, but the size of each family is much larger. The miRNA genes are also much more spread out in the genome than those in animals, where they are found clustered. It has been proposed that these miRNA families have expanded by duplications of chromosomal regions. Many miRNA genes involved in the regulation of plant development have been found to be quite conserved among the plants studied.
Domestication of plants such as maize, rice, barley and wheat has also been a significant driving force in their evolution. Studies on the origins of maize have found that it is a domesticated derivative of a wild plant from Mexico called teosinte. Teosinte belongs to the genus Zea, just as maize does, but bears a very small inflorescence, 5–10 hard cobs and a highly branched and spread-out stem.
Crosses between a particular teosinte variety and maize yield fertile offspring that are intermediate in phenotype between maize and teosinte. QTL analysis has also revealed some loci that when mutated in maize yield a teosinte-like stem or teosinte-like cobs. Molecular clock analysis of these genes estimates their origins to some 9000 years ago, well in accordance with other records of maize domestication. It is believed that a small group of farmers must have selected some maize-like natural mutant of teosinte some 9000 years ago in Mexico, and subjected it to continuous selection to yield the maize plant as known today.
Another case is that of cauliflower. The edible cauliflower is a domesticated version of the wild plant Brassica oleracea, which does not possess the dense undifferentiated inflorescence, called the curd, that cauliflower possesses. Cauliflower possesses a single mutation in a gene called CAL, controlling meristem differentiation into inflorescence. This causes the cells at the floral meristem to gain an undifferentiated identity, and instead of growing into a flower, they grow into a lump of undifferentiated cells. This mutation has been selected through domestication at least since the Greek empire.
See also
Plant morphology
Comparative phylogenetics
Plant evolution
Evolutionary history of plants
References
Further reading
The Genetics of plant morphological evolution
Plant evolution and development in a post-genomic context
Evolution of leaf developmental mechanisms
Developmental genetics and plant evolution
Evolutionary Developmental Biology
Evolutionary biology
Evolution of plants | Plant evolutionary developmental biology | [
"Biology"
] | 7,162 | [
"Evolutionary biology",
"Evolution of plants",
"Plants"
] |
13,572,167 | https://en.wikipedia.org/wiki/SecPAL | SecPAL is a declarative, logic-based, security policy language that has been developed to support the complex access control requirements of large scale distributed computing environments.
Common access control requirements
Here is a partial list of some of the challenges that SecPAL addresses (a schematic policy example follows the list):
How does an organization establish a fine-grained trust relationship with another organization across organizational boundaries?
How does a user delegate a subset of their rights (constrained delegation) to another user residing either in the same organization or in a different organization?
How can access control policy be authored and reviewed in a manner that is human readable - allowing auditors and non-technical people to understand such policies?
How does an organization support compliance regulations requiring that a system be able to demonstrate exactly why it was that a user was granted access to a resource?
How can policies be authored, composed and evaluated in a manner that is efficient, deterministic and tractable?
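As a schematic illustration of constrained delegation (the second challenge above), consider the following pair of assertions, paraphrased from the style of the examples in the formal-model paper; the principal names, the resource name and the exact concrete syntax are illustrative only, not verified SecPAL source:

    Admin says HR can say x can read file://projects/report
    HR says Alice can read file://projects/report if Alice is an employee

From these, an evaluator can deduce that Admin says Alice can read the resource, provided the condition holds; because every step of that deduction is recorded against explicit assertions, the same machinery also answers the compliance question of exactly why access was granted.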
Architecture
The SecPAL Research homepage includes links to the following papers which describe the architecture of SecPAL at varying levels of abstraction.
SecPAL Formal Model ("Design and Semantics of a Decentralized Authorization Language") – Formal description of the abstract types, language semantics and evaluation rules that support deterministic evaluation in efficient time.
SecPAL Schema Specification – Specification describing a practical XML based implementation of the formal model targeted at supporting access control requirements of distributed applications
.NET Research Implementation of SecPAL – C# implementation, C# samples for common authz patterns, and comprehensive developer documentation and a getting started tutorial
Additional research
IEEE Grid 2007 - Fine Grained Access Control Using SecPAL
SecPAL for Privacy
References
Computer languages | SecPAL | [
"Technology"
] | 335 | [
"Computer science",
"Computer languages"
] |
13,572,354 | https://en.wikipedia.org/wiki/Folin%27s%20reagent | Folin's reagent or sodium 1,2-naphthoquinone-4-sulfonate is a chemical reagent used as a derivatizing agent to measure levels of amines and amino acids. The reagent reacts with them in alkaline solution to produce a fluorescent material that can be easily detected.
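Mechanistically, this derivatization is generally described as the primary amine displacing the sulfonate group at the 4-position of the quinone ring; schematically (not balanced in detail, and offered here as a common textbook description rather than a claim from this source, with RNH2 a primary amine):

    1,2-naphthoquinone-4-sulfonate + RNH2 → 4-(R-amino)-1,2-naphthoquinone + sulfite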
This should not be confused with the Folin-Ciocalteu reagent, which is used to detect phenolic compounds.
The Folin reagent can be used with an acidic secondary reagent to distinguish MDMA and related compounds from PMMA and related compounds.
See also
Pill testing
Sullivan reaction
Dille–Koppanyi reagent
Froehde reagent
Liebermann reagent
Mandelin reagent
Marquis reagent
Mecke reagent
Simon's reagent
Zwikker reagent
References
Biochemistry detection reactions
Chemical tests
Drug testing reagents
1,2-Naphthoquinones
Naphthalenesulfonates | Folin's reagent | [
"Chemistry",
"Biology"
] | 213 | [
"Biochemistry detection reactions",
"Chemical tests",
"Biochemical reactions",
"Drug testing reagents",
"Microbiology techniques"
] |
13,572,433 | https://en.wikipedia.org/wiki/Gough%E2%80%93Joule%20effect | The Gough–Joule effect (a.k.a. the Gow–Joule effect) originally denoted the tendency of elastomers to contract when heated while under tension; elastomers that are not under tension do not show this effect. The term is also used more generally to refer to the dependence of the temperature of any solid on its mechanical deformation. The effect can be observed in the nylon strings of classical guitars, where a string under tension contracts as a result of heating. It is due to the decrease of entropy when long chain molecules are stretched.
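The entropic explanation can be stated as a standard thermodynamic identity. For a sample of length L held at temperature T, the retractive force is (in LaTeX notation)

    f = \left(\frac{\partial U}{\partial L}\right)_T - T\left(\frac{\partial S}{\partial L}\right)_T

For an ideal elastomer the internal-energy term is negligible, and stretching reduces the entropy of the chains, so (∂S/∂L)_T < 0 and the force is positive and roughly proportional to the absolute temperature. Under a fixed load, heating therefore raises the retractive force and the sample shortens. (Treating the internal-energy term as zero is the usual idealisation and is only approximate for real rubbers.)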
If an elastic band is first stretched and then subjected to heating, it will shrink rather than expand. This effect was first observed by John Gough in 1802, and was investigated further by James Joule in the 1850s, when it then became known as the Gough–Joule effect.
Examples in Literature:
Popular Science magazine, January 1972: "A stretched piece of rubber contracts when heated. In doing so, it exerts a measurable increase in its pull. This surprising property of rubber was first observed by James Prescott Joule about a hundred years ago and is known as the Joule effect."
Rubber as an Engineering Material (book), by Khairi Nagdi: "The Joule effect is a phenomenon of practical importance that must be considered by machine designers. The simplest way of demonstrating this effect is to suspend a weight on a rubber band sufficient to elongate it at least 50%. When the stretched rubber band is warmed up by an infrared lamp, it does not elongate because of thermal expansion, as may be expected, but it retracts and lifts the weight."
The effect is important in O-ring seal design, where the seals can be mounted in a peripherally compressed state in hot applications to prolong life.
The effect is also relevant to rotary seals which can bind if the seal shrinks due to overheating.
References
External links
O-ring Gland design notes, PSP Inc.
A solar power science project using the Gow-Joule effect
Elastomers
Condensed matter physics
Rubber properties
James Prescott Joule | Gough–Joule effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 434 | [
"Materials science stubs",
"Synthetic materials",
"Phases of matter",
"Materials science",
"Elastomers",
"Condensed matter physics",
"Condensed matter stubs",
"Matter"
] |
13,573,006 | https://en.wikipedia.org/wiki/Dansyl%20chloride | Dansyl chloride or 5-(dimethylamino)naphthalene-1-sulfonyl chloride is a reagent that reacts with primary amino groups in both aliphatic and aromatic amines to produce stable blue- or blue-green–fluorescent sulfonamide adducts. It can also be made to react with secondary amines. Dansyl chloride is widely used to modify amino acids; specifically, protein sequencing and amino acid analysis.
Dansyl chloride may also be denoted DNSC. Likewise, a similar derivative, dansyl amide, is known as DNSA.
In addition, these protein-DNSC conjugates are sensitive to their immediate environment. This, in combination with their ability to accept energy (as in fluorescence resonance energy transfer) from the amino acid tryptophan, allows this labeling technique to be used in investigating protein folding and dynamics.
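The tryptophan-to-dansyl energy transfer mentioned above follows the standard Förster relation (an explanatory addition here, not a claim specific to this source): with r the donor–acceptor distance and R_0 the Förster radius of the pair, the transfer efficiency is (in LaTeX notation)

    E = \frac{1}{1 + (r/R_0)^6}

so a measured efficiency reports on nanometre-scale distance changes during folding; the value of R_0 for a given tryptophan–dansyl pair must be taken from the literature.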
The fluorescence of these sulfonamide adducts can be enhanced by adding alpha-cyclodextrin. Dansyl chloride is unstable in dimethyl sulfoxide, which should never be used to prepare solutions of the reagent.
The extinction coefficients of dansyl derivatives are important for measuring their concentrations in solution. Dansyl chloride is one of the simplest sulfonamide derivatives, so it commonly serves as a starting reagent for the production of other derivatives. Exotic derivatives may have very different extinction coefficients, but others, such as dansyl amide, are similar to dansyl chloride in absorption and fluorescence characteristics. Even for dansyl chloride, however, a variety of extinction coefficient values have been reported. Some of the values are used to estimate the extent of success in attempts to conjugate the dye to a protein; other values may be used to determine a precise concentration of a stock solution.
In the reported studies, the absorption value is always taken at the maximum that appears between 310 nm and 350 nm. The peak is broad, so the measurement is not very sensitive to wavelength miscalibration of the spectrophotometer, and error due to miscalibration can be avoided by taking the value at the maximum instead of strictly using 330 nm.
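Converting such an absorbance reading into a concentration is a direct Beer–Lambert calculation, as in the minimal Python sketch below; the extinction coefficient used is a placeholder, not one of the reported values, and should be replaced with an appropriate figure from the literature.

    # Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l)
    EPSILON = 4000.0   # M^-1 cm^-1; placeholder value, replace from the literature
    PATH_CM = 1.0      # cuvette path length in cm

    def concentration_molar(absorbance, epsilon=EPSILON, path_cm=PATH_CM):
        # Estimate molar concentration from absorbance at the band maximum.
        return absorbance / (epsilon * path_cm)

    print(concentration_molar(0.25))   # ~6.3e-05 M with the placeholder coefficient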
Preparation
This compound may be prepared by reacting the corresponding sulfonic acid with excess phosphorus oxychloride (POCl3) at room temperature.
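Schematically (unbalanced, with the phosphorus-containing byproducts omitted), this is the standard sulfonic acid to sulfonyl chloride conversion:

    5-(dimethylamino)naphthalene-1-sulfonic acid  --POCl3 (excess), room temperature-->  dansyl chloride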
References
Chemical tests
Reagents for organic chemistry
Sulfonyl halides
Naphthalenes
Dimethylamino compounds | Dansyl chloride | [
"Chemistry"
] | 518 | [
"Chemical tests",
"Reagents for organic chemistry"
] |
13,573,117 | https://en.wikipedia.org/wiki/Monkland%20Railways | The Monkland Railways was a railway company formed in 1848 by the merger of three "coal railways" that had been built to serve coal and iron pits around Airdrie in Central Scotland, and connect them to canals for onward transport of the minerals. The newly formed company had a network stretching from Kirkintilloch to Causewayend, near Linlithgow. These coal railways had had mixed fortunes; the discovery of blackband ironstone and the development of the iron smelting industry around Coatbridge had led to phenomenal success, but hoped-for mineral discoveries in the moorland around Slamannan had been disappointing. The pioneering nature of the railways left them with a legacy of obsolete track and locomotives, and new, more modern, railways were being built around them.
The new company responded with connections to other lines, and to Bo'ness Harbour, and built new lines to Bathgate, but it was taken over by the Edinburgh and Glasgow Railway in 1865. Much of the network was dependent on proximity to pits and ironworks and as those became worked out or declined, the traffic on the network declined too, but the Coatbridge - Airdrie - Bathgate line remained open for passengers until 1956. The section east of Airdrie then closed, except for minor freight movements, but it was reopened in 2010, forming a through passenger route between Glasgow and Edinburgh via Airdrie and Bathgate. Part of the Bo'ness extension line was re-opened as the Bo'ness and Kinneil Railway, a heritage line. The remainder of the system has closed.
The North Monkland Railway was an independent line built to serve pits and quarries to the north of Airdrie beyond the reach of the Monkland Railways system. It opened in 1878 and was taken over in 1888, but it closed in the 1960s.
Origins: the coal railways
Monkland and Kirkintilloch Railway
In 1826 the Monkland and Kirkintilloch Railway (M&KR) opened, with the primary purpose of carrying coal from the Monklands collieries, south of Airdrie to Kirkintilloch, from where it could continue to market in Glasgow and Edinburgh over the Forth and Clyde Canal. As a pioneering railway, it adopted a track gauge of 4 ft 6 in, and at first operated as a toll line, allowing independent hauliers to move wagons, using horse traction. It later acquired steam locomotives and ran trains itself. At first it was successful, and when the iron smelting industry became a huge success within the railway's area, it became even more successful.
Ballochney Railway
As coal extraction developed, pits were opened further north and east than the M&KR reached, and the Ballochney Railway was constructed to serve some of them, running from Kipps, near Coatbridge, to pits around Arbuckle and Clarkston, and a quarry. It opened in 1828. The area it reached was on high ground, and two rope-worked inclines were necessary to gain altitude.
Garnkirk and Glasgow Railway
The Garnkirk and Glasgow Railway was opened in 1831, connecting the Monklands directly to Glasgow without the need for transshipment to a canal.
Wishaw and Coltness Railway
The Wishaw and Coltness Railway opened from 1833, connecting iron pits and works further east to Whifflet (then spelt Whifflat) for access to the Coatbridge ironworks.
Slamannan Railway
There was a large area of undeveloped moorland between Airdrie and the banks of the Forth, and a railway was promoted to develop the region. There were optimistic ideas of serving new collieries in the area, as well as the advantage of connecting Monklands to Edinburgh more directly. The Slamannan Railway opened in 1840 between Arbuckle and Causewayend, a wharf on the Union Canal; it had a rope worked incline down to the wharf. Onward transport to Edinburgh involved transshipment to canal barges.
Main line railways
The M&KR and the Ballochney companies enjoyed huge commercial success as the iron smelting industry boomed around Coatbridge, and as successful new mineral extraction started around Airdrie, although the Slamannan company's sought-for new mineral business barely materialised. The coal railways collectively worked in a loose collaboration.
At the same time new intercity railways were being promoted, and suddenly the coal railways' disadvantages seemed dominant. Their near monopoly of mineral traffic in very small areas now seemed to exclude them from areas where new business was being developed, a problem emphasised by their terminating points at canal basins, which required transshipment of traffic travelling onward. Their primitive track on stone block sleepers and their distinct track gauge of 4 ft 6 in also necessitated transshipment where they connected with the new standard gauge lines. Their obsolete locomotives, horse haulage by independent hauliers in some parts, the rope-worked inclines and the antiquated operating methods were all considerable disadvantages.
In 1842 the Edinburgh and Glasgow Railway (E&GR) opened its main line (to Haymarket at first) on the standard gauge of 4 ft 8½ in with modern locomotives. At this time the Caledonian Railway was promoting a new trunk line from Carlisle to Glasgow and Edinburgh; it got its authorising act of Parliament, the Edinburgh and Glasgow Railway Act 1845 (8 & 9 Vict. c. xci), in 1845 and opened in 1847/1848. It sought acquisition of the Wishaw and Coltness Railway and the Garnkirk and Glasgow Railway to get access to Glasgow, and it concluded a lease of those lines. Suddenly those lines were out of the group of mutually friendly coal railways, and soon they were simply part of the Caledonian Railway.
The three other coal railways (M&KR, Ballochney and Slamannan) decided that their interests lay in collaboration, and they formed a joint working arrangement from 29 March 1845; in effect the three companies worked as one.
In 1844 the M&KR had built a short spur to transshipment sidings with the E&GR at Garngaber, a little east of the present-day Lenzie station. The inconvenience of the transshipment emphasised the disadvantage of the now non-standard track gauge, and it was decided to change the track gauge to standard gauge. They got Parliamentary authority and made the change on 26 July and 27 July 1847.
Operating costs were high: from 1845 to 1848 the operating ratio (working expenses as a proportion of gross receipts) for the three railways that formed the Monkland Railways averaged 55%. Giving evidence at the hearing of the Monklands Amalgamation Bill in 1848, George Knight, secretary and general manager of the three railways, explained that:
The Monklands complex consisted of 36 miles of railway proper and 12 miles of sidings, and had connected it with another 48 miles of private railways built by the various extractive and industrial interests. Although a through journey of 25 miles was possible on the system—from the eastern end of the Slamannan to the Kirkintilloch canal basin—30% of all traffic travelled less than a mile, and half of it less than 2½ miles. Hence locomotives were involved in a ceaseless pattern of stopping and shunting, and averaged only 24 miles per day against the 90 miles normal on the Edinburgh & Glasgow. The sidings were expensive to work, and even private sidings required main line points which had to be renewed every three or four years ... these numerous points also meant the employment of a large number of men to supervise them. Traders could also benefit from using the company's waggons, and were not charged for their use on sidings and private lines. [The waggons] averaged only 5¼ miles per day against 23 miles on the Edinburgh & Glasgow.
Formal merger
In 1846 it became clear that the E&GR directors favoured a purchase of the coal railways, giving it immediate access to the collieries and ironworks, and gaining possession of the territory against newly promoted lines. Such a sale appeared at first to please everyone, but Lancashire shareholders in the E&GR felt that the terms of such a takeover were too favourable to the small Scottish lines, and a major row broke out in the E&GR: the scheme was dropped. In this period, numerous other railways were promoted and alliances seemed to be formed and abandoned quickly, but the only large newcomers were the E&GR and the Caledonian Railway.
Having been rebuffed by the E&GR, the Monkland companies decided upon a formal merger, and obtained the necessary sanction by an act of Parliament (11 & 12 Vict. c. cxxxiv) on 14 August 1848. The new Monkland Railways Company was formed with a nominal share capital of £329,880, the sum of the capital of the three former companies; the shares were converted as follows:
Monkland and Kirkintilloch Railway £25 shares converted to £22 16s 0d in Monkland Railways shares
Ballochney Railway £25 shares converted to £40 10s 10d in Monkland Railways shares
Slamannan Railway £50 shares converted to £22 15s 10d in Monkland Railways shares.
With revenue of about £100,000 annually it was a profitable concern.
New lines
Slamannan Junction Railway
The Slamannan Railway terminated at Causewayend, a wharf on the Union Canal. This was close to the new E&GR main line, and a connection seemed desirable. An independent company, the Slamannan Junction Railway, was formed to build the link; its submission to Parliament was supported financially by the E&GR and the Monkland joint companies together. In fact its shareholders sold the company to the E&GR immediately after obtaining the enabling act of Parliament, the Slamannan Junction Railway Act 1844 (7 & 8 Vict. c. lxx), and the E&GR built the line from Bo'ness Junction (later renamed Manuel High Level) on the E&GR main line to Causewayend. The short line was completed by January 1847, but remained dormant until the Monkland lines altered their track to standard gauge in August 1847.
Bo'ness
The harbour at Borrowstounness (Bo'ness) was also not far from Causewayend, and a connection to it was desirable, enabling export and coastwise mineral trade. In addition there were ironstone pits and blast furnaces at Kinneil. The nominally independent Slamannan and Borrowstounness Railway (S&BR) had been promoted by the Slamannan company to connect to Bo'ness Harbour, with a link to the E&GR west of Bo'ness Junction (later Manuel) so aligned as to allow through running from the Polmont direction to Bo'ness. The unbuilt line was absorbed into the Monkland Railways at the time of formation of that company, but the subscribed capital of £105,000 was to be kept separate. The Slamannan and Borrowstounness Railway Act 1846 (9 & 10 Vict. c. cvii) of 26 June 1846 specified that the Union Canal was to be crossed by a drawbridge or swing bridge, and that screens were to be provided to avoid frightening horses drawing barges on the canal. In fact the E&GR made considerable difficulties over the construction of the new bridge to pass the S&BR line under their own main line, and construction was delayed until 1848. With a resumption of friendly relations, it now appeared that some construction could be avoided if Slamannan to Bo'ness trains used the Slamannan Junction line to Bo'ness Junction on the E&GR and then the proposed Bo'ness Junction connection towards Bo'ness, so that trains would join and then immediately leave the E&GR main line.
In 1850, as construction was progressing, it was belatedly realised that the configuration of the junctions on the E&GR main line was such that a through movement would be impossible; trains would have to shunt back on the E&GR main line. In addition the E&GR made stipulations about the composition of the Monkland wagon wheels which were impracticable to comply with. Accordingly, the Monkland Railways decided (in May 1850) to complete the originally intended through line from Causewayend after all. The E&GR took umbrage at this and put further difficulties in the way of the underbridge construction, and the dispute dragged on until May 1851. The Monkland Railways then obtained a fresh act of Parliament (14 & 15 Vict. c. lxii) authorising some deviations of the new line, and the substitution of a fixed bridge over the Union Canal.
The approach to Bo'ness Harbour itself was to be along the foreshore there, and the company was obliged to build a promenade on the sea side of the railway line there. John Wilson, the proprietor of important iron works at Kinneil, obtained permission to run some mineral trains there while the line was still under construction, and the first trains ran from Arden on 17 March 1851, but opening from the E&GR line at Bo'ness Junction (Manuel) took place in early August 1851, with the undesirable backshunt on the E&GR main line now apparently permitted. Full opening of the through line took place on 22 December 1851.
Passenger traffic started, after some difficulties in obtaining approval, on 10 June 1856.
Bathgate
The Bathgate Chemical Works was established in 1851, in open country a mile or so south of the town. James Young, an industrial chemist, had developed an industrial process of manufacturing paraffin from torbanite, a type of oil shale. He had obtained a patent for the process in October 1850, and the torbanite had been discovered on the Torbanehill estate, about halfway between Bathgate and Whitburn. Young joined in partnership with Edward William Binney and Edward Meldrum and the Bathgate works started operations in February 1851. It was located alongside the Wilsontown, Morningside and Coltness Railway (WM&CR) on its branch to Bathgate.
The chemical works, the torbanite fields, and the coal deposits in the area generally were attractive as a source of revenue for the Monkland Railways, and in July 1853 the company obtained an act of Parliament (16 & 17 Vict. c. xc) giving powers to construct a railway from Blackstone (often spelt Blackston) on the Slamannan line just east of Avonbridge to the WM&CR line near Boghead. Boghead is immediately south of Bathgate, and the new line would pass through the torbanite fields, but skirt past Bathgate and join the WM&CR facing away from the town, but towards the works. In addition, a branch from the WM&CR to Armadale Toll and to collieries at Cowdenhead (about a mile west of Armadale town; later Woodend Junction) was authorised.
A train of coal wagons passed along the Bathgate branch on 11 June 1855, apparently while the line was still in the possession of the contractors. The company applied for authority to run passenger trains to Bathgate; this was repeatedly refused: there were neither platforms nor a turntable at Bathgate, nor any signalling there or at Blackstone. The Board of Trade inspector visited the line in 1856 to review the proposals for passenger operation; he reported that there was no turntable at Bathgate, but that one had been ordered. He continued:
The Bathgate and Bo'ness [routes] form a junction at Blackstone; from thence the traffic of the two branches will be conducted separately along the single line common to both, as far as Avon Bridge, a distance of three-quarters of a mile, then they will be united in one train, and proceed to Glasgow. To prevent any danger along the portion of line common to the two branches, the Bathgate train, both in going and returning, will have the precedence: the signal man at Blackstone will have instructions not to turn off the signal of the Boness branch until the Bathgate train has passed on its way to Avon-Bridge; of the train proceeding to Bathgate and Boness, the latter will follow the Bathgate train at an interval not less than five minutes.
The turntable was provided, and Monkland Railways passenger operation to Bathgate started on 7 July 1856. The Bathgate station was at the end of Cochrane Street, and later became Bathgate Lower station.
Calderbank
The 1853 act also gave authority for a branch from Colliertree, near Rawyards, southwards to Brownsburn, where the Calderbank Iron Works would join it with an internal private railway. The Monkland Railways portion was to be 1 mile 32 chains (2.3 km). The mineral line was opened on 1 October 1855. (Some contemporary maps misleadingly refer to the Clarkston line at Rawyards as "the Brownsburn Branch".)
Closing the gap
The Monkland Iron and Steel Company had extensive mineral workings in the Armadale area at Cowdenhead, now connected to the extension from Bathgate, and their iron works was at Calderbank, near Airdrie. There was immediately a considerable traffic from the mines to the works, and it made a long detour, starting eastwards from Armadale, away from the direction of Calderbank, and then round via Slamannan. The company observed that the gap of ten miles could be closed relatively cheaply, and that a direct line would also connect worthwhile coalfields on the way, as well as the important paper works at Caldercruix. An act of Parliament (20 & 21 Vict. c. lxxviii) was obtained for the purpose in July 1857, in the teeth of considerable opposition from rival promoters and others.
The act authorised a large number of branch connections and other lines, and these were constructed in priority order, with the central part of the through connection delayed.
First was a short westwards extension from Cowdenhead to Standhill Junction, and from there turning back to Craigmill (otherwise known as the Woodend Branch), opened on 1 November 1858, to serve the Coltness Iron Company's mineral workings there. Similarly a short eastwards extension was made from a junction to the Clarkston Wester Monkland branch back to Stepends, with a short branch there for Wilson & Co of Summerlee Iron Works. Wilson built an internal network with a zigzag to gain height on Annies Hill. A further branch turned back from Barblues to Meadowhead Pit. The pit was close to the Ballochney workings, but the location was referred to then as Planes, later spelt Plains. These extensions were completed by early February 1860. However the Stepends branch was short lived: it closed in 1878.
That left two sections. The first was the gap from Barblues (sometimes spelt Barbleus, near Stepends) to Standhill Junction (near Blackridge), where the junction was with the uncompleted Shotts Iron Works line (below); this section was completed by 27 April 1861, when a trial mineral train passed over the line, with full opening to mineral trains about 10 May 1861. This enabled through running from Coatbridge to Bathgate, but over the Ballochney inclines and running north of Airdrie.
The second gap was the line south of Airdrie, from Sunnyside Junction to Brownieside Junction, avoiding the rope worked inclines. This may have opened, also for mineral traffic only, in early August 1861.
Passenger working between Coatbridge and Bathgate started on 11 August 1862; however, there was as yet no direct route to Glasgow, except over the former Garnkirk railway, by then part of the Caledonian.
The New Line is sometimes referred to as the Bathgate and Coatbridge Railway, but it was never independent of the Monkland Railways. However an independent Bathgate, Airdrie and Coatbridge Railway had been proposed in 1856.
Shotts iron works
The important iron works at Shotts was connected to the Wilsontown, Morningside and Coltness Railway, but the works owner evidently wanted an alternative carrier, and approached the Monklands company to propose a branch line southwards from the "new line". This was agreed to, and an act of Parliament (23 & 24 Vict. c. clxxviii) giving authority for the -mile line was obtained in August 1860. The line opened by 5 February 1862. A short branch off the branch, to West Benhar, was built in 1864.
Absorbed by the E&GR
The Monkland Railways Company was absorbed by the Edinburgh and Glasgow Railway by the Edinburgh and Glasgow and Monkland Railways Amalgamation Act 1865 (28 & 29 Vict. c. ccxvii), dated 5 July 1865, on 31 July 1865. The following day, that company was itself absorbed by the North British Railway.
The larger company used the acquisition to consolidate its dominance of mineral traffic in the Monklands coalfield and in connection with the iron works in the area. The Monklands section it had acquired was profitable, although its operating costs were very high, and it was concentrated in mining areas generally remote from the large population centres. However the best of the mineral deposits had been worked out, and the focus of the extractive industries had shifted into Caledonian Railway territory.
The North British Railway set about rectifying the lack of good connection to Glasgow, and in 1871 the Coatbridge to Glasgow line was opened, from Whifflet. For the time being the Glasgow terminal was inconveniently located at College, later High Street, but the growth of daily travel to work by suburban train motivated the NBR to work towards a better network in the city. The Airdrie terminal of the Ballochney Railway (Hallcraig Street) was closed to passengers in 1870.
North Monkland Railway
Coal extraction continued to flourish in the second half of the nineteenth century, and new pits opened throughout the Monklands area. Many of these were remote from the network of the Monklands section of the North British Railway, and many private mineral branch lines and tramways were built to close the gaps. Quarrying was also an important activity.
A new railway was promoted to reach some of the pits and quarries north of the Ballochney and Slamannan lines, and the North Monkland Railway obtained its authorising act of Parliament (35 & 36 Vict. c. xci) on 18 July 1872. The line was opened on 18 February 1878, and carried goods and mineral traffic only. It ran from Kipps via Nettlehole and Greengairs to join the Slamannan line at Southfield Row, an existing colliery spur south of Longriggend.
It connected into numerous collieries on the route, and many short mineral lines were built off the main line to connect the pits.
The line sold itself to the North British Railway effective from 31 July 1888, the £10 shares being bought out at £6 each.
The twentieth century
The Monkland Railways were now just a network of branches of the North British Railway, concentrating on serving collieries and ironworks, and the communities that built up around them. The through Bathgate - Airdrie - Coatbridge line became an important secondary line for passengers and freight.
However many of the more remote localities were dependent on the mineral activity they served, and after World War I there was some geological exhaustion as well as competition from cheap foreign imports. This intensified after World War II; meanwhile the North British Railway had become a constituent of the London and North Eastern Railway in 1923 and had then been nationalised into the Scottish Region of British Railways in 1948. By now many of the pits and ironworks were declining substantially or closing, and the mineral branches closed with them.
The Rosehall branch had already closed in 1930, and the Slamannan line, passing through remote and thinly populated territory, closed in 1949. The Cairnhill line closed in the 1950s.
The communities of Airdrie and Coatbridge continued to flourish, enhanced by other economic activity associated with the West of Scotland, but the through line from Airdrie to Bathgate closed to passenger traffic in 1956.
A limited goods service continued on the line until 1 February 1982, but the line then closed completely, except for the short section from Airdrie to Moffat Mills, which remained open for goods traffic; however, this was sporadic.
The Benhar mines, the branch network based on the Westcraigs to Shotts Iron Works branch, closed in 1963, and the North Monkland section closed the following year, together with the Bathgate to Blackston Junction line. The original line to Kirkintilloch closed in 1965 except for a short section to Leckethall Siding, which continued until 1982. The Ballochney section closed in 1966.
Reopening
When the Airdrie to Bathgate section closed to goods traffic, a short stub was left at Airdrie to Moffat Mills. Although officially "open", it was in fact dormant for many years. As suburban passenger travel in Greater Glasgow experienced a revival, a short extension along this line to a Drumgelloch station, on the eastern margin of Airdrie, was electrified and opened in 1989.
The line onward from Drumgelloch to Bathgate was reopened on 12 December 2010 as an electrified railway with a frequent passenger service between Edinburgh and Glasgow. This proved remarkably successful. Difficult weather prevented immediate opening of all the intermediate stations, and Armadale opened on 4 March 2011, followed by a new Drumgelloch station, further east than the earlier one and close to the former Clarkston station site, on 6 March 2011.
Current operations
The largest section of the Monkland Railways network now in operation is the line between Coatbridge and Bathgate; it carries (2015) a well-patronised fifteen-minute-interval passenger service between Helensburgh or Milngavie and Edinburgh.
The north-south line between Gartsherrie and Whifflet carries freight, and the Gartsherrie to Garnqueen section carries a passenger service to Cumbernauld, the remnant of the earlier anomaly where Caledonian express trains used this North British Railway section.
The remainder of the network is closed. The Ballochney inclines in the Airdrie area are still easy to identify, and the moorland area of the Slamannan line is relatively undeveloped, except nearer Airdrie where extensive open-cast mining has obliterated any remaining trace of the railway.
References
Sources
North British Railway
Mining railways
Early Scottish railway companies
Pre-grouping British railway companies
Closed railway lines in Scotland
Beeching closures in Scotland
Railway companies established in 1848
Railway companies disestablished in 1865
Standard gauge railways in Scotland
British companies disestablished in 1865
British companies established in 1848
Coal in Scotland | Monkland Railways | [
"Engineering"
] | 5,461 | [
"Mining equipment",
"Mining railways"
] |
13,574,236 | https://en.wikipedia.org/wiki/Sodium%20tetradecyl%20sulfate | Sodium tetradecyl sulfate (STS), also known by its INCI name, sodium myristyl sulfate, is a common anionic surfactant. The compound consists of the sodium salt of the micelle-forming sulfate ester of tetradecanol. It is a white, water-soluble solid of low toxicity with many practical uses.
Applications
Medicine
It is the active component of the sclerosant drugs Sotradecol and Fibrovein. It is commonly used in the treatment of varicose and spider veins of the leg, during the procedure of sclerotherapy. Being a detergent, it acts on the lipid molecules in the cells of the vein wall, causing inflammatory destruction of the internal lining of the vein and thrombus formation, eventually leading to sclerosis of the vein. It is used in concentrations ranging from 0.1% to 3% for this purpose.
It is occasionally used for the stabilisation of joints that regularly dislocate, particularly in patients with Ehlers-Danlos syndrome.
In the UK, Ireland, Italy, Australia, New Zealand and South Africa, it is sold under the trade-name Fibro-Vein in concentrations of 0.2%, 0.5%, 1.0%, and 3%.
Synthesis
Tetradecyl alcohol is treated with sulfur trioxide followed by neutralization of the resulting pyrosulfuric acid with sodium hydroxide.
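In outline, this is a two-step sulfation and neutralisation; the simplified scheme below glosses over the pyrosulfuric acid intermediate mentioned above:

    CH3(CH2)13OH + SO3 → CH3(CH2)13OSO3H
    CH3(CH2)13OSO3H + NaOH → CH3(CH2)13OSO3Na + H2O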
References
Organic sodium salts
Sulfate esters
Anionic surfactants | Sodium tetradecyl sulfate | [
"Chemistry"
] | 318 | [
"Organic sodium salts",
"Salts"
] |
13,574,246 | https://en.wikipedia.org/wiki/Monoethanolamine%20oleate | Monoethanolamine oleate (ethanolammonium oleate) is an organic compound with the formula [CH3(CH2)7CH=CH(CH2)7CO2][H3NCH2CH2OH]. A colorless oily liquid, it is an example of a protic ionic liquid. It is a salt formed by the reaction between monoethanolamine and oleic acid.
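The salt formation is a simple acid-base proton transfer; writing R for the oleyl chain CH3(CH2)7CH=CH(CH2)7, schematically:

    RCO2H + H2NCH2CH2OH → [RCO2]−[H3NCH2CH2OH]+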
Antivaricose agent
As an antivaricose agent, it is injected topically into varicosities to cause sclerosis (closure) of the abnormal vein. It is indicated for the treatment of patients with esophageal varices that have recently bled, to prevent rebleeding. Ethanolamine is not indicated for the treatment of patients with esophageal varices that have not bled. There is no evidence that treatment of this population decreases the likelihood of bleeding. Sclerotherapy with ethanolamine has no beneficial effect upon portal hypertension, the cause of esophageal varices, so that recanalization and collateralization may occur, necessitating reinjection.
References
Ammonium compounds
Carboxylate anions | Monoethanolamine oleate | [
"Chemistry"
] | 246 | [
"Ammonium compounds",
"Salts"
] |
13,574,275 | https://en.wikipedia.org/wiki/Sodium%20apolate | Sodium apolate (INN), or lyapolate sodium (USAN), is a vasoprotective agent.
References
Organic sodium salts
Vinyl polymers
Sulfonates | Sodium apolate | [
"Chemistry"
] | 36 | [
"Organic sodium salts",
"Salts"
] |
13,574,291 | https://en.wikipedia.org/wiki/Tribenoside | Tribenoside (Glyvenol) is a vasoprotective drug used to treat hemorrhoids. It has mild anti-inflammatory, analgesic, and wound healing properties. Tribenoside stimulates laminin α5 production and laminin-332 deposition to help repair the basement membrane during the wound healing process. It is a mixture of the α- and β-anomers.
Tribenoside has been shown to induce drug hypersensitivity syndrome in association with CMV reactivation.
References
Glucosides
Ethers | Tribenoside | [
"Chemistry"
] | 117 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
13,574,409 | https://en.wikipedia.org/wiki/ATC%20code%20V09 |
V09A Central nervous system
V09AA Technetium (99mTc) compounds
V09AA01 Technetium (99mTc) exametazime
V09AA02 Technetium (99mTc) bicisate
V09AB Iodine (123I) compounds
V09AB01 Iodine iofetamine (123I)
V09AB02 Iodine iolopride (123I)
V09AB03 Iodine ioflupane (123I)
V09AX Other central nervous system diagnostic radiopharmaceuticals
V09AX01 Indium (111In) pentetic acid
V09AX03 Iodine (124I) 2β-carbomethoxy-3β-(4-iodophenyl)-tropane
V09AX04 Flutemetamol (18F)
V09AX05 Florbetapir (18F)
V09AX06 Florbetaben (18F)
V09AX07 Flortaucipir (18F)
V09B Skeleton
V09BA Technetium (99mTc) compounds
V09BA01 Technetium (99mTc) oxidronic acid
V09BA02 Technetium (99mTc) medronic acid
V09BA03 Technetium (99mTc) pyrophosphate
V09BA04 Technetium (99mTc) butedronic acid
V09C Renal system
V09CA Technetium (99mTc) compounds
V09CA01 Technetium (99mTc) pentetic acid
V09CA02 Technetium (99mTc) succimer
V09CA03 Technetium (99mTc) mertiatide
V09CA04 Technetium (99mTc) gluceptate
V09CA05 Technetium (99mTc) gluconate
V09CA06 Technetium (99mTc) ethylenedicysteine
V09CX Other renal system diagnostic radiopharmaceuticals
V09CX01 Sodium iodohippurate (123I)
V09CX02 Sodium iodohippurate (131I)
V09CX03 Sodium iothalamate (125I)
V09CX04 Chromium (51Cr) edetate
V09D Hepatic and reticulo endothelial system
V09DA Technetium (99mTc) compounds
V09DA01 Technetium (99mTc) disofenin
V09DA02 Technetium (99mTc) etifenin
V09DA03 Technetium (99mTc) lidofenin
V09DA04 Technetium (99mTc) mebrofenin
V09DA05 Technetium (99mTc) galtifenin
V09DB Technetium (99mTc), particles and colloids
V09DB01 Technetium (99mTc) nanocolloid
V09DB02 Technetium (99mTc) microcolloid
V09DB03 Technetium (99mTc) millimicrospheres
V09DB04 Technetium (99mTc) tin colloid
V09DB05 Technetium (99mTc) sulfur colloid
V09DB06 Technetium (99mTc) rheniumsulfide colloid
V09DB07 Technetium (99mTc) phytate
V09DX Other hepatic and reticulo endothelial system diagnostic radiopharmaceuticals
V09DX01 Selenium (75Se) tauroselcholic acid
V09E Respiratory system
V09EA Technetium (99mTc) inhalants
V09EA01 Technetium (99mTc) pentetic acid
V09EA02 Technetium (99mTc) technegas
V09EA03 Technetium (99mTc) nanocolloid
V09EB Technetium (99mTc) particles for injection
V09EB01 Technetium (99mTc) macrosalb
V09EB02 Technetium (99mTc) microspheres
V09EX Other respiratory system diagnostic radiopharmaceuticals
V09EX01 Krypton (81mKr) gas
V09EX02 Xenon (127Xe) gas
V09EX03 Xenon (133Xe) gas
V09F Thyroid
V09FX Various thyroid diagnostic radiopharmaceuticals
V09FX01 Technetium (99mTc) pertechnetate
V09FX02 Sodium iodide (123I)
V09FX03 Sodium iodide (131I)
V09FX04 Sodium iodide (124I)
V09G Cardiovascular system
V09GA Technetium (99mTc) compounds
V09GA01 Technetium (99mTc) sestamibi
V09GA02 Technetium (99mTc) tetrofosmin
V09GA03 Technetium (99mTc) teboroxime
V09GA04 Technetium (99mTc) human albumin
V09GA05 Technetium (99mTc) furifosmin
V09GA06 Technetium (99mTc) stannous agent labelled cells
V09GA07 Technetium (99mTc) apcitide
V09GB Iodine (125I) compounds
V09GB01 Fibrinogen (125I)
V09GB02 Iodine (125I) human albumin
V09GX Other cardiovascular system diagnostic radiopharmaceuticals
V09GX01 Thallium (201Tl) chloride
V09GX02 Indium (111In) imciromab
V09GX03 Chromium (51Cr) chromate labelled cells
V09GX04 Rubidium (82Rb) chloride
V09GX05 Ammonia (13N)
V09H Inflammation and infection detection
V09HA Technetium (99mTc) compounds
V09HA01 Technetium (99mTc) human immunoglobulin
V09HA02 Technetium (99mTc) exametazime labelled cells
V09HA03 Technetium (99mTc) antigranulocyte antibody
V09HA04 Technetium (99mTc) sulesomab
V09HB Indium (111In) compounds
V09HB01 Indium (111In) oxinate labelled cells
V09HB02 Indium (111In) tropolonate labelled cells
V09HX Other diagnostic radiopharmaceuticals for inflammation and infection detection
V09HX01 Gallium (67Ga) citrate
V09I Tumour detection
V09IA Technetium (99mTc) compounds
V09IA01 Technetium (99mTc) antiCarcinoEmbryonicAntigen antibody
V09IA02 Technetium (99mTc) antimelanoma antibody
V09IA03 Technetium (99mTc) pentavalent succimer
V09IA04 Technetium (99mTc) votumumab
V09IA05 Technetium (99mTc) depreotide
V09IA06 Technetium (99mTc) arcitumomab
V09IA07 Technetium (99mTc) hynic-octreotide
V09IA08 Technetium (99mTc) etarfolatide
V09IA09 Technetium (99mTc) tilmanocept
V09IA10 Technetium (99mTc) trofolastat chloride
V09IB Indium (111In) compounds
V09IB01 Indium (111In) pentetreotide
V09IB02 Indium (111In) satumomab pendetide
V09IB03 Indium (111In) antiovariumcarcinoma antibody
V09IB04 Indium (111In) capromab pendetide
V09IX Other diagnostic radiopharmaceuticals for tumour detection
V09IX01 Iobenguane (123I)
V09IX02 Iobenguane (131I)
V09IX03 Iodine (125I) CC49-monoclonal antibody
V09IX04 Fludeoxyglucose (18F)
V09IX05 Fluorodopa (18F)
V09IX06 Sodium fluoride (18F)
V09IX07 Fluorocholine (18F)
V09IX08 Fluoroethylcholine (18F)
V09IX09 Gallium (68Ga) edotreotide
V09IX10 Fluoroethyl-L-tyrosine (18F)
V09IX11 Fluoroestradiol (18F)
V09IX12 Fluciclovine (18F)
V09IX13 Methionine (11C)
V09IX14 Gallium (68Ga) gozetotide
V09IX15 Copper (64Cu) dotatate
V09IX16 Piflufolastat (18F)
V09IX17 PSMA-1007 (18F)
V09IX18 Flotufolastat (18F)
V09X Other diagnostic radiopharmaceuticals
V09XA Iodine (131I) compounds
V09XA01 Iodine (131I) norcholesterol
V09XA02 Iodocholesterol (131I)
V09XA03 Iodine (131I) human albumin
V09XX Various diagnostic radiopharmaceuticals
V09XX01 Cobalt (57Co) cyanocobalamine
V09XX02 Cobalt (58Co) cyanocobalamine
V09XX03 Selenium (75Se) norcholesterol
V09XX04 Ferric (59Fe) citrate
References
V09
Medicinal radiochemistry | ATC code V09 | [
"Chemistry"
] | 2,299 | [
"Medicinal radiochemistry",
"Medicinal chemistry"
] |
13,574,492 | https://en.wikipedia.org/wiki/Roof%20lantern | A roof lantern is a daylighting architectural element. Architectural lanterns are part of a larger roof and provide natural light into the space or room below. In contemporary use it is an architectural skylight structure.
A lantern roof will generally mean just the roof of a lantern structure in the West, but has a special meaning in Indian architecture (mostly Buddhist, and stretching into Central Asia and eastern China), where it means a dome-like roof raised by sets of four straight beams placed above each other, "arranged in diminishing squares", and rotated with each set. Normally such a "lantern" is enclosed and provides no light at all.
The term roof top lantern is sometimes used to describe the lamps on roofs of taxis in Japan, designed to reflect the cultural heritage of Japanese paper lanterns.
History
The glazed lantern was developed during the Middle Ages, one notable medieval example being that atop the 14th-century Octagon Tower at Ely Cathedral in England. Roof lanterns of masonry and glass were used in Renaissance architecture, such as in principal cathedrals. In 16th-century France and Italy, they came into use in orangeries, an early form of conservatory with tall windows and a glazed roof section for wintering citrus trees and other plants in non-temperate climates.
Post-Renaissance roof lanterns were made of timber and glass and were often prone to leaking. Initially wood-framed in the 18th and 19th centuries, skylights became even more popular in metal construction with the advent of sheet-metal shops during the Victorian era. Virtually every urban row house of the late-19th and early-20th centuries relied upon a metal-framed skylight to illuminate its enclosed stairwell. More elaborate dwellings of the era showed a fondness for the roof lantern, in which the humble ceiling-window design of the skylight is elaborated into a miniature glass-paneled conservatory-style roof cupola or tower.
Present day
Modern lanterns benefit from advances in glazing and sealing techniques, plus the development of high performance insulated glass and sealants, which reduce energy loss and provide water-tightness in the same manner as conventional skylights. Typically, roof lanterns are constructed using wood, UPVC or aluminium, or a combination of those materials.
They serve as an architectural feature, distinguished from commercially manufactured skylights by their custom design, providing unique views to the outdoors. Roof lanterns for residential homes are usually constructed using a combination of triangular and trapezoidal segments, fitted within a UPVC or aluminium frame. Traditional architectural styles characterise most roof lanterns in the UK. In the U.S., where the term 'custom' skylight is often used, modern styles of roof lanterns are also common in the building vernacular.
Gallery
See also
Chhatri
Conservatory (greenhouse)
Cupola
Daylighting
Passive daylighting
Tholobate, a drum under a dome
References
Britannica Online Encyclopedia: Lantern (architecture)
Energy-saving lighting
Roofs
Windows
Architectural elements
Garden features
Solar architecture | Roof lantern | [
"Technology",
"Engineering"
] | 598 | [
"Structural engineering",
"Building engineering",
"Structural system",
"Architectural elements",
"Roofs",
"Components",
"Architecture"
] |
13,574,555 | https://en.wikipedia.org/wiki/Bufexamac | Bufexamac is a drug used as an anti-inflammatory agent on the skin, as well as rectally. Common brand names include Paraderm and Parfenac. It was withdrawn in Europe and Australia because of allergic reactions.
Indications
Ointments and lotions containing bufexamac are used for the treatment of subacute and chronic eczema of the skin, including atopic eczema, as well as sunburn and other minor burns, and itching. Suppositories containing bufexamac in combination with local anaesthetics are used against haemorrhoids.
Pharmacology
Bufexamac is thought to act by inhibiting the enzyme cyclooxygenase, which would make it a non-steroidal anti-inflammatory drug. Evidence on the mechanism of action is scarce.
Furthermore, bufexamac was identified as a specific inhibitor of class IIB histone deacetylases (HDAC6 and HDAC10).
Side effects
Bufexamac can cause severe contact dermatitis which is often hard to distinguish from the initial condition. As a consequence, the European Medicines Agency recommended withdrawal of its marketing approval in April 2010.
References
Anti-inflammatory agents
Hydroxamic acids
Phenol ethers
Withdrawn drugs | Bufexamac | [
"Chemistry"
] | 266 | [
"Withdrawn drugs",
"Functional groups",
"Drug safety",
"Organic compounds",
"Hydroxamic acids"
] |
13,574,563 | https://en.wikipedia.org/wiki/Etofenamate | Etofenamate is a nonsteroidal anti-inflammatory drug (NSAID) used for the treatment of joint and muscular pain. It is available for topical application as a cream, a gel or as a spray.
Etofenamate is acutely toxic if swallowed; it is also very toxic to aquatic life, with long lasting effects.
References
Nonsteroidal anti-inflammatory drugs
Trifluoromethyl compounds
Anthranilates
Ethers
Primary alcohols | Etofenamate | [
"Chemistry"
] | 97 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
174,241 | https://en.wikipedia.org/wiki/Terracotta | Terracotta, also known as terra cotta or terra-cotta (; ; ), is a clay-based non-vitreous ceramic fired at relatively low temperatures. It is therefore a term used for earthenware objects of certain types, as set out below.
Usage and definitions of the term vary, such as:
In art, pottery, applied art, and craft, "terracotta" is a term often used for red-coloured earthenware sculptures or functional articles such as flower pots, water and waste water pipes, and tableware.
In archaeology and art history, "terracotta" is often used to describe objects such as figurines and loom weights not made on a potter's wheel, with vessels and other objects made on a wheel from the same material referred to as earthenware; the choice of term depends on the type of object rather than the material or shaping technique.
Terracotta is also used to refer to the natural brownish-orange color of most terracotta.
In architecture, the term encompasses many building materials made of fired ceramic for exterior covering. Architectural terracotta can also refer to ornate decorative ceramic elements such as antefixes and revetments, which had a large impact on the appearance of temples and other buildings in the classical architecture of Europe, as well as in the Ancient Near East.
This article covers the sense of terracotta as a medium in sculpture, as in the Terracotta Army and Greek terracotta figurines, and architectural decoration. Neither pottery such as utilitarian earthenware nor East Asian and European sculpture in porcelain is covered.
In art history
Asia and the Middle East
Terracotta female figurines were uncovered by archaeologists in excavations of Mohenjo-daro, Pakistan (3000–1500 BCE). Along with phallus-shaped stones, these suggest some sort of fertility cult. The Burney Relief is an outstanding terracotta plaque from Ancient Mesopotamia of about 1950 BCE. In Mesoamerica, the great majority of Olmec figurines were in terracotta. Many ushabti mortuary statuettes were also made of terracotta in Ancient Egypt.
India
Terracotta has been a medium for art since the Harappan civilization, although the techniques used differed in each period. In Mauryan times, the figures were mainly mother goddesses, indicating a fertility cult. Moulds were used for the face, whereas the body was hand-modelled. In Shunga times, a single mould was used to make the entire figure, and depending upon the baking time the colour varied from red to light orange. The Satavahanas used two different moulds, one for the front and the other for the back; a piece of clay was pressed into each mould and the two halves were joined together, making some artefacts hollow within. Some Satavahana terracotta artefacts also seem to have a thin strip of clay joining the two moulds. This technique may have been imported from the Romans and is seen nowhere else in the country.
Contemporary centres for terracotta figurines include West Bengal, Bihar, Jharkhand, Rajasthan and Tamil Nadu. In Bishnupur, West Bengal, the terracotta pattern–panels on the temples are known for their intricate details. The Bankura Horse is also very famous and belongs to the Bengal school of terracotta. Madhya Pradesh is one of the most prominent production centres of terracotta art today. The tribes of the Bastar have a rich tradition. They make intricate designs and statues of animals and birds. Hand-painted clay and terracotta products are produced in Gujarat. The Aiyanar cult in Tamil Nadu is associated with life-size terracotta statues.
Traditional terracotta sculptures, mainly religious, also continue to be made. The demand for this craft is seasonal, reaching its peak during the harvest festival, when new pottery and votive idols are required. During the rest of the year, the makers rely on agriculture or some other means of income. The designs are often redundant as crafters apply similar reliefs and techniques for different subjects. Customers suggest subjects and uses for each piece.
To sustain the legacy, the Indian Government has established the Sanskriti Museum of Indian Terracotta in New Delhi. The initiative encourages ongoing work in this medium through displays of terracotta from different regions and periods of the subcontinent. In 2010, the India Post Service issued a stamp commemorating the craft which shows a terracotta doll from the craft museum.
China
Chinese sculpture made great use of terracotta, with and without glazing and color, from a very early date. The famous Terracotta Army of Emperor Qin Shi Huang, 210–209 BCE, was somewhat untypical, and two thousand years ago reliefs were more common, in tombs and elsewhere. Later Buddhist figures were often made in painted and glazed terracotta, with the Yixian glazed pottery luohans, probably of 1150–1250, now in various Western museums, among the most prominent examples. Brick-built tombs from the Han dynasty were often finished on the interior wall with bricks decorated on one face; the techniques included molded reliefs. Later tombs contained many figures of protective spirits and animals and servants for the afterlife, including the famous horses of the Tang dynasty; as an arbitrary matter of terminology these tend not to be referred to as terracottas.
Africa
Precolonial West African sculpture also made extensive use of terracotta. The regions most recognized for producing terracotta art in that part of the world include the Nok culture of central and north-central Nigeria, the Ife-Benin cultural axis in western and southern Nigeria (also noted for its exceptionally naturalistic sculpture), and the Igbo culture area of eastern Nigeria, which excelled in terracotta pottery. These related, but separate, traditions also gave birth to elaborate schools of bronze and brass sculpture in the area.
Europe
The Ancient Greeks' Tanagra figurines were mass-produced mold-cast and fired terracotta figurines, that seem to have been widely affordable in the Hellenistic period, and often purely decorative in function. They were part of a wide range of Greek terracotta figurines, which included larger and higher-quality works such as the Aphrodite Heyl; the Romans too made great numbers of small figurines, which were often used in a religious context as cult statues or temple decorations. Etruscan art often used terracotta in preference to stone even for larger statues, such as the near life-size Apollo of Veii and the Sarcophagus of the Spouses. Campana reliefs are Ancient Roman terracotta reliefs, originally mostly used to make friezes for the outside of buildings, as a cheaper substitute for stone.
European medieval art made little use of terracotta sculpture, until the late 14th century, when it became used in advanced International Gothic workshops in parts of Germany. The Virgin illustrated at the start of the article from Bohemia is the unique example known from there. A few decades later, there was a revival in the Italian Renaissance, inspired by excavated classical terracottas as well as the German examples, which gradually spread to the rest of Europe. In Florence, Luca della Robbia (1399/1400–1482) was a sculptor who founded a family dynasty specializing in glazed and painted terracotta, especially large roundels which were used to decorate the exterior of churches and other buildings. These used the same techniques as contemporary maiolica and other tin-glazed pottery. Other sculptors included Pietro Torrigiano (1472–1528), who produced statues, and in England busts of the Tudor royal family. The unglazed busts of the Roman Emperors adorning Hampton Court Palace, by Giovanni da Maiano, 1521, were another example of Italian work in England. They were originally painted but this has now been lost from weathering.
In the 18th-century unglazed terracotta, which had long been used for preliminary clay models or maquettes that were then fired, became fashionable as a material for small sculptures including portrait busts. It was much easier to work than carved materials, and allowed a more spontaneous approach by the artist. Claude Michel (1738–1814), known as Clodion, was an influential pioneer in France. John Michael Rysbrack (1694–1770), a Flemish portrait sculptor working in England, sold his terracotta modelli for larger works in stone, and produced busts only in terracotta. In the next century the French sculptor Albert-Ernest Carrier-Belleuse made many terracotta pieces, but possibly the most famous is The Abduction of Hippodameia depicting the Greek mythological scene of a centaur kidnapping Hippodameia on her wedding day.
Architecture
History
Architectural terracotta is a broad term encompassing a wide ranging variety of clay-based architectural elements such as wall reliefs, decorative roof elements, and architectural sculpture.
Many ancient and traditional roofing styles included more elaborate sculptural elements than the plain roof tiles, such as Chinese Imperial roof decoration and the antefix of western classical architecture. In India West Bengal made a speciality of terracotta temples, with the sculpted decoration from the same material as the main brick construction.
Architectural terracotta experienced a resurgence in western architecture starting in the mid-19th century. Starting in Europe, architects designed elaborate buildings relying on terracotta detailing for their facades. James Taylor was one of the first producers of architectural terracotta to find success in the United States, using his experience manufacturing the material in England to guide his work in North America.
The Great Chicago Fire of 1871 led to increased demand for fireproof materials in urban settings, and helped drive the following push for architectural terracotta throughout North America. The material remained popular through the early 1900s, with its versatility allowing it to support a variety of architectural styles such as Renaissance revival, neo-Gothic, and Art deco.
Emerging trends in Modernist architecture favoring the use of concrete and glass significantly reduced demand for architectural terracotta starting in the 1930s. In the time since, the material has experienced a resurgence of interest, favored for work in postmodern and revivalist architectural styles.
Differences from non-architectural terracotta
Unlike art and pottery terracotta, clays used for architectural terracotta can range from dark-bodied stonewares to light-bodied whitewares, depending on what is required for the particular application.
The clays are usually fired to or near vitrification in order to survive continued exposure to harsh outdoor conditions such as freeze-thaw cycles and salt intrusion. Contrary to popular belief, glazing does not seal terracotta from water penetration and a non-porous clay body is necessary to prevent failure from these issues.
Production
Prior to firing, terracotta clays are easy to shape. Shaping techniques include throwing and slip casting, among others.
After drying, it is placed in a kiln or, more traditionally, in a pit covered with combustible material, then fired. The typical firing temperature is relatively low by ceramic standards, and may be lower still in historic and archaeological examples. During this process, the iron oxides in the body react with oxygen, often resulting in the reddish colour known as terracotta. However, the colour can vary widely, including shades of yellow, orange, buff, red, pink, grey or brown.
A final method is to carve fired bricks or other terracotta shapes. This technique is less common, but examples can be found in the architecture of Bengal on Hindu temples and mosques.
Properties
Terracotta is not watertight, but its porousness decreases when the body is surface-burnished before firing. Glazes can be used to decrease permeability and hence increase watertightness.
Unglazed terracotta is suitable for use below ground to carry pressurized water (an archaic use), for garden pots and irrigation or building decoration in many environments, and for oil containers, oil lamps, or ovens. Most other uses require the material to be glazed, such as tableware, sanitary piping, or building decorations built for freezing environments.
Terracotta will also ring if lightly struck, as long as it is not cracked.
Painted (polychrome) terracotta is typically first covered with a thin coat of gesso, then painted. It is widely used, but only suitable for indoor positions and much less durable than fired colors in or under a ceramic glaze. Terracotta sculptures in the West were rarely left in their "raw" fired state until the 18th century.
Advantages in sculpture
As compared to bronze sculpture, terracotta uses a far simpler and quicker process for creating the finished work with much lower material costs. The easier task of modelling, typically with a limited range of knives and wooden shaping tools, but mainly using the fingers, allows the artist to take a more free and flexible approach. Small details that might be impractical to carve in stone, of hair or costume for example, can easily be accomplished in terracotta, and drapery can sometimes be made up of thin sheets of clay that make it much easier to achieve a realistic effect.
Reusable mold-making techniques may be used for production of many identical pieces. Compared to marble sculpture and other stonework, the finished product is far lighter and may be further painted and glazed to produce objects with color or durable simulations of metal patina. Robust durable works for outdoor use require greater thickness and so will be heavier, with more care needed in the drying of the unfinished piece to prevent cracking as the material shrinks. Structural considerations are similar to those required for stone sculpture; there is a limit on the stress that can be imposed on terracotta, and terracotta statues of unsupported standing figures are limited to well under life-size unless extra structural support is added. This is also because large figures are extremely difficult to fire, and surviving examples often show sagging or cracks. The Yixian figures were fired in several pieces, and have iron rods inside to hold the structure together.
Gallery
See also
Architectural terracotta
Cittacotte
John Marriott Blashfield, terracotta manufacturer
Kulhar – traditional terracotta cups
Majapahit Terracotta
Redware
Structural clay tile
Tile Heritage Foundation
Saltillo Terracotta Tile
Bishnupur, Bankura
Panchmura
Bankura horse
Notes
References
Draper, James David and Scherf, Guilhem (eds.), Playing with Fire: European Terracotta Models, 1740-1840, 2003, Metropolitan Museum of Art, , 9781588390998, fully available on Google books
External links
Article on terracotta in Victorian and Edwardian Terracotta Buildings
Bibliography, Smithsonian Institution, Ceramic Tiles and Architectural Terracotta
Friends of Terra Cotta, non-profit foundation to promote education and preservation of architectural Terracotta
Tiles and Architectural Ceramics Society (UK)
Guidance on Matching Terracotta Practical guidance on the repair and replacement of historic terracotta focusing on the difficulties associated with trying to match new to old
Throwing a terracotta pot on a wheel
Slipcasting terracotta
Fogg Museum exhibition of “European Terra-Cotta Sculpture from the Arthur M. Sackler Collections”
Ceramic materials
Pottery
Sculpture materials | Terracotta | [
"Engineering"
] | 3,156 | [
"Ceramic engineering",
"Ceramic materials"
] |
174,247 | https://en.wikipedia.org/wiki/Traceability | Traceability is the capability to trace something. In some cases, it is interpreted as the ability to verify the history, location, or application of an item by means of documented recorded identification.
Other common definitions include the capability (and implementation) of keeping track of a given set or type of information to a given degree, or the ability to chronologically interrelate uniquely identifiable entities in a way that is verifiable.
Traceability is applicable to measurement, supply chain, software development, healthcare and security.
Measurement
The term measurement traceability or metrological traceability is used to refer to an unbroken chain of comparisons relating an instrument's measurements to a known standard. Calibration to a traceable standard can be used to determine an instrument's bias, precision, and accuracy. It may also be used to show a chain of custody, from the current interpretation of evidence back to the actual evidence in a legal context, or the history of handling of any information.
In many countries, national standards for weights and measures are maintained by a National Metrological Institute (NMI) which provides the highest level of standards for the calibration / measurement traceability infrastructure in that country. Examples of government agencies include the National Physical Laboratory, UK (NPL), the National Institute of Standards and Technology (NIST) in the USA, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, the Istituto Nazionale di Ricerca Metrologica (INRiM) in Italy, and the National Research Council of Canada (NRC). As defined by NIST, "Traceability of measurement requires the establishment of an unbroken chain of comparisons to stated references each with a stated uncertainty."
A clock providing traceable time is synchronised to a time standard such as Coordinated Universal Time or International Atomic Time. The Global Positioning System is a source of traceable time.
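As a rough illustration of the "unbroken chain of comparisons" above, the minimal sketch below combines the stated standard uncertainty of each calibration step into an overall figure. It assumes independent error sources and uses the root-sum-of-squares combination familiar from the GUM approach; the chain, its steps, and all numbers are invented for the example.

```python
import math

def combined_uncertainty(step_uncertainties):
    """Combine the stated standard uncertainties of each comparison in a
    calibration chain, assuming the error sources are independent
    (root-sum-of-squares combination)."""
    return math.sqrt(sum(u ** 2 for u in step_uncertainties))

# Hypothetical chain: national standard -> reference lab -> working standard -> shop instrument.
# Each value is the standard uncertainty contributed by that comparison, in mm.
chain_mm = [0.0005, 0.002, 0.008]

u_c = combined_uncertainty(chain_mm)
print(f"Combined standard uncertainty: {u_c:.4f} mm")
print(f"Expanded uncertainty (k=2, ~95% coverage): {2 * u_c:.4f} mm")
```

Note that the dominant term is the last, least accurate comparison, which is why each link in the chain must state its own uncertainty for the overall claim to mean anything.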
Supply chain
Within a product's supply chain, traceability may be both a regulatory and an ethical or environmental issue. Traceability is increasingly becoming a core criterion for sustainability efforts related to supply chains wherein knowing the producer, workers and other links stands as a necessary factor that underlies credible claims of social, economic, or environmental impacts. Environmentally friendly retailers may choose to make information regarding their supply chain freely available to customers, illustrating the fact that the products they sell are manufactured in factories with safe working conditions, by workers that earn a fair wage, using methods that do not damage the environment.
Materials
In regard to materials, traceability refers to the capability to associate a finished part with destructive test results performed on material from the same ingot with the same heat treatment, or to associate a finished part with results of a test performed on a sample from the same melt identified by the unique lot number of the material. Destructive tests typically include chemical composition and mechanical strength tests. A heat number is usually marked on the part or raw material which identifies the ingot it came from, and a lot number may identify the group of parts that experienced the same heat treatment (i.e., were in the same oven at the same time). Material traceability is important to the aerospace, nuclear, and process industry because they frequently make use of high strength materials that look identical to commercial low strength versions. In these industries, a part made of the wrong material is called "counterfeit", even if the substitution was accidental.
This same practice extends throughout industries using military hardware, including the fastener industry.
Logistics
In logistics, traceability refers to the capability for tracing goods along the distribution chain on a batch number or series number basis. Traceability is an important aspect for example in the automotive industry, where it makes recalls possible, or in the food industry where it contributes to food safety.
The international standards organization EPCglobal under GS1 has ratified the EPCglobal Network standards (especially the EPC Information Services EPCIS standard) which codify the syntax and semantics for supply chain events and the secure method for selectively sharing supply chain events with trading partners. These standards for traceability have been used in successful deployments in many industries and there are now a wide range of products that are certified as being compatible with these standards.
Food processing
In food processing (meat processing, fresh produce processing), the term traceability refers to recording, by means of barcodes, RFID tags and other tracking media, all movement of product and all steps within the production process. One of the key reasons this is so critical is in instances where an issue of contamination arises and a recall is required. Where traceability has been closely adhered to, it is possible to identify, by precise date/time and exact location, which goods must be recalled and which are safe, potentially saving millions of dollars in the recall process. Traceability within the food processing industry is also utilised to identify key high-production and quality areas of a business, versus those of low return, and points where the production process may be improved.
In food processing software, traceability systems imply the use of a unique piece of data (e.g., order date/time or a serialized sequence number, generally through the use of a barcode / RFID) which can be traced through the entire production flow, linking all sections of the business, including suppliers and future sales through the supply chain. Messages and files at any point in the system can then be audited for correctness and completeness, using the traceability software to find the particular transaction and/or product within the supply chain.
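The recall scenario above is, at bottom, a graph walk: each input lot is linked to the lots produced from it, and a contamination event triggers a forward trace. The sketch below shows that idea in miniature; the lot identifiers and the flat dictionary representation are invented for the example, not taken from any particular traceability product.

```python
from collections import deque

# Hypothetical lot genealogy: each lot maps to the lots produced from it.
lot_children = {
    "MILK-0412": ["CHEESE-0415", "YOGHURT-0416"],
    "CHEESE-0415": ["PIZZA-0420"],
    "YOGHURT-0416": [],
    "PIZZA-0420": [],
}

def affected_lots(contaminated_lot):
    """Breadth-first forward trace: every lot downstream of the
    contaminated one must be considered for recall."""
    seen, queue = set(), deque([contaminated_lot])
    while queue:
        lot = queue.popleft()
        if lot not in seen:
            seen.add(lot)
            queue.extend(lot_children.get(lot, []))
    return seen

print(sorted(affected_lots("MILK-0412")))
# ['CHEESE-0415', 'MILK-0412', 'PIZZA-0420', 'YOGHURT-0416']
```

Running the same walk over the reversed graph gives the backward trace, from a finished product to its raw-material lots.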
In food systems, ISO 22005, as part of the ISO 22000 family of standards, has been developed to define the principles for food traceability and specifies the basic requirements for the design and implementation of a feed and food traceability system. It can be applied by an organization operating at any step in the feed and food chain.
The European Union's General Food Law came into force in 2002, making traceability compulsory for food and feed operators and requiring those businesses to implement traceability systems. The EU introduced its Trade Control and Expert System, or TRACES, in April 2004. The system provides a central database to track movement of animals within the EU and from third countries.
Australia has its National Livestock Identification System to keep track of livestock from birth to slaughterhouse.
India has started taking initiatives for setting up traceability systems at government and corporate levels. Grapenet, an initiative by the Agricultural and Processed Food Products Export Development Authority (APEDA), Ministry of Commerce, Government of India, is an example in this direction. Grapenet is an internet-based traceability software system for monitoring fresh grapes exported from India to the European Union. Grapenet is a first-of-its-kind initiative in India that has put in place an end-to-end system for monitoring pesticide residue, achieving product standardization and facilitating tracing back from pallets to the farm of the Indian grower, through the various stages of sampling, testing, certification and packing. Grapenet won the National Award (Gold) among the best e-Governance initiatives undertaken in India in 2007.
The Directorate General of Foreign Trade (DGFT), Government of India, through its notification dated 4 February 2009 relating to amendment of the Foreign Trade Policy (RE2008), has mandated that export to the European Union is permitted subject to registration with APEDA, thereby making Grapenet mandatory for all exports of fresh grapes from India to Europe.
Uruguay has also designed a system called "Traceability & Electronic Information System of the Beef Industry".
Forest products
Within the context of supporting legal and sustainable forest supply chains, traceability has emerged in the last decade as a new tool to verify claims and assure buyers about the source of their materials. Mostly led out of Europe, and targeting countries where illegal logging has been a key problem (FLEGT countries), timber tracking is now part of daily business for many enterprises and jurisdictions. Full traceability offers advantages for multiple partners along the supply chain beyond certification systems, including:
Mechanism to comply with local and international policies and regulations.
Reducing the risk of illegal or non-compliant material entering the supply chains.
Providing coordination between authorities and relevant bodies.
Allowing automatic reconciliation of batches and volumes available.
Offering a method of stock control and monitoring.
Triggering real-time alerts of non-compliance.
Reducing likelihood of recording errors.
Improving effectiveness and efficiency.
Increasing transparency.
Promoting company integrity.
A number of timber tracking companies are in operation to service global demand.
Enhanced traceability aims to ensure that supply chain data is accurate from the forest to the point of export. Techniques now exist to predict the geographical provenance of wood, contributing to the fight against illegal logging.
Systems and software development
In systems and software development, the term traceability (or requirements traceability) refers to the ability to link product requirements back to stakeholders' rationales and forward to corresponding design artifacts, code, and test cases. Traceability supports numerous software engineering activities such as change impact analysis, compliance verification or traceback of code, regression test selection, and requirements validation. It is usually accomplished in the form of a matrix created for the verification and validation of the project. Unfortunately, the practice of constructing and maintaining a requirements trace matrix (RTM) can be very arduous and over time the traces tend to erode into an inaccurate state unless date/time stamped. Alternate automated approaches for generating traces using information retrieval methods have been developed.
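A requirements trace matrix can be as simple as a mapping from requirement identifiers to the artifacts that verify them, checked in both directions. The minimal sketch below, with invented identifiers, flags requirements with no covering test (a forward-trace gap) and tests that trace back to no requirement (a backward-trace gap).

```python
# Hypothetical trace links: requirement id -> test cases that verify it.
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],          # not yet verified by any test
}
all_tests = {"TC-101", "TC-102", "TC-103", "TC-104"}

# Forward trace: every requirement should be covered by at least one test.
untraced_requirements = [req for req, tests in rtm.items() if not tests]

# Backward trace: every test should verify at least one requirement.
linked_tests = {tc for tests in rtm.values() for tc in tests}
orphan_tests = all_tests - linked_tests

print("Requirements without tests:", untraced_requirements)  # ['REQ-003']
print("Tests without requirements:", sorted(orphan_tests))   # ['TC-104']
```

Automated checks of this kind are one way to slow the erosion of trace accuracy that the paragraph above describes.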
In transaction processing software, traceability implies use of a unique piece of data (e.g., order date/time or a serialized sequence number) which can be traced through the entire software flow of all relevant application programs. Messages and files at any point in the system can then be audited for correctness and completeness, using the traceability key to find the particular transaction. This is also sometimes referred to as the transaction footprint.
Health care
Patient safety during healthcare service plays an important role in preventing delayed recovery and even mortality, and in improving the quality of life of citizens; it is considered an indicator of the quality of health services. Maintaining patient safety is a complex task and involves factors inherent to the environment and to human actions. New technologies facilitate tools for the traceability of patients and medications. This is particularly relevant for drugs that are considered high-risk and high-cost.
Recent research in the healthcare industry emphasizes the significant impact of Blockchain Technology (BCT) on improving the performance of healthcare supply chain management. It highlights BCT's role in enhancing transparency, data immutability, and efficient management, leading to better cooperation among stakeholders and effective risk mitigation in healthcare services.
The World Health Organization has recognized the importance of traceability for medical products of human origin (MPHO) and urged member states "to encourage the implementation of globally consistent coding systems to facilitate national and international traceability".
Security and crime-fighting
To prevent theft, and to assist in locating stolen objects, goods may be marked indelibly or undetectably so that they may be determined to be stolen, and in some cases identified. For example, stolen banknotes are sometimes marked with indelible dye to show that they are stolen, and can be identified by their unique serial numbers. In a 2016 pilot scheme, announcing that cash machines were fitted with sprayers of SmartWater, an invisible gel detectable for years that marks thieves and their clothing when a machine is broken into or tampered with, was found to reduce theft by 90%.
See also
Provenance
References
Majcen N., Taylor P. (Editors), Practical examples on traceability, measurement uncertainty and validation in chemistry, Vol 1; , 2010.
External links
National Institute of Standards and Technology
NIST Policy on Traceability
"Traceability" from the Global Legal Information Network Subject Term Index
Video which explains the relationship between Calibration - Traceability - Accreditation with flow measuring devices
Lūg Healthcare Technology, hospital traceability solutions: technology tools in the health sector to control and improve the efficiency of critical processes
Barcodes
Radio-frequency identification
Wireless locating
Software engineering
Systems engineering
Forest certification
Pharmaceuticals policy
"Technology",
"Engineering"
] | 2,455 | [
"Systems engineering",
"Radio electronics",
"Wireless locating",
"Computer engineering",
"Software engineering",
"Information technology",
"Radio-frequency identification"
] |
174,276 | https://en.wikipedia.org/wiki/Fylfot | The fylfot or fylfot cross ( ) and its mirror image, the gammadion, are types of swastika associated with medieval Anglo-Saxon culture. It is a cross with perpendicular extensions, usually at 90° or close angles, radiating in the same direction. However at least in modern heraldry texts, such as Friar and Woodcock & Robinson (see ) the fylfot differs somewhat from the archetypal form of the swastika: always upright and typically with truncated limbs, as shown in the figure at right.
Etymology
The most commonly cited etymology is that it comes from the notion, common among 19th-century antiquarians but based on only a single manuscript of around 1500, that the device was used to fill empty space at the foot of stained-glass windows in medieval churches. This etymology is often cited in modern dictionaries (such as the Collins English Dictionary and Merriam-Webster Online).
History
The fylfot, together with its sister figures, the gammadion and the swastika, has been found in a great variety of contexts over the centuries. It has occurred in both secular and religious contexts in the British Isles, elsewhere in Europe, in Asia Minor and in Africa.
The gammadion is associated more with Byzantium, Rome and Graeco-Roman culture on the one hand, whereas the fylfot is associated more with Celtic and Anglo-Saxon culture on the other. Although the gammadion is very similar to the fylfot in appearance, it is thought to have originated from the conjunction of four capital gammas (Γ, the third letter of the Greek alphabet), the similarity between the two symbols being coincidental.
Both of these swastika-like crosses may have been indigenous to the British Isles before the Roman invasion. Certainly they were in evidence a thousand years earlier but these may have been largely imports. They were certainly substantially in evidence during the Romano-British period with widespread examples of the duplicated Greek fret motif appearing on mosaics. After the withdrawal of the Romans in the early 5th century there followed the Anglo-Saxon and Jutish migrations.
The fylfot is known to have been very popular amongst these incoming tribes from Northern Europe, as it is found on artefacts such as brooches, sword hilts and funerary urns. Although the findings at Sutton Hoo are most instructive about the style of lordly Anglo-Saxon burials, the fylfot or gammadion on the silver dish unearthed there clearly had an Eastern provenance.
The fylfot was widely adopted in the early Christian centuries. It is found extensively in the Roman catacombs. An example of its usage is to be found in the porch of the parish church of Great Canfield, Essex, England. As the parish guide states, the fylfot or gammadion can be traced back to the Roman catacombs where it appears in both Christian and pagan contexts. More recently it has been found on grave-slabs in Scotland and Ireland. A particularly interesting example was found in Barhobble, Wigtownshire in Scotland.
Gospel books also contain examples of this form of the Christian cross. The most notable examples are probably the Book of Kells and the Lindisfarne Gospels. An example of this decoration occurs on the Ardagh Chalice.
From the early 14th century on, the fylfot was often used to adorn Eucharistic robes. During that period it appeared on the monumental brasses that preserved the memory of those priests thus attired. They are mostly to be found in East Anglia and the Home Counties.
Probably its most conspicuous usage has been its incorporation in stained glass windows notably in Cambridge and Edinburgh. In Cambridge it is found in the baptismal window of the Church of the Holy Sepulchre, together with other allied Christian symbols, originating in the 19th century. In Scotland, it is found in a window in the Scottish National War Memorial in Edinburgh. The work was undertaken by Douglas Strachan and installed during the 1920s. He was also responsible for a window in the chapel of Westminster College, Cambridge. A similar usage is to be found in the Central Congregational Church in Providence, Rhode Island, USA, installed in 1893.
The fylfot is sometimes found on church bells in England. It was adopted by the Heathcote family in Derbyshire as part of their iconographic tradition in the 16th and 17th centuries. This is probably an example where pagan and Christian influence both have a part to play as the fylfot was amongst other things the symbol of Thor, the Norse god of thunder and its use on bells suggests it was linked to the dispelling of thunder in popular mythology.
In heraldry
In modern heraldry texts, the fylfot is typically shown with truncated limbs, rather like a cross potent that has had one arm of each T cut off. It is also known as a cross cramponned, ~nnée, or ~nny, as each arm resembles a crampon or angle-iron. Examples of fylfots in heraldry are extremely rare, and the charge is not mentioned in Oswald Barron's article on "Heraldry" in most 20th-century editions of Encyclopædia Britannica. Parker (1894) includes it in his A glossary of terms used in heraldry, noting that only one instance occurs on coats of arms, that of Chamberlayne.
A 20th-century example (with four heraldic roses) can be seen in the Lotta Svärd emblem.
Modern use of the term
From its use in heraldry, or from its use by antiquaries, fylfot has become an established word for this symbol, at least in British English.
However, it was only rarely used. Wilson, writing in 1896, says, "The use of Fylfot is confined to comparatively few persons in Great Britain and, possibly, Scandinavia. Outside of these countries it is scarcely known, used, or understood".
In more recent times, fylfot has gained greater currency within the areas of design history and collecting, where it is used to distinguish the swastika motif as used in designs and jewellery from that used in Nazi paraphernalia. After the appropriation of the swastika by Nazi organisations, the term fylfot has been used to distinguish historical and non-Nazi instances of the symbol from those where the term swastika might carry specific connotations. The word "swastika" itself was appropriated into English from Sanskrit in the late 19th century. However, the word and symbol continue to have major religious significance for Buddhists, Hindus, Jains and other eastern faiths. For this reason, some have campaigned to have all uses of the word in a Nazi context replaced with the term "hooked cross".
Hansard for 12 June 1996 reports a House of Commons discussion about the badge of No. 273 Fighter Squadron, Royal Air Force. In this, fylfot is used to describe the ancient symbol, and swastika used as if it refers only to the symbol used by the Nazis.
See also
Buddhism
Hinduism
Jainism
Boreyko coat of arms
Triskelion
Brigid's cross
Ugunskrusts
Western use of the Swastika in the early 20th century
Esquarre (heraldry)
References
Bibliography
Stephen Friar (ed.), A New Dictionary of Heraldry (Alpha Books 1987 ); figure, p. 121
Thomas Woodcock and John Martin Robinson, The Oxford Guide to Heraldry (Oxford 1990 ); figure, p. 200
External links
Crosses in heraldry
Cross symbols
Swastika
Visual motifs | Fylfot | [
"Mathematics"
] | 1,574 | [
"Symbols",
"Visual motifs"
] |
174,292 | https://en.wikipedia.org/wiki/Pierre%20Jean%20Georges%20Cabanis | Pierre Jean Georges Cabanis (; 5 June 1757 – 5 May 1808) was a French physiologist, freemason and materialist philosopher.
Life
Cabanis was born at Cosnac (Corrèze), the son of Jean Baptiste Cabanis (1723–1786), a lawyer and agronomist. At the age of ten, he attended the college of Brives, where he showed great aptitude for study, but his independence of spirit was so great that he was almost constantly in a state of rebellion against his teachers and was finally expelled. He was then taken to Paris by his father and left to carry on his studies at his own discretion for two years. From 1773 to 1775 he travelled in Poland and Germany, and on his return to Paris he devoted himself mainly to poetry. About this time he sent to the Académie française a translation of the passage from Homer proposed for their prize, and, though he did not win, he received so much encouragement from his friends that he contemplated translating the whole of the Iliad.
At his father's wish, he gave up writing and decided to engage in a more settled profession, selecting medicine. In 1789 his Observations sur les hôpitaux (Observations on hospitals, 1790) procured him an appointment as administrator of hospitals in Paris, and in 1795 he became professor of hygiene at the medical school of Paris, a post which he exchanged for the chair of legal medicine and the history of medicine in 1799.
Partly because of his poor health, he tended not to practise as a physician, his interests lying in the deeper problems of medical and physiological science. During the last two years of Honoré Mirabeau's life, Cabanis was intimately connected with him; Cabanis wrote the four papers on public education which were found among Mirabeau's papers at his death (and Cabanis edited them soon afterwards in 1791). During the illness which terminated his life Mirabeau trusted entirely to Cabanis' professional skills. Of the death of Mirabeau, Cabanis drew up a detailed narrative, intended as a justification of his treatment of the case. He was enthusiastic about the French Revolution and became a member of the Council of Five Hundred and then of the Senate, and the dissolution of the Directory was the result of a motion which he made to that effect. His political career was brief. Hostile to the policy of Napoleon Bonaparte, he rejected every offer of a place under his government. He died at Meulan.
His body is buried in the Pantheon and his heart in Auteuil Cemetery in Paris.
Works
A complete edition of Cabanis's works was begun in 1825, and five volumes were published. His principal work, Rapports du physique et du moral de l'homme (On the relations between the physical and moral aspects of man, 1802), consists in part of memoirs, read in 1796 and 1797 to the institute, and is a sketch of physiological psychology. For Cabanis, psychology is directly linked to biology, for sensibility, the fundamental fact, is the highest grade of life and the lowest of intelligence. All the intellectual processes are evolved from sensibility, and sensibility itself is a property of the nervous system. The soul is not an entity, but a faculty; thought is the function of the brain. Just as the stomach and intestines receive food and digest it, so the brain receives impressions, digests them, and has as its organic secretion, thought.
Alongside this materialism, Cabanis held another principle. He belonged in biology to the vitalistic school of G.E. Stahl, and in the posthumous work, Lettre sur les causes premières (1824), the consequences of this opinion became clear. Life is something added to the organism: over and above the universally diffused sensibility there is some living and productive power to which we give the name of Nature. It is impossible to avoid ascribing both intelligence and will to this power. This living power constitutes the ego, which is truly immaterial and immortal. Cabanis did not think that these results were out of harmony with his earlier theory.
His work was highly appreciated by the philosopher Arthur Schopenhauer, who called it "excellent".
He was a member of the masonic lodge Les Neuf Sœurs from 1778. In 1786, Cabanis was elected an international member of the American Philosophical Society in Philadelphia.
Evolution
Cabanis was an early proponent of evolution. In the Encyclopedia of Philosophy it is stated that he "believed in spontaneous generation. Species have evolved through chance mutations ("fortuitous changes") and planned mutation ("man's experimental attempts") which change the structures of heredity."
He influenced the work of Jean-Baptiste Lamarck, who referred to Cabanis in his Philosophie Zoologique. Cabanis was an advocate of the inheritance of acquired characteristics; he also developed his own theory of instinct.
Cabanis made a statement that recognized a basic understanding of natural selection. Historian Martin S. Staum has written that:
In a simple statement of adaptation and selection theory, Cabanis argued that species that have escaped extinction "have had successively to bend and conform to sequences of circumstances, from which apparently were born, in each particular circumstance, other entirely new species, better adjusted to the new order of things."
Notes
References
(This article has the mistake "Pierre Jean George Cabanis" instead of "Pierre Jean Georges Cabanis".)
Further reading
(This article has the mistake "Pierre Jean George Cabanis" instead of "Pierre Jean Georges Cabanis".)
(This article has the mistake "Pierre Jean George Cabanis" instead of "Pierre Jean Georges Cabanis".)
1757 births
1808 deaths
People from Corrèze
French materialists
Members of the Académie Française
Members of the Council of Five Hundred
Burials at the Panthéon, Paris
Les Neuf Sœurs
French physiologists
French Freemasons
Proto-evolutionary biologists
Members of the American Philosophical Society | Pierre Jean Georges Cabanis | [
"Biology"
] | 1,246 | [
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
174,311 | https://en.wikipedia.org/wiki/Benjamin%20Thompson | Colonel Sir Benjamin Thompson, Count Rumford, FRS (26 March 175321 August 1814), was an American-born British military officer, scientist, inventor and nobleman. Born in Woburn, Massachusetts, he supported the Loyalist cause during the American War of Independence, commanding the King's American Dragoons during the conflict. After the war ended in 1783, Thompson moved to London, where he was recognised for his administrative talents and received a knighthood from George III in 1784.
A prolific scientist and inventor, Thompson also created several new warship designs. He subsequently moved to the Electorate of Bavaria and entered into the employ of the Bavarian government, heavily reorganising the Bavarian Army. Thompson was rewarded for his efforts by being made an Imperial Count in 1792 before dying in Paris in 1814.
Early years
Thompson was born in rural Woburn, in the Province of Massachusetts Bay, on 26 March 1753; his birthplace is preserved as a museum. He was educated mainly at the village school, although he sometimes walked almost ten miles to Cambridge with the older Loammi Baldwin to attend lectures by Professor John Winthrop of Harvard College. At the age of 13 he was apprenticed to John Appleton, a merchant of nearby Salem. Thompson excelled at his trade and, coming into contact with refined and well-educated people for the first time, adopted many of their characteristics, including an interest in science. While recuperating in Woburn in 1769 from an injury, Thompson conducted his first experiments studying the nature of heat and began to correspond with Baldwin and others about them. Later that year he worked several months for a Boston shopkeeper and then apprenticed himself briefly, and unsuccessfully, to a doctor in Woburn.
Thompson's prospects were dim in 1772 but in that year they changed abruptly. He met, charmed and married a rich and well-connected widow, an heiress named Sarah Rolfe (née Walker). Her father was a minister, and her late husband left her property at Rumford, Province of New Hampshire, which is today in the modern city of Concord. They moved to Portsmouth, and through his wife's influence with the governor, he was appointed a major in the New Hampshire Militia. Their child (also named Sarah) was born in 1774.
American Revolutionary War
When the American Revolutionary War began, Thompson, by now a wealthy and influential landowner, came out in opposition to the uprising. He soon used his connections in the state militia to recruit and arm loyalists seeking to aid British forces fighting the rebels. This earned him the enmity of New Hampshire's Patriot faction; he was stripped of his command and a mob attacked and burned Thompson's house. He fled to the British lines, abandoning his wife, as it turned out, permanently. Thompson became a political and military advisor to General Thomas Gage (to whom he was already passing information about the Americans), and later assisted Lord George Germain in the organization and provisioning of Loyalist units.
In 1781, Thompson financed his own military unit, the King's American Dragoons, which primarily served on Long Island in 1782 and early 1783, where they earned local notoriety for demolishing a church and burial ground in order to erect Fort Golgotha in Huntington.
While working with the British armies in America he conducted experiments to measure the force of gunpowder, the results of which were widely acclaimed when published in 1781 in the Philosophical Transactions of the Royal Society. On the strength of this, he arrived in London at the end of the war with a reputation as an accomplished scientist.
Bavarian maturity
In 1785, he moved to Bavaria where he became an aide-de-camp to the Prince-elector Charles Theodore. He spent eleven years in Bavaria, reorganizing the army and establishing workhouses for the poor. He also invented Rumford's Soup, a soup for the poor, and established the cultivation of the potato in Bavaria. He studied methods of cooking, heating, and lighting, including the relative costs and efficiencies of wax candles, tallow candles, and oil lamps.
On Prince Charles' behalf he created the Englischer Garten in Munich in 1789; it remains today and is known as one of the largest urban public parks in the world. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1789.
For his efforts, in 1791 Thompson was made an Imperial Count, becoming Reichsgraf von Rumford. He took the name "Rumford" after the town of Rumford, New Hampshire, which was an older name for Concord where he had been married.
Science and engineering
Experiments on heat
His experiments on gunnery and explosives led to an interest in heat. He devised a method for measuring the specific heat of a solid substance but was disappointed when Johan Wilcke published his parallel discovery first.
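A standard calorimetric technique of the period for the specific heat of a solid is the method of mixtures, sketched below. It is offered only as an illustration of the kind of measurement involved, not as a reconstruction of Thompson's own apparatus, and all numbers are invented.

```python
def specific_heat_by_mixtures(m_solid, T_solid, m_water, T_water, T_final,
                              c_water=4186.0):
    """Method of mixtures: the heat lost by the hot solid equals the heat
    gained by the water,
        m_solid * c_solid * (T_solid - T_final) = m_water * c_water * (T_final - T_water),
    so solve for c_solid. Masses in kg, temperatures in deg C, result in J/(kg*K)."""
    return (m_water * c_water * (T_final - T_water)
            / (m_solid * (T_solid - T_final)))

# Illustrative values: 0.5 kg of metal at 100 C dropped into 1 kg of water
# at 20 C, the mixture settling at 24.5 C.
print(specific_heat_by_mixtures(0.5, 100.0, 1.0, 20.0, 24.5))  # ~499 J/(kg*K)
```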
Thompson next investigated the insulating properties of various materials, including fur, wool and feathers. He correctly appreciated that the insulating properties of these natural materials arise from the fact that they inhibit the convection of air. He then made the somewhat reckless, and incorrect, inference that air and, in fact, all gases, were perfect non-conductors of heat. He further saw this as evidence of the argument from design, contending that divine providence had arranged for fur on animals in such a way as to guarantee their comfort.
In 1797, he extended his claim about non-conductivity to liquids. The idea raised considerable objections from the scientific establishment, John Dalton and John Leslie making particularly forthright attacks. Instrumentation far exceeding anything available in terms of accuracy and precision would have been needed to verify Thompson's claim. Again, he seems to have been influenced by his theological beliefs and it is likely that he wished to grant water a privileged and providential status in the regulation of human life.
He is considered the founder of the sous-vide food preparation method owing to his experiment with a mutton shoulder. He described this method in one of his essays.
Mechanical equivalent of heat
Rumford's most important scientific work took place in Munich, and centred on the nature of heat, which he contended in "An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction" (1798) was not the caloric of then-current scientific thinking but a form of motion. Rumford had observed the frictional heat generated by boring cannon at the arsenal in Munich. Rumford immersed a cannon barrel in water and arranged for a specially blunted boring tool. He showed that the water could be boiled within roughly two and a half hours and that the supply of frictional heat was seemingly inexhaustible. Rumford confirmed that no physical change had taken place in the material of the cannon by comparing the specific heats of the material machined away and that remaining.
Rumford argued that the seemingly indefinite generation of heat was incompatible with the caloric theory. He contended that the only thing communicated to the barrel was motion.
Rumford made no attempt to further quantify the heat generated or to measure the mechanical equivalent of heat. Though this work met with a hostile reception, it was subsequently important in establishing the laws of conservation of energy later in the 19th century.
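A back-of-envelope calculation shows why the result was so striking. The sketch below estimates the heat needed to boil the water and the implied steady power from friction; the mass of water and the starting temperature are assumed for illustration, not taken from Rumford's records.

```python
# All figures are illustrative assumptions, not Rumford's recorded values.
m_water = 12.0           # kg of water surrounding the barrel (assumed)
c_water = 4186.0         # specific heat of water, J/(kg*K)
dT = 100.0 - 15.0        # heating from an assumed 15 C up to boiling
t = 2.5 * 3600.0         # roughly two and a half hours, in seconds

Q = m_water * c_water * dT   # heat delivered by friction alone, J
print(f"Q ~ {Q / 1e6:.1f} MJ, mean power ~ {Q / t:.0f} W")
# ~4.3 MJ at ~475 W, sustained for as long as the borer kept turning -- an
# apparently inexhaustible supply, hard to square with a conserved caloric fluid.
```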
Calorific and frigorific radiation
He explained Pictet's experiment, which demonstrates the reflection of cold, by supposing that all bodies emit invisible rays, undulations in the ethereal fluid. He did experiments to support his theories of calorific and frigorific radiation and said that the communication of heat was the net effect of the calorific (hot) and frigorific (cold) rays a body absorbs and the rays it emits. When an object absorbs radiation from a warmer object (calorific rays) its temperature rises, and when it absorbs radiation from a colder object (frigorific rays) its temperature falls. See note 8, "An enquiry concerning the nature of heat and the mode of its communication", Philosophical Transactions of the Royal Society, starting at page 112.
Inventions and design improvements
Thompson was an active and prolific inventor, developing improvements for chimneys, fireplaces and industrial furnaces, as well as inventing the double boiler, a kitchen range, and a coffee percolator roughly between 1810 and 1814. He invented a percolating coffee pot following his pioneering work with the Bavarian Army, where he improved the diet of the soldiers as well as their clothes.
The Rumford fireplace created a sensation in London when he introduced the idea of restricting the chimney opening to increase the updraught, which was a much more efficient way to heat a room than earlier fireplaces. He and his workers modified fireplaces by inserting bricks into the hearth to make the side walls angled, and added a choke to the chimney to increase the speed of air going up the flue. The effect was to produce a streamlined air flow, so all the smoke would go up into the chimney rather than lingering and entering the room. It also had the effect of increasing the efficiency of the fire, and gave extra control of the rate of combustion of the fuel, whether wood or coal. Many fashionable London houses were modified to his instructions, and became smoke-free.
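The physics behind the improvement can be sketched with two standard relations, neither of which is Rumford's own analysis: the stack effect gives the available draught pressure from the density difference between hot flue gas and outside air, and continuity (v = Q/A) shows how a restricted throat raises gas speed for a given flow. All numbers below are assumed for illustration.

```python
import math

g, height = 9.81, 6.0            # flue height in metres (assumed)
rho_out, rho_in = 1.20, 0.90     # density of cold outside air vs hot flue gas, kg/m^3

# Stack effect: draught pressure from the buoyancy of the hot gas column.
dp = g * height * (rho_out - rho_in)
v_ideal = math.sqrt(2 * dp / rho_in)   # idealized (lossless) flue velocity
print(f"draught ~ {dp:.1f} Pa, ideal flue velocity ~ {v_ideal:.1f} m/s")

# Continuity: for a fixed volumetric flow, a narrower throat means faster gas.
Q_flow = 0.05                           # volumetric flow in m^3/s (assumed)
for area in (0.04, 0.02):               # open vs restricted throat, m^2
    print(f"throat {area} m^2 -> v = {Q_flow / area:.2f} m/s")
```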
Thompson became a celebrity when news of his success spread. His work was also very profitable, and much imitated when he published his analysis of the way chimneys worked. In many ways, he was similar to Benjamin Franklin, who also invented a new kind of heating stove.
The retention of heat was a recurring theme in his work, as he is also credited with the invention of thermal underwear.
Industrial furnaces
Thompson also significantly improved the design of kilns used to produce quicklime, and Rumford furnaces were soon being constructed throughout Europe. The key innovation involved separating the burning fuel from the limestone, so that the lime produced by the heat of the furnace was not contaminated by ash from the fire.
Light and photometry
Rumford worked in photometry, the measurement of light. He made a photometer and introduced the standard candle, the predecessor of the candela, as a unit of luminous intensity. His standard candle was made from the oil of a sperm whale, to rigid specifications. He also published studies of "illusory" or subjective complementary colours, induced by the shadows created by two lights, one white and one coloured; these observations were cited and generalized by Michel-Eugène Chevreul as his "law of simultaneous colour contrast" in 1839.
Later life
After 1799, he divided his time between France and England. With Sir Joseph Banks, he established the Royal Institution of Great Britain in 1799. The pair chose Sir Humphry Davy as the first lecturer. The institution flourished and became world-famous as a result of Davy's pioneering research. Davy's assistant, Michael Faraday, went on to establish the Institution as a premier research laboratory and made it justly famous for its series of public lectures popularizing science. That tradition continues to the present, and the Royal Institution Christmas lectures attract large audiences through their TV broadcasts.
Thompson endowed the Rumford medals of the Royal Society and the American Academy of Arts and Sciences, and endowed the Rumford Chair of Physics at Harvard University. In 1803, he was elected a foreign member of the Royal Swedish Academy of Sciences, and as a member of the American Philosophical Society.
After several affairs and a close friendship with Mary Temple, Viscountess Palmerston, in 1804, he married Marie-Anne Lavoisier, the widow of the great French chemist Antoine Lavoisier. (His American wife, Sarah—whom he had abandoned at the outbreak of the American Revolution—had died in 1792.) Thompson separated from his second wife after three years, but he settled in Paris and continued his scientific work until his death on 21 August 1814. Thompson is buried in the small cemetery of Auteuil in Paris, just across from Adrien-Marie Legendre. Upon his death, his daughter from his first marriage, Sarah Thompson, inherited his title as Countess Rumford.
He was also known to have been a lover of George Germain, 1st Viscount Sackville.
Honours
Colonel, King's American Dragoons.
Knighted, 1784.
Count of the Holy Roman Empire, 1791.
The crater Rumford on the Moon is named after him.
Rumford baking powder (patented 1859) is named after him, having been invented by a former Rumford professor at Harvard University, Eben Norton Horsford (1818–1893), cofounder of the Rumford Chemical Works of East Providence, RI.
Rumford Kitchen at the World's Fair in Chicago, 1893.
A street in the inner city of Munich is named after him.
Rumford Street (and the nearby Rumford Place) in Liverpool, England, are so named due to a soup kitchen established to Count Rumford's plan which formerly stood on land adjacent to Rumford Street.
Poland: Order of the White Eagle (1789).
Bibliography
An Essay on Chimney Fire-Places; With Proposals for Improving Them, to Save Fuel, to Render Dwelling-Houses More Comfortable and Salubrious, and Effectually to Prevent Chimnies from Smoking. Illustrated with Engravings, (1796).
Collected Works of Count Rumford, Volume I, The Nature of Heat, (1968).
Collected Works of Count Rumford, Volume II, Practical Applications of Heat, (1969).
Collected Works of Count Rumford, Volume III, Devices and Techniques, (1969).
Collected Works of Count Rumford, Volume IV, Light and Armament, (1970).
Collected Works of Count Rumford, Volume V, Public Institutions, (1970).
See also
History of thermodynamics
Citations
References
Further reading
External links
Eric Weisstein's World of Science. "Rumford, Benjamin Thompson". (1753–1814)
Dr. Hugh C. Rowlinson "The Contribution of Count Rumford to Domestic Life in Jane Austen’s Time" An article not only detailing the Rumford fireplace, but also Rumford's life and other achievements.
A Biography of Benjamin Thompson, Jr. Written in 1868
Escutcheons of Science
Count Rumford's Birth Place and Museum
Count Rumford Fireplaces website
1753 births
1814 deaths
Loyalists in the American Revolution from New Hampshire
American physicists
British physicists
Rumford
Fellows of the American Academy of Arts and Sciences
Fellows of the Royal Society
Members of the Royal Swedish Academy of Sciences
Harvard University people
People from the Duchy of Bavaria
People from colonial Massachusetts
People from colonial New Hampshire
People from Woburn, Massachusetts
Recipients of the Copley Medal
18th-century American scientists
19th-century American people
18th-century British people
19th-century British people
Recipients of the Order of the White Eagle (Poland)
Thermodynamicists
Knights Bachelor
18th-century English LGBTQ people
English LGBTQ politicians | Benjamin Thompson | [
"Physics",
"Chemistry"
] | 2,990 | [
"Thermodynamics",
"Thermodynamicists"
] |
174,312 | https://en.wikipedia.org/wiki/SQL%20Slammer | SQL Slammer is a 2003 computer worm that caused a denial of service on some Internet hosts and dramatically slowed general Internet traffic. It also crashed routers around the world, causing even more slowdowns. It spread rapidly, infecting most of its 75,000 victims within 10 minutes.
The program exploited a buffer overflow bug in Microsoft's SQL Server and Desktop Engine database products. Although the MS02-039 (CVE-2002-0649) patch had been released six months earlier, many organizations had not yet applied it.
The most infected regions were Europe, North America, and Asia (including East Asia and India).
Technical details
The worm was based on proof of concept code demonstrated at the Black Hat Briefings by David Litchfield, who had initially discovered the buffer overflow vulnerability that the worm exploited. It is a small piece of code that does little other than generate random IP addresses and send itself out to those addresses. If a selected address happens to belong to a host that is running an unpatched copy of Microsoft SQL Server Resolution Service listening on UDP port 1434, the host immediately becomes infected and begins spraying the Internet with more copies of the worm program.
Home PCs are generally not vulnerable to this worm unless they have MSDE installed. The worm is so small that it does not contain code to write itself to disk, so it only stays in memory, and it is easy to remove. For example, Symantec provides a free of charge removal utility, or it can even be removed by restarting SQL Server (although the machine would likely be reinfected immediately).
The worm was made possible by a software security vulnerability in SQL Server first reported by Microsoft on 24 July 2002. A patch had been available from Microsoft for six months prior to the worm's launch, but many installations had not been patched – including many at Microsoft.
The worm began to be noticed early on 25 January 2003 as it slowed systems worldwide. The slowdown was caused by the collapse of numerous routers under the burden of extremely high bombardment traffic from infected servers. Normally, when traffic is too high for routers to handle, the routers are supposed to delay or temporarily stop network traffic. Instead, some routers crashed (became unusable), and the "neighbour" routers would notice that these routers had stopped and should not be contacted (i.e., they were removed from the routing table). Routers started sending notices to this effect to other routers they knew about. The flood of routing table update notices caused some additional routers to fail, compounding the problem. Eventually the crashed routers' maintainers restarted them, causing them to announce their status, leading to another wave of routing table updates. Soon a significant portion of Internet bandwidth was consumed by routers communicating with each other to update their routing tables, and ordinary data traffic slowed or in some cases stopped altogether. Because the SQL Slammer worm was so small in size, sometimes it was able to get through when legitimate traffic was not.
Two key aspects contributed to SQL Slammer's rapid propagation. The worm infected new hosts over the sessionless UDP protocol, and the entire worm (only 376 bytes) fit inside a single packet. As a result, each infected host could simply "fire and forget" packets as rapidly as possible.
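The consequences of that fire-and-forget design can be illustrated with a simple random-scanning epidemic model. This is a sketch of the spread dynamics only, not worm code: the probe rate and vulnerable population below are assumed, order-of-magnitude figures in line with published analyses such as "Inside the Slammer Worm" linked under External links.

```python
import math

# Illustrative assumptions, not measured values
ADDRESS_SPACE = 2 ** 32   # IPv4 addresses, probed uniformly at random
VULNERABLE = 75_000       # initially vulnerable hosts (order of magnitude from reports)
SCANS_PER_SEC = 4_000     # single-packet UDP probes per infected host per second
DT = 1.0                  # time step in seconds

infected, susceptible, t = 1.0, VULNERABLE - 1.0, 0.0
while susceptible > 0.5 and t < 600:
    probes = infected * SCANS_PER_SEC * DT
    # Chance that a given susceptible host receives at least one probe this step
    p_hit = 1.0 - math.exp(-probes / ADDRESS_SPACE)
    newly_infected = susceptible * p_hit
    infected += newly_infected
    susceptible -= newly_infected
    t += DT
    if int(t) % 60 == 0:
        print(f"t = {t:3.0f} s: ~{infected:,.0f} hosts infected")
```

With these figures the infected population doubles roughly every ten seconds at first and saturates within a few minutes, consistent with the report that most of the worm's victims were infected within 10 minutes.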
Notes
References
External links
News
BBC NEWS Technology Virus-like attack hits web traffic
MS SQL Server Worm Wreaking Havoc
Wired 11.07: Slammed! A layman's explanation of the Slammer code.
Announcement
Microsoft Security Bulletin MS02-039 and Patch
Symantec Security Response - W32.SQLExp.Worm
Analysis
Inside the Slammer Worm IEEE Security and Privacy Magazine, David Moore, Vern Paxson, Stefan Savage, Colleen Shannon, Stuart Staniford, and Nicholas Weaver
Technical details
Multiple Vulnerabilities in Microsoft SQL Server - Carnegie-Mellon Software Engineering Institute
Exploit-based worms
Denial-of-service attacks
Hacking in the 2000s
Cybercrime in India | SQL Slammer | [
"Technology"
] | 819 | [
"Denial-of-service attacks",
"Computer security exploits"
] |
174,397 | https://en.wikipedia.org/wiki/Bohr%20radius | The Bohr radius (a₀) is a physical constant, approximately equal to the most probable distance between the nucleus and the electron in a hydrogen atom in its ground state. It is named after Niels Bohr, due to its role in the Bohr model of an atom. Its value is 5.29177210903(80) × 10⁻¹¹ m.
Definition and value
The Bohr radius is defined as
a₀ = 4πε₀ℏ²/(mₑe²) = ℏ/(mₑcα),
where
ε₀ is the permittivity of free space,
ℏ is the reduced Planck constant,
mₑ is the mass of an electron,
e is the elementary charge,
c is the speed of light in vacuum, and
α is the fine-structure constant.
The CODATA value of the Bohr radius (in SI units) is a₀ = 5.29177210903(80) × 10⁻¹¹ m.
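As a quick numerical check, the two equivalent forms of the definition can be evaluated against the tabulated constant. A minimal sketch using SciPy's bundled CODATA values (the exact digits printed depend on the installed SciPy release):

```python
import math
from scipy.constants import alpha, c, e, epsilon_0, hbar, m_e, physical_constants

# Defining expression: a0 = 4*pi*eps0*hbar^2 / (m_e * e^2)
a0_def = 4 * math.pi * epsilon_0 * hbar**2 / (m_e * e**2)

# Equivalent form via the fine-structure constant: a0 = hbar / (m_e * c * alpha)
a0_alpha = hbar / (m_e * c * alpha)

a0_codata, unit, _uncertainty = physical_constants['Bohr radius']

print(f"from definition: {a0_def:.10e} m")
print(f"via alpha:       {a0_alpha:.10e} m")
print(f"CODATA table:    {a0_codata:.10e} {unit}")
```

All three agree to within the rounding of the tabulated constants.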
History
In the Bohr model for atomic structure, put forward by Niels Bohr in 1913, electrons orbit a central nucleus under electrostatic attraction. The original derivation posited that electrons have orbital angular momentum in integer multiples of the reduced Planck constant, which successfully matched the observation of discrete energy levels in emission spectra, along with predicting a fixed radius for each of these levels. In the simplest atom, hydrogen, a single electron orbits the nucleus, and its smallest possible orbit, with the lowest energy, has an orbital radius almost equal to the Bohr radius. (It is not exactly the Bohr radius due to the reduced mass effect. They differ by about 0.05%.)
The Bohr model of the atom was superseded by an electron probability cloud adhering to the Schrödinger equation as published in 1926. This is further complicated by spin and quantum vacuum effects to produce fine structure and hyperfine structure. Nevertheless, the Bohr radius formula remains central in atomic physics calculations, due to its simple relationship with fundamental constants (this is why it is defined using the true electron mass rather than the reduced mass, as mentioned above). As such, it became the unit of length in atomic units.
In Schrödinger's quantum-mechanical theory of the hydrogen atom, the Bohr radius is the value of the radial coordinate for which the radial probability density of the electron position is highest. The expected value of the radial distance of the electron, by contrast, is 3a₀/2.
Related constants
The Bohr radius is one of a trio of related units of length, the other two being the Compton wavelength of the electron (λₑ) and the classical electron radius (rₑ). Any one of these constants can be written in terms of any of the others using the fine-structure constant α:
rₑ = α² a₀ = α λₑ/(2π).
Hydrogen atom and similar systems
The Bohr radius including the effect of reduced mass in the hydrogen atom is given by
a₀* = (mₑ/μ) a₀,
where μ = mₑmₚ/(mₑ + mₚ) is the reduced mass of the electron–proton system (with mₚ being the mass of the proton). The use of reduced mass is a generalization of the two-body problem from classical physics beyond the case in which the approximation that the mass of the orbiting body is negligible compared to the mass of the body being orbited. Since the reduced mass of the electron–proton system is a little bit smaller than the electron mass, the "reduced" Bohr radius is slightly larger than the Bohr radius (a₀* ≈ 5.2946 × 10⁻¹¹ meters).
This result can be generalized to other systems, such as positronium (an electron orbiting a positron) and muonium (an electron orbiting an anti-muon) by using the reduced mass of the system and considering the possible change in charge. Typically, Bohr model relations (radius, energy, etc.) can be easily modified for these exotic systems (up to lowest order) by simply replacing the electron mass with the reduced mass for the system (as well as adjusting the charge when appropriate). For example, the radius of positronium is approximately 2a₀, since the reduced mass of the positronium system is half the electron mass (μ = mₑ/2).
A hydrogen-like atom will have a Bohr radius which primarily scales as 1/Z, with Z the number of protons in the nucleus. Meanwhile, the reduced mass (μ) only becomes better approximated by mₑ in the limit of increasing nuclear mass. These results are summarized in the equation
a = (mₑ/μ)(a₀/Z).
Approximate values for a few such systems are worked out in the sketch below.
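This is a minimal sketch using SciPy's tabulated masses; the helper function and its name are illustrative:

```python
from scipy.constants import m_e, m_p, physical_constants

a0 = physical_constants['Bohr radius'][0]

def reduced_radius(m_nucleus, Z=1):
    """Bohr radius with finite 'nuclear' mass: a = (m_e / mu) * a0 / Z."""
    mu = m_e * m_nucleus / (m_e + m_nucleus)   # reduced mass of the two-body system
    return (m_e / mu) * a0 / Z

m_mu = physical_constants['muon mass'][0]

print(f"hydrogen:    {reduced_radius(m_p):.4e} m")   # ~0.05% larger than a0
print(f"positronium: {reduced_radius(m_e):.4e} m")   # mu = m_e/2, so about 2*a0
print(f"muonium:     {reduced_radius(m_mu):.4e} m")  # electron orbiting an anti-muon
```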
See also
Bohr magneton
Rydberg energy
References
External links
Length Scales in Physics: the Bohr Radius
Atomic physics
Physical constants
Units of length
Niels Bohr
Atomic radius | Bohr radius | [
"Physics",
"Chemistry",
"Mathematics"
] | 856 | [
"Units of measurement",
"Physical quantities",
"Units of length",
"Quantity",
"Quantum mechanics",
"Atomic radius",
"Physical constants",
"Atomic physics",
" molecular",
"Atomic",
"Atoms",
"Matter",
" and optical physics"
] |
174,397 | https://en.wikipedia.org/wiki/Computer%20addiction | Computer addiction is a form of behavioral addiction that can be described as the excessive or compulsive use of the computer, which persists despite serious negative consequences for personal, social, or occupational function. Another clear conceptualization is made by Block, who stated that "Conceptually, the diagnosis is a compulsive-impulsive spectrum disorder that involves online and/or offline computer usage and consists of at least three subtypes: excessive gaming, sexual preoccupations, and e-mail/text messaging". Computer addiction is not currently included in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) as an official disorder. The concept of computer addiction is broadly divided into two types, namely offline computer addiction, and online computer addiction. Offline computer addiction is normally used when speaking about excessive gaming behavior, which can be practiced both offline and online. Online computer addiction, also known as Internet addiction, gets more attention in general from scientific research than offline computer addiction, mainly because most cases of computer addiction are related to the excessive use of the Internet.
Experts on Internet addiction have described this syndrome as an individual working intensely on the Internet, prolonged use of the Internet, uncontrollable use of the Internet, inability to use the Internet in an efficient, timely manner, lack of interest in the outside world, not spending time with people from the outside world, and an increase in loneliness and dejection.
Symptoms
Being drawn by the computer as soon as one wakes up and before one goes to bed
Replacing old hobbies with excessive use of the computer and using the computer as one's primary source of entertainment and procrastination
Lacking physical exercise and/or outdoor exposure because of constant use of the computer, which could contribute to many health problems such as obesity
Backache
Headaches
Weight gain or loss
Disturbances in sleep
Carpal tunnel syndrome
Blurred or strained vision
Depression and marital infidelity
Effects
Excessive computer use may result in, or occur with:
Lack of face to face social interaction
Computer vision syndrome
Causes
Kimberly Young indicates that previous research links internet/computer addiction with existing mental health issues, most notably depression. She states that computer addiction has significant social, psychological, and occupational effects, such as low self-esteem, and that it led many subjects to academic failure.
According to a Korean study on internet/computer addiction, pathological use of the internet results in negative life impacts such as job loss, marriage breakdown, financial debt, and academic failure. 70% of internet users in Korea are reported to play online games, 18% of whom are diagnosed as game addicts, which relates to internet/computer addiction. The authors of the article conducted a study using Kimberly Young's questionnaire. The study showed that the majority of those who met the requirements of internet/computer addiction experienced interpersonal difficulties and stress and that those addicted to online games specifically responded that they hoped to avoid reality.
Types
Computers nowadays rely almost entirely on the internet, and thus relevant research articles relating to internet addiction may also be relevant to computer addiction.
Gaming addiction: a hypothetical behavioral addiction characterized by excessive or compulsive use of computer games or video games, which interferes with a person's everyday life. Video game addiction may present itself as compulsive gaming, social isolation, mood swings, diminished imagination, and hyper-focus on in-game achievements, to the exclusion of other events in life.
Social media addiction: Data suggest that participants use social media to fulfill their social needs but are typically dissatisfied. Lonely individuals are drawn to the Internet for emotional support. This could interfere with "real-life socializing" by reducing face-to-face relationships. Some of these views are summed up in an Atlantic article by Stephen Marche entitled Is Facebook Making Us Lonely?, in which the author argues that social media provides more breadth, but not the depth of relationships that humans require and that users begin to find it difficult to distinguish between the meaningful relationships which we foster in the real world and the numerous casual relationships that are formed through social media.
Cyberstalking
According to Prof. Jordana N. Navarro et al., cyberstalking is a behavior that includes, but is not limited to, the use of the internet or other technology to stalk or harass an individual over time and in a menacing fashion. Cyberstalking has been on the rise since the 1990s. These cryptic behaviors are nonetheless noticeable. Many cyberstalking cases involve people who do not know each other. Cyberstalkers are not limited by geographical boundaries, and research has suggested various impulses behind cyberstalking aside from exerting power and control over the target. Internet addiction and cyberstalking share several key traits that should lend support to new investigations to further scrutinize the relationship between the two disorders. Studies have shown that cyberstalkers can have different motives, but these results are not necessarily indicative of mental health issues. A cyberstalker is usually an emotionally damaged individual, a loner who seeks attention, gratification, and connection and in the process becomes infatuated with someone (Navarro et al. 2015)
Diagnostic Test
Many studies and surveys are being conducted to measure the extent of this type of addiction. Kimberly Young has created a questionnaire based on other disorders to assess the level of addiction. It is called the Internet Addiction Diagnostic Questionnaire, or IADQ. The questionnaire asks users about their online usage habits as well as their feelings about their internet usage. According to the IADQ, Internet addiction resembles a gambling disorder. Answering positively to five out of the eight questions on the IADQ may be indicative of online addiction.
According to the article "Validating the Distinction between Computer Addiction and Engagement: Online Game Playing and Personality", the authors introduced a test to help identify the differences between addiction and engagement. Based on similar ideas, here are some ways to distinguish between computer engagement and addiction.
Origin of the term and history
Observations about the addictiveness of computers, and more specifically, computer games date back at least to the mid-1970s. Addiction and addictive behavior were common among the users of the PLATO system at the University of Illinois. British e-learning academic Nicholas Rushby suggested in his 1979 book, An Introduction to Educational Computing, that people can be addicted to computers and experience withdrawal symptoms. The term was also used by M. Shotton in 1989 in her book Computer Addiction. However, Shotton concludes that the 'addicts' are not truly addicted. Dependency on computers, she argues, is better understood as a challenging and exciting pastime that can also lead to a professional career in the field. Computers do not turn gregarious, extroverted people into recluses; instead, they offer introverts a source of inspiration, excitement, and intellectual stimulation. Shotton's work seriously questions the legitimacy of the claim that computers cause addiction.
The term became more widespread with the explosive growth of the Internet, as well as the availability of the personal computer. Computers and the Internet both started to take shape as a personal and convenient medium that could be used by anyone who wanted to make use of it. With that explosive growth of individuals making use of PCs and the Internet, the question started to arise whether misuse or excessive use of these new technologies was possible as well. It was hypothesized that, like any technology aimed specifically at human consumption and use, abuse could have severe consequences for the individual in the short term and for society in the long term. In the late nineties, people who made heavy use of PCs and the internet were already referred to by the terms webaholics or cyberholics. Pratarelli et al. had already suggested at that point labeling "a cluster of behaviors potentially causing problems" as a computer or Internet addiction.
There are other examples of computer overuse that date back to the earliest computer games. Press reports have furthermore noted that some Finnish Defence Forces conscripts were not mature enough to meet the demands of military life and were required to interrupt or postpone military service for a year. One reported source of the lack of needed social skills is an overuse of computer games or the Internet. Forbes termed this overuse "Web fixations", and stated that they were responsible for 12 such interruptions or deferrals over the 5 years from 2000 to 2005.
See also
Behavioral modernity
Computer rage
Digital media use and mental health
Evolutionary mismatch
Underearners Anonymous
Video game addiction
References
Works cited
Dawn Heron. "Time To Log Off: New Diagnostic Criteria For Problematic Internet Use", University of Florida, Gainesville, published in Current Psychology, April 2003 (Identifies incessant posting in chat rooms as a form of emotional disorder).
Orzack, Maressa H. Dr. (1998). "Computer Addiction: What Is It?" Psychiatric Times XV(8).
Shotton, MA (1989), Computer Addiction? A study of computer dependency. New York: Taylor & Francis.
Cromie, William J. Computer Addiction Is Coming On-line. HPAC - Harvard Public Affairs & Communications. Web. 20 Oct. 2010. Computer Addiction Is Coming On-line (Explains symptoms and other various attributes of the new disease).
UTD Counseling Center: Self-Help:Computer Addiction. Home Page - The University of Texas at Dallas. Web. 20 Oct. 2010. UTD Counseling Center: Self-Help:Computer Addiction.
Addictions.com. (n.d.). Computer Addiction. Retrieved December 5, 2013, from Computer Addiction - Signs, Symptoms, Support & Treatment
Navarro, Jordana N., et al. "Addicted to the Thrill of the Virtual Hunt: Examining the Effects of Internet Addiction on the Cyberstalking Behaviors of Juveniles." www.tandfonline.com, Taylor & Francis Group, 4 Apr. 2016, https://www.tandfonline.com/doi/abs/10.1080/01639625.2016.1153366
Digital media use and mental health
Behavioral addiction
Computers | Computer addiction | [
"Technology"
] | 2,056 | [
"Computers"
] |
174,410 | https://en.wikipedia.org/wiki/Armillary%20sphere | An armillary sphere (variations are known as spherical astrolabe, armilla, or armil) is a model of objects in the sky (on the celestial sphere), consisting of a spherical framework of rings, centered on Earth or the Sun, that represent lines of celestial longitude and latitude and other astronomically important features, such as the ecliptic. As such, it differs from a celestial globe, which is a smooth sphere whose principal purpose is to map the constellations. It was invented separately, in ancient China possibly as early as the 4th century BC and ancient Greece during the 3rd century BC, with later uses in the Islamic world and Medieval Europe.
With the Earth as center, an armillary sphere is known as Ptolemaic. With the Sun as center, it is known as Copernican.
The flag of Portugal features an armillary sphere. The armillary sphere is also featured in Portuguese heraldry, associated with the Portuguese discoveries during the Age of Exploration. Manuel I of Portugal, for example, took it as one of his symbols where it appeared on his standard, and on early Chinese export ceramics made for the Portuguese court. In the flag of the Empire of Brazil, the armillary sphere is also featured.
The Beijing Capital International Airport Terminal 3 features a large armillary sphere metal sculpture as an exhibit of Chinese inventions for international and domestic visitors.
Description and use
The exterior parts of this machine are a compages [or framework] of brass rings, which represent the principal circles of the heavens:
The equinoctial A, which is divided into 360 degrees (beginning at its intersection with the ecliptic in Aries) for showing the sun's right ascension in degrees; and also into 24 hours, for showing its right ascension in time.
The ecliptic B, which is divided into 12 signs, and each sign into 30 degrees, and also into the months and days of the year, in such a manner that the degree or point of the ecliptic on which the sun appears, on any given day, stands over that day in the circle of months.
The tropic of Cancer C, touching the ecliptic at the beginning of Cancer in e, and the tropic of Capricorn D, touching the ecliptic at the beginning of Capricorn in f; each circle 23½ degrees from the equinoctial circle.
The Arctic Circle E, and the Antarctic Circle F, each circle 23½ degrees from its respective pole at N and S.
The equinoctial colure G, passing through the north and south poles of the heavens at N and S, and through the equinoctial points in Aries and Libra, in the ecliptic.
The solstitial colure H, passing through the poles of the heavens, and through the solstitial points in Cancer and Capricorn, in the ecliptic. Each quarter of the equinoctial colure is divided into 90 degrees, from the equinoctial to the poles of the world, for showing the declination of the sun, moon, and stars; and each quarter of the solstitial colure, from the ecliptic as e and f, to its poles b and d, for showing the latitude of the stars.
In the north pole of the ecliptic is a nut b, to which is fixed one end of the quadrantal wire. To the other end is a small sun Y, which is carried around the ecliptic B—B, by turning the nut. In the south pole of the ecliptic is a pin d, on which another quadrantal wire is situated, with a small moon Ζ upon it, which may be moved around by hand. A mechanism causes the moon to move in an orbit which crosses the ecliptic at an angle of 5 degrees, to opposite points called the lunar nodes, and allows for shifting these points backward in the ecliptic, as the lunar nodes shift in the heavens.
Within these circular rings is a small terrestrial globe I, fixed on an axis K, which extends from the north and south poles of the globe at n and s, to those of the celestial sphere at N and S. On this axis the flat celestial meridian L is fixed, which may be set directly over the meridian of any place on the globe, so as to keep over the same meridian upon it. This flat meridian is graduated the same way as the brass meridian of the common globe, and its use is much the same.
To this globe is fitted the movable horizon M, so as to turn upon the two strong wires proceeding from its east and west points to the globe and entering the globe at the opposite points of its equator, which is a movable brass ring set into the globe in a groove all around its equator. The globe may be turned by hand within this ring, so as to place any given meridian upon it, directly under the celestial meridian L. The horizon is divided into 360 degrees all around its outermost edge, within which are the points of the compass, for showing the amplitude of the sun and the moon, both in degrees and points. The celestial meridian L passes through two notches in the north and south points of the horizon, as in a common globe: if the globe is turned around, the horizon and meridian turn with it. At the south pole of the sphere is a circle of 25 hours, fixed to the rings. On the axis is an index which goes around that circle, if the globe is turned around its axis.
The globe assembly is supported on a pedestal N, and may be elevated or depressed upon the joint O, to any number of degrees from 0 to 90 by means of the arc P, which is fixed in the strong brass arm Q. The globe assembly slides in the upright piece R, in which is a screw at r, to fix it at any proper elevation.
In the box T are two wheels (as in Dr Long's sphere) and two pinions, whose axes come out at V and U; either of which may be turned by the small winch W. When the winch is put upon the axis V, and turned backward, the terrestrial globe, with its horizon and celestial meridian, keep at rest; and the whole sphere of circles turns round from east, by south, to west, carrying the sun Y, and moon Z, round the same way, and causing them to rise above and set below the horizon. But when the winch is put upon the axis U, and turned forward, the sphere with the sun and moon keep at rest; and the earth, with its horizon and meridian, turn round from west, by south, to east, bringing the same points of the horizon to the sun and moon, to which these bodies came when the earth kept at rest; showing that they rise and set in the same points of the horizon, and at the same times in the hour circle, whether the motion be in the earth or in the heaven. If the earthly globe be turned, the hour-index goes round its hour-circle; but if the sphere be turned, the hour-circle goes round below the index.
And so, by this construction, the machine is equally fitted to show either the real motion of the earth, or the apparent motion of the heavens.
To reset the sphere for use, one must first slacken the screw r in the upright stem R, and taking hold of the arm Q, move it up or down until the given degree of latitude for any place lies at the side of the stem R; then the axis of the sphere will be properly elevated, so as to stand parallel to the axis of the terrestrial globe, if the globe assembly is to be aligned to north and south by a small compass: once this is done, the user must count the latitude from the north pole, upon the celestial meridian L, down towards the north notch of the horizon, and set the horizon to that latitude. The user then must turn the nut b until the sun Y comes to the given day of the year in the ecliptic, and the sun will be at its proper place for that day.
To find the place of the moon's ascending node, and also the place of the moon, an ephemeris must be consulted to set them right accordingly. Lastly, the user must turn the winch W, until either the sun comes to the meridian L, or until the meridian comes to the sun (moving the sphere or globe at the user's discretion), and then set the hour-index to the XII, marked noon, and the whole sphere will be reset. Then the user must turn the winch, and observe when the sun or moon rises and sets in the horizon. The hour-index will show the times thereof for the given day.
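What the rectified sphere displays mechanically can also be approximated numerically. The sketch below uses a common low-precision formula for the sun's ecliptic longitude, feeding in the day of the year directly rather than the days elapsed since the formula's proper epoch, so results are only good to a degree or so; the function name and structure are illustrative:

```python
import math

OBLIQUITY = math.radians(23.44)   # tilt of the ecliptic: the 23 1/2-degree tropics

def sun_position(day_of_year):
    """Approximate solar declination and right ascension (degrees) for a given day."""
    # Mean anomaly g and mean longitude, plus a two-term equation of centre,
    # give the sun's ecliptic longitude lam (coefficients are textbook values).
    g = math.radians((357.528 + 0.9856003 * day_of_year) % 360.0)
    lam = math.radians((280.460 + 0.9856474 * day_of_year
                        + 1.915 * math.sin(g) + 0.020 * math.sin(2 * g)) % 360.0)
    declination = math.asin(math.sin(OBLIQUITY) * math.sin(lam))
    right_ascension = math.atan2(math.cos(OBLIQUITY) * math.sin(lam), math.cos(lam))
    return math.degrees(declination), math.degrees(right_ascension) % 360.0

# Near the June solstice (day 172) the sun stands close to the tropic of Cancer:
print(sun_position(172))   # roughly (23.4, 90.5)
```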
History
China
Throughout Chinese history, astronomers have created celestial globes () to assist the observation of the stars. The Chinese also used the armillary sphere in aiding calendrical computations and calculations.
According to Joseph Needham, the earliest development of the armillary sphere in China goes back to the astronomers Shi Shen and Gan De in the 4th century BC, as they were equipped with a primitive single-ring armillary instrument. This would have allowed them to measure the north polar distance (declination), a measurement that gave the position in a xiu (right ascension). Needham's 4th century BC dating, however, is rejected by British sinologist Christopher Cullen, who traces the beginnings of these devices to the 1st century BC.
During the Western Han dynasty (202 BC – 9 AD) additional developments made by the astronomers Luoxia Hong (落下閎), Xiangyu Wangren, and Geng Shouchang (耿壽昌) advanced the use of the armillary in its early stage of evolution. In 52 BC, it was the astronomer Geng Shouchang who introduced the first permanently fixed equatorial ring of the armillary sphere. In the subsequent Eastern Han dynasty (25–220 AD) period, the astronomers Fu An and Jia Kui added the ecliptic ring by 84 AD. With the famous statesman, astronomer, and inventor Zhang Heng (張衡, 78–139 AD), the sphere was totally complete in 125 AD, with horizon and meridian rings. The world's first water-powered celestial globe was created by Zhang Heng, who operated his armillary sphere by use of an inflow clepsydra clock.
Subsequent developments were made after the Han dynasty that improved the use of the armillary sphere. In 323 AD the Chinese astronomer Kong Ting was able to reorganize the arrangement of rings on the armillary sphere so that the ecliptic ring could be pegged on to the equator at any point desired. The Chinese astronomer and mathematician Li Chunfeng (李淳風) of the Tang dynasty created one in 633 AD with three spherical layers to calibrate multiple aspects of astronomical observations, calling them 'nests' (chhung). He was also responsible for proposing a plan of having a sighting tube mounted ecliptically in order for the better observation of celestial latitudes. However, it was the Tang Chinese astronomer, mathematician, and monk Yi Xing in the next century who would accomplish this addition to the model of the armillary sphere. Ecliptical mountings of this sort were found on the armillary instruments of Zhou Cong and Shu Yijian in 1050, as well as Shen Kuo's armillary sphere of the later 11th century, but after that point they were no longer employed on Chinese armillary instruments until the arrival of the European Jesuits.
In 723 AD, Yi Xing (一行) and government official Liang Ling-zan (梁令瓚) combined Zhang Heng's water powered celestial globe with an escapement device. With drums hit every quarter-hour and bells rung automatically every full hour, the device was also a striking clock. The famous clock tower that the Chinese polymath Su Song built by 1094 during the Song dynasty would employ Yi Xing's escapement with waterwheel scoops filled by clepsydra drip, and powered a crowning armillary sphere, a central celestial globe, and mechanically operated manikins that would exit mechanically opened doors of the clock tower at specific times to ring bells and gongs to announce the time, or to hold plaques announcing special times of the day. There was also the scientist and statesman Shen Kuo (1031–1095). Being the head official for the Bureau of Astronomy, Shen Kuo was an avid scholar of astronomy, and improved the designs of several astronomical instruments: the gnomon, armillary sphere, clepsydra clock, and sighting tube fixed to observe the pole star indefinitely. When Jamal al-Din of Bukhara was asked to set up an 'Islamic Astronomical Institution' in Khubilai Khan's new capital during the Yuan dynasty, he commissioned a number of astronomical instruments, including an armillary sphere. It was noted that "Chinese astronomers had been building [them] since at least 1092".
Indian Subcontinent
The armillary sphere was used for observation in India since early times, and finds mention in the works of Āryabhata (476 CE). The Goladīpikā—a detailed treatise dealing with globes and the armillary sphere was composed between 1380 and 1460 CE by Parameśvara. On the subject of the usage of the armillary sphere in India, Ōhashi (2008) writes: "The Indian armillary sphere (gola-yantra) was based on equatorial coordinates, unlike the Greek armillary sphere, which was based on ecliptical coordinates, although the Indian armillary sphere also had an ecliptical hoop. Probably, the celestial coordinates of the junction stars of the lunar mansions were determined by the armillary sphere since the seventh century or so."
Hellenistic world and ancient Rome
The Greek astronomer Hipparchus (c. 190 – c. 120 BC) credited Eratosthenes (276–194 BC) as the inventor of the armillary sphere. Names of this device in Greek include astrolabos and krikōtē sphaira "ringed sphere". The English name of this device comes ultimately from the Latin armilla (circle, bracelet), since it has a skeleton made of graduated metal circles linking the poles and representing the equator, the ecliptic, meridians and parallels. Usually a ball representing the Earth or, later, the Sun is placed in its center. It is used to demonstrate the motion of the stars around the Earth. Before the advent of the European telescope in the 17th century, the armillary sphere was the prime instrument of all astronomers in determining celestial positions.
In its simplest form, consisting of a ring fixed in the plane of the equator, the armilla is one of the most ancient of astronomical instruments. Slightly developed, it was crossed by another ring fixed in the plane of the meridian. The first was an equinoctial, the second a solstitial armilla. Shadows were used as indices of the sun's positions, in combinations with angular divisions. When several rings or circles were combined representing the great circles of the heavens, the instrument became an armillary sphere.
Armillary spheres were developed by the Hellenistic Greeks and were used as teaching tools already in the 3rd century BC. In larger and more precise forms they were also used as observational instruments. However, the fully developed armillary sphere with nine circles perhaps did not exist until the mid-2nd century AD, during the Roman Empire. Eratosthenes most probably used a solstitial armilla for measuring the obliquity of the ecliptic. Hipparchus probably used an armillary sphere of four rings. The Greco-Roman geographer and astronomer Ptolemy (c. 100 – c. 170 AD) describes his instrument, the astrolabon, in his Almagest. It consisted of at least three rings, with a graduated circle inside of which another could slide, carrying two small tubes positioned opposite each other and supported by a vertical plumb-line.
Medieval Middle East and Europe
Persian and Arab astronomers such as Ibrahim al-Fazari and Abbas Ibn Firnas continued to build and improve on armillary spheres. The spherical astrolabe, a variation of both the astrolabe and the armillary sphere, was likely invented during the Middle Ages in the Middle East. About 550 AD, Christian philosopher John Philoponus wrote a treatise on the astrolabe in Greek, which is the earliest extant treatise on the instrument. The earliest description of the spherical astrolabe dates back to the Persian astronomer Nayrizi (fl. 892–902). Pope Sylvester II applied the use of sighting tubes with his armillary sphere in order to fix the position of the pole star and record measurements for the tropics and equator, and used armillary spheres as a teaching device.
Korea
Chinese ideas of astronomy and astronomical instruments were introduced to Korea, where further advancements were also made. Jang Yeong-sil, a Korean inventor, was ordered by King Sejong the Great of Joseon to build an armillary sphere. The sphere, built in 1433, was named Honcheonui (혼천의,渾天儀).
The Honcheonsigye, an armillary sphere activated by a working clock mechanism, was built by the Korean astronomer Song Iyeong in 1669. It is the only remaining astronomical clock from the Joseon dynasty. The mechanism of its armillary sphere succeeded those of the Sejong era's armillary sphere (Honŭi 渾儀, 1435) and celestial sphere (Honsang 渾象, 1435), and the sun-carriage apparatus of the Jade Clepsydra (Ongnu 玉漏, 1438). Such mechanisms are similar to that of Ch'oe Yu-ji's (崔攸之, 1603–1673) armillary sphere (1657). The structure of the time-going train and the striking-release mechanism in the clock section were influenced by the crown escapement, which had been developed from the 14th century onward and applied to gear systems improved up to the middle of the 17th century in Western-style clockwork. In particular, the timing device of Song I-yŏng's armillary clock adopts the early-17th-century pendulum clock system, which remarkably improved its accuracy.
Renaissance
Further advances in this instrument were made by Danish astronomer Tycho Brahe (1546–1601), who constructed three large armillary spheres which he used for highly precise measurements of the positions of the stars and planets. They were described in his Astronomiae Instauratae Mechanica.
Armillary spheres were among the first complex mechanical devices. Their development led to many improvements in techniques and design of all mechanical devices. Renaissance scientists and public figures often had their portraits painted showing them with one hand on an armillary sphere, which represented the zenith of wisdom and knowledge.
The armillary sphere survives as useful for teaching, and may be described as a skeleton celestial globe, the series of rings representing the great circles of the heavens, and revolving on an axis within a horizon. With the earth as center such a sphere is known as Ptolemaic; with the sun as center, as Copernican.
A representation of an armillary sphere is present in the modern flag of Portugal and has been a national symbol since the reign of Manuel I.
Paralympic Games
An artwork-based model of an armillary sphere has been used since 1 March 2014 to light the Paralympic heritage flame at Stoke Mandeville Stadium, United Kingdom. The sphere includes a wheelchair that the user can rotate to spark the flame as part of a ceremony to celebrate the past, present and future of the Paralympic Movement in the UK. The armillary sphere was created by artist Jon Bausor and will be used for future Heritage Flame events. The flame in the first-ever ceremony was lit by London 2012 gold medallist Hannah Cockroft.
Heraldry and vexillology
The armillary sphere is commonly used in heraldry and vexillology, being mainly known as a symbol associated with Portugal, the Portuguese Empire and the Portuguese discoveries.
At the end of the 15th century, the armillary sphere became the personal heraldic badge of the future King Manuel I of Portugal, when he was still a Prince. The intense use of this badge in documents, monuments, flags and other media, during the reign of Manuel I, transformed the armillary sphere from a simple personal symbol to a national one that represented the Kingdom of Portugal and in particular its Overseas Empire. As a national symbol, the armillary sphere continued in use after the death of Manuel I.
In the 17th century, it became associated with the Portuguese dominion of Brazil. In 1815, when Brazil gained the status of kingdom united with that of Portugal, its coat of arms was formalized as a golden armillary sphere in a blue field. Representing Brazil, the armillary sphere became also present in the arms and the flag of the United Kingdom of Portugal, Brazil and the Algarves. When Brazil became independent as an empire in 1822, the armillary sphere continued to be present in its national arms and in its national flag. The celestial sphere of the present Flag of Brazil replaced the armillary sphere in 1889.
The armillary sphere was reintroduced in the national arms and in the national Flag of Portugal in 1911.
See also
De sphaera mundi, which describes the late medieval (Ptolemaic) cosmos
, a free-standing
– largest in the world
References
Sources
Encyclopædia Britannica (1771), "Geography".
Darlington, Oscar G. "Gerbert, the Teacher," The American Historical Review (Volume 52, Number 3, 1947): 456–476.
Kern, Ralf: Wissenschaftliche Instrumente in ihrer Zeit. Vom 15. – 19. Jahrhundert. Verlag der Buchhandlung Walther König 2010,
Needham, Joseph (1986). Science and Civilization in China: Volume 3. Taipei: Caves Books, Ltd.
Sivin, Nathan (1995). Science in Ancient China. Brookfield, Vermont: VARIORUM, Ashgate Publishing
Williams, Henry Smith (2004). A History Of Science. Whitefish, MT: Kessinger Publishing. .
External links
Starry Messenger
Armillary Spheres and Teaching Astronomy | Whipple Museum
AstroMedia* Verlag in Germany offers a cardboard construction kit for an armillary sphere ("Das Kleine Tischplanetarium")
Ancient Greek astronomy
Ancient inventions
Astronomical instruments
Chinese inventions
Danish inventions
Greek inventions
Sphere
Heraldic charges
Historical scientific instruments
Iranian inventions
Korean inventions | Armillary sphere | [
"Astronomy"
] | 4,686 | [
"Astronomical instruments"
] |
174,412 | https://en.wikipedia.org/wiki/Birefringence | Birefringence means double refraction. It is the optical property of a material having a refractive index that depends on the polarization and propagation direction of light. These optically anisotropic materials are described as birefringent or birefractive. The birefringence is often quantified as the maximum difference between refractive indices exhibited by the material. Crystals with non-cubic crystal structures are often birefringent, as are plastics under mechanical stress.
Birefringence is responsible for the phenomenon of double refraction whereby a ray of light, when incident upon a birefringent material, is split by polarization into two rays taking slightly different paths. This effect was first described by Danish scientist Rasmus Bartholin in 1669, who observed it in Iceland spar (calcite) crystals which have one of the strongest birefringences. In the 19th century Augustin-Jean Fresnel described the phenomenon in terms of polarization, understanding light as a wave with field components in transverse polarization (perpendicular to the direction of the wave vector).
Explanation
A mathematical description of wave propagation in a birefringent medium is presented below. Following is a qualitative explanation of the phenomenon.
Uniaxial materials
The simplest type of birefringence is described as uniaxial, meaning that there is a single direction governing the optical anisotropy whereby all directions perpendicular to it (or at a given angle to it) are optically equivalent. Thus rotating the material around this axis does not change its optical behaviour. This special direction is known as the optic axis of the material. Light propagating parallel to the optic axis (whose polarization is always perpendicular to the optic axis) is governed by a refractive index nₒ (for "ordinary") regardless of its specific polarization. For rays with any other propagation direction, there is one linear polarization that is perpendicular to the optic axis, and a ray with that polarization is called an ordinary ray and is governed by the same refractive index value nₒ. For a ray propagating in the same direction but with a polarization perpendicular to that of the ordinary ray, the polarization direction will be partly in the direction of (parallel to) the optic axis, and this extraordinary ray will be governed by a different, direction-dependent refractive index. Because the index of refraction depends on the polarization when unpolarized light enters a uniaxial birefringent material, it is split into two beams travelling in different directions, one having the polarization of the ordinary ray and the other the polarization of the extraordinary ray.
The ordinary ray will always experience a refractive index of nₒ, whereas the refractive index of the extraordinary ray will be in between nₒ and nₑ, depending on the ray direction as described by the index ellipsoid. The magnitude of the difference is quantified by the birefringence Δn = nₑ − nₒ.
The propagation (as well as reflection coefficient) of the ordinary ray is simply described by nₒ as if there were no birefringence involved. The extraordinary ray, as its name suggests, propagates unlike any wave in an isotropic optical material. Its refraction (and reflection) at a surface can be understood using the effective refractive index (a value in between nₒ and nₑ). Its power flow (given by the Poynting vector) is not exactly in the direction of the wave vector. This causes an additional shift in that beam, even when launched at normal incidence, as is popularly observed using a crystal of calcite as photographed above. Rotating the calcite crystal will cause one of the two images, that of the extraordinary ray, to rotate slightly around that of the ordinary ray, which remains fixed.
When the light propagates either along or orthogonal to the optic axis, such a lateral shift does not occur. In the first case, both polarizations are perpendicular to the optic axis and see the same effective refractive index, so there is no extraordinary ray. In the second case the extraordinary ray propagates at a different phase velocity (corresponding to nₑ) but still has the power flow in the direction of the wave vector. A crystal with its optic axis in this orientation, parallel to the optical surface, may be used to create a waveplate, in which there is no distortion of the image but an intentional modification of the state of polarization of the incident wave. For instance, a quarter-wave plate is commonly used to create circular polarization from a linearly polarized source.
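The direction dependence of the extraordinary index follows the standard index-ellipsoid relation 1/n(θ)² = cos²θ/nₒ² + sin²θ/nₑ², where θ is the angle between the wave vector and the optic axis. A minimal sketch evaluating it with the commonly quoted indices for calcite at about 590 nm:

```python
import math

n_o, n_e = 1.658, 1.486   # calcite near 590 nm (negative uniaxial, n_e < n_o)

def n_extraordinary(theta_deg):
    """Effective index of the extraordinary wave at angle theta to the optic axis."""
    theta = math.radians(theta_deg)
    inv_n_sq = (math.cos(theta) / n_o) ** 2 + (math.sin(theta) / n_e) ** 2
    return inv_n_sq ** -0.5

for theta in (0, 30, 60, 90):
    print(f"theta = {theta:2d} deg -> n = {n_extraordinary(theta):.4f}")
# theta = 0 reproduces n_o (no birefringence along the optic axis);
# theta = 90 gives the full extraordinary index n_e.
```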
Biaxial materials
The case of so-called biaxial crystals is substantially more complex. These are characterized by three refractive indices corresponding to three principal axes of the crystal. For most ray directions, both polarizations would be classified as extraordinary rays but with different effective refractive indices. Being extraordinary waves, the direction of power flow is not identical to the direction of the wave vector in either case.
The two refractive indices can be determined using the index ellipsoids for given directions of the polarization. Note that for biaxial crystals the index ellipsoid will not be an ellipsoid of revolution ("spheroid") but is described by three unequal principal refractive indices n_α, n_β and n_γ. Thus there is no axis around which a rotation leaves the optical properties invariant (as there is with uniaxial crystals whose index ellipsoid is a spheroid).
Although there is no axis of symmetry, there are two optical axes or binormals which are defined as directions along which light may propagate without birefringence, i.e., directions along which the wavelength is independent of polarization. For this reason, birefringent materials with three distinct refractive indices are called biaxial. Additionally, there are two distinct axes known as optical ray axes or biradials along which the group velocity of the light is independent of polarization.
Double refraction
When an arbitrary beam of light strikes the surface of a birefringent material at non-normal incidence, the polarization component normal to the optic axis (ordinary ray) and the other linear polarization (extraordinary ray) will be refracted toward somewhat different paths. Natural light, so-called unpolarized light, consists of equal amounts of energy in any two orthogonal polarizations. Even linearly polarized light has some energy in both polarizations, unless aligned along one of the two axes of birefringence. According to Snell's law of refraction, the two angles of refraction are governed by the effective refractive index of each of these two polarizations. This is clearly seen, for instance, in the Wollaston prism which separates incoming light into two linear polarizations using prisms composed of a birefringent material such as calcite.
The different angles of refraction for the two polarization components are shown in the figure at the top of this page, with the optic axis along the surface (and perpendicular to the plane of incidence), so that the angle of refraction is different for the p polarization (the "ordinary ray" in this case, having its electric vector perpendicular to the optic axis) and the s polarization (the "extraordinary ray" in this case, whose electric field polarization includes a component in the direction of the optic axis). In addition, a distinct form of double refraction occurs, even with normal incidence, in cases where the optic axis is not along the refracting surface (nor exactly normal to it); in this case, the dielectric polarization of the birefringent material is not exactly in the direction of the wave's electric field for the extraordinary ray. The direction of power flow (given by the Poynting vector) for this inhomogeneous wave is at a finite angle from the direction of the wave vector, resulting in an additional separation between these beams. So even in the case of normal incidence, where one would compute the angle of refraction as zero (according to Snell's law, regardless of the effective index of refraction), the energy of the extraordinary ray is propagated at an angle. If exiting the crystal through a face parallel to the incoming face, the direction of both rays will be restored, but leaving a shift between the two beams. This is commonly observed using a piece of calcite cut along its natural cleavage, placed above a paper with writing, as in the above photographs. On the contrary, waveplates specifically have their optic axis along the surface of the plate, so that with (approximately) normal incidence there will be no shift in the image from light of either polarization, simply a relative phase shift between the two light waves.
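For the simple geometry above, with the optic axis in the surface and perpendicular to the plane of incidence, both rays obey Snell's law with their own indices, so the angular split can be computed directly; a sketch reusing the calcite values from the earlier example:

```python
import math

n_o, n_e = 1.658, 1.486   # calcite near 590 nm

def refraction_angles(theta_incident_deg, n_incident=1.0):
    """Refraction angles (degrees) of the ordinary and extraordinary rays.

    Valid only for the geometry where the optic axis lies in the surface and
    is perpendicular to the plane of incidence, so each ray follows Snell's
    law with a fixed index (n_o for p polarization, n_e for s polarization).
    """
    s = n_incident * math.sin(math.radians(theta_incident_deg))
    ordinary = math.degrees(math.asin(s / n_o))
    extraordinary = math.degrees(math.asin(s / n_e))
    return ordinary, extraordinary

o_ang, e_ang = refraction_angles(45.0)
print(f"ordinary: {o_ang:.2f} deg, extraordinary: {e_ang:.2f} deg, "
      f"split: {e_ang - o_ang:.2f} deg")   # about a 3-degree split at 45 deg
```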
Terminology
Much of the work involving polarization preceded the understanding of light as a transverse electromagnetic wave, and this has affected some terminology in use. Isotropic materials have symmetry in all directions and the refractive index is the same for any polarization direction. An anisotropic material is called "birefringent" because it will generally refract a single incoming ray in two directions, which we now understand correspond to the two different polarizations. This is true of either a uniaxial or biaxial material.
In a uniaxial material, one ray behaves according to the normal law of refraction (corresponding to the ordinary refractive index), so an incoming ray at normal incidence remains normal to the refracting surface. As explained above, the other polarization can deviate from normal incidence, which cannot be described using the law of refraction. This thus became known as the extraordinary ray. The terms "ordinary" and "extraordinary" are still applied to the polarization components perpendicular to and not perpendicular to the optic axis respectively, even in cases where no double refraction is involved.
A material is termed uniaxial when it has a single direction of symmetry in its optical behavior, which we term the optic axis. It also happens to be the axis of symmetry of the index ellipsoid (a spheroid in this case). The index ellipsoid could still be described according to the refractive indices, n_x, n_y and n_z, along three coordinate axes; in this case two are equal. So if n_x = n_y = nₒ, corresponding to the x and y axes, then the extraordinary index is nₑ = n_z, corresponding to the z axis, which is also called the optic axis in this case.
Materials in which all three refractive indices are different are termed biaxial and the origin of this term is more complicated and frequently misunderstood. In a uniaxial crystal, different polarization components of a beam will travel at different phase velocities, except for rays in the direction of what we call the optic axis. Thus the optic axis has the particular property that rays in that direction do not exhibit birefringence, with all polarizations in such a beam experiencing the same index of refraction. It is very different when the three principal refractive indices are all different; then an incoming ray in any of those principal directions will still encounter two different refractive indices. But it turns out that there are two special directions (at an angle to all of the 3 axes) where the refractive indices for different polarizations are again equal. For this reason, these crystals were designated as biaxial, with the two "axes" in this case referring to ray directions in which propagation does not experience birefringence.
Fast and slow rays
In a birefringent material, a wave consists of two polarization components which generally are governed by different effective refractive indices. The so-called slow ray is the component for which the material has the higher effective refractive index (slower phase velocity), while the fast ray is the one with a lower effective refractive index. When a beam is incident on such a material from air (or any material with a lower refractive index), the slow ray is thus refracted more towards the normal than the fast ray. In the example figure at the top of this page, it can be seen that the refracted ray with s polarization (with its electric vibration along the direction of the optic axis, thus called the extraordinary ray) is the slow ray in the given scenario.
Using a thin slab of that material at normal incidence, one would implement a waveplate. In this case, there is essentially no spatial separation between the polarizations; rather, the phase of the wave in the parallel polarization (the slow ray) is retarded with respect to the perpendicular polarization. These directions are thus known as the slow axis and fast axis of the waveplate.
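A back-of-the-envelope sketch of the retardation just described: the thickness at which a plate delays the slow ray by a quarter or half wavelength relative to the fast ray. The quartz indices below are assumed handbook values near 590 nm.

```python
# Zero-order waveplate thickness: d = wavelength / (m * |delta_n|),
# with m = 4 for a quarter-wave plate and m = 2 for a half-wave plate.
wavelength = 590e-9            # vacuum wavelength, m
delta_n = 1.5534 - 1.5443      # birefringence of crystalline quartz (assumed)

print(f"quarter-wave plate: {wavelength / (4*abs(delta_n)) * 1e6:.1f} um")
print(f"half-wave plate:    {wavelength / (2*abs(delta_n)) * 1e6:.1f} um")
# ~16 um and ~32 um: so thin that practical plates are often made thicker
# ("multiple-order"), adding whole extra wavelengths of retardation.
```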
Positive or negative
Uniaxial birefringence is classified as positive when the extraordinary index of refraction ne is greater than the ordinary index no. Negative birefringence means that Δn = ne − no is less than zero. In other words, the polarization of the fast (or slow) wave is perpendicular to the optic axis when the birefringence of the crystal is positive (or negative, respectively). In the case of biaxial crystals, all three of the principal axes have different refractive indices, so this designation does not apply. But for any defined ray direction one can just as well designate the fast and slow ray polarizations.
Sources of optical birefringence
While the best-known source of birefringence is the entrance of light into an anisotropic crystal, it can arise in otherwise optically isotropic materials in a few ways:
Stress birefringence results when a normally isotropic solid is stressed and deformed (i.e., stretched or bent) causing a loss of physical isotropy and consequently a loss of isotropy in the material's permittivity tensor;
Form birefringence, whereby structure elements such as rods, having one refractive index, are suspended in a medium with a different refractive index. When the lattice spacing is much smaller than a wavelength, such a structure is described as a metamaterial;
By the Pockels or Kerr effect, whereby an applied electric field induces birefringence due to nonlinear optics;
By the self or forced alignment into thin films of amphiphilic molecules such as lipids, some surfactants or liquid crystals;
Circular birefringence generally takes place not in materials which are anisotropic but rather in ones which are chiral. This can include liquids where there is an enantiomeric excess of a chiral molecule, that is, one that has stereoisomers;
By the Faraday effect, where a longitudinal magnetic field causes some materials to become circularly birefringent (having slightly different indices of refraction for left- and right-handed circular polarizations), similar to optical activity while the field is applied.
Common birefringent materials
The best characterized birefringent materials are crystals. Due to their specific crystal structures their refractive indices are well defined. Depending on the symmetry of a crystal structure (as determined by one of the 32 possible crystallographic point groups), crystals in that group may be forced to be isotropic (not birefringent), to have uniaxial symmetry, or neither, in which case it is a biaxial crystal. The crystal structures permitting uniaxial and biaxial birefringence are noted in the two tables, below, listing the two or three principal refractive indices (at wavelength 590 nm) of some better-known crystals.
In addition to induced birefringence while under stress, many plastics obtain permanent birefringence during manufacture due to stresses which are "frozen in" due to mechanical forces present when the plastic is molded or extruded. For example, ordinary cellophane is birefringent. Polarizers are routinely used to detect stress, either applied or frozen-in, in plastics such as polystyrene and polycarbonate.
Cotton fiber is birefringent because of high levels of cellulosic material in the fibre's secondary cell wall which is directionally aligned with the cotton fibers.
Polarized light microscopy is commonly used in biological tissue, as many biological materials are linearly or circularly birefringent. Collagen, found in cartilage, tendon, bone, corneas, and several other areas in the body, is birefringent and commonly studied with polarized light microscopy. Some proteins are also birefringent, exhibiting form birefringence.
Inevitable manufacturing imperfections in optical fiber lead to birefringence, which is one cause of pulse broadening in fiber-optic communications. Such imperfections can be geometrical (a lack of circular symmetry) or due to unequal lateral stress applied to the optical fibre. Birefringence is intentionally introduced (for instance, by making the cross-section elliptical) in order to produce polarization-maintaining optical fibers. Birefringence can be induced (or corrected) in optical fibers by bending them, which causes anisotropy in form and stress given the axis around which the fibre is bent and the radius of curvature.
In addition to anisotropy in the electric polarizability that we have been discussing, anisotropy in the magnetic permeability could be a source of birefringence. At optical frequencies, however, there is no measurable magnetic polarizability (μ = μ0) of natural materials, so this is not an actual source of birefringence.
Measurement
Birefringence and other polarization-based optical effects (such as optical rotation and linear or circular dichroism) can be observed by measuring any change in the polarization of light passing through the material. These measurements are known as polarimetry. Polarized light microscopes, which contain two polarizers that are at 90° to each other on either side of the sample, are used to visualize birefringence, since light that has not been affected by birefringence remains in a polarization that is totally rejected by the second polarizer ("analyzer"). The addition of quarter-wave plates permits examination using circularly polarized light. Determination of the change in polarization state using such an apparatus is the basis of ellipsometry, by which the optical properties of specular surfaces can be gauged through reflection.
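The crossed-polarizer arrangement described above can be made quantitative with the textbook formula I/I0 = sin²(2φ)·sin²(δ/2), where δ = 2π·d·Δn/λ is the sample's retardance and φ the angle between its fast axis and the first polarizer. A minimal sketch, with illustrative numbers of our own choosing:

```python
import math

def crossed_polarizer_transmission(d, delta_n, wavelength, phi_deg):
    """Fraction of intensity passing polarizer -> birefringent sample ->
    crossed analyzer; zero for an isotropic (delta_n = 0) sample."""
    delta = 2 * math.pi * d * delta_n / wavelength   # retardance, radians
    phi = math.radians(phi_deg)
    return math.sin(2 * phi)**2 * math.sin(delta / 2)**2

# A 20 um sample with delta_n = 0.009 viewed at 550 nm:
for phi in (0, 22.5, 45):
    T = crossed_polarizer_transmission(20e-6, 0.009, 550e-9, phi)
    print(f"fast axis at {phi:4.1f} deg -> transmission = {T:.3f}")
# Transmission is maximal with the sample axes at 45 deg to the polarizers
# and zero at 0 deg, which is why rotating the stage makes birefringent
# grains blink in and out under the polarized light microscope.
```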
Birefringence measurements have been made with phase-modulated systems for examining the transient flow behaviour of fluids. Birefringence of lipid bilayers can be measured using dual-polarization interferometry. This provides a measure of the degree of order within these fluid layers and how this order is disrupted when the layer interacts with other biomolecules.
For the 3D measurement of birefringence, a technique based on holographic tomography can be used.
Applications
Optical devices
Birefringence is used in many optical devices. Liquid-crystal displays, the most common sort of flat-panel display, cause their pixels to become lighter or darker through rotation of the polarization (circular birefringence) of linearly polarized light as viewed through a sheet polarizer at the screen's surface. Similarly, light modulators modulate the intensity of light through electrically induced birefringence of polarized light followed by a polarizer. The Lyot filter is a specialized narrowband spectral filter employing the wavelength dependence of birefringence. Waveplates are thin birefringent sheets widely used in certain optical equipment for modifying the polarization state of light passing through it.
To manufacture polarizers with high transmittance, birefringent crystals are used in devices such as the Glan–Thompson prism, Glan–Taylor prism and other variants. Layered birefringent polymer sheets can also be used for this purpose.
Birefringence also plays an important role in second-harmonic generation and other nonlinear optical processes. The crystals used for these purposes are almost always birefringent. By adjusting the angle of incidence, the effective refractive index of the extraordinary ray can be tuned in order to achieve phase matching, which is required for the efficient operation of these devices.
Medicine
Birefringence is utilized in medical diagnostics. One powerful accessory used with optical microscopes is a pair of crossed polarizing filters. Light from the source is polarized in the x direction after passing through the first polarizer, but above the specimen is a polarizer (a so-called analyzer) oriented in the y direction. Therefore, no light from the source will be accepted by the analyzer, and the field will appear dark. Areas of the sample possessing birefringence will generally couple some of the x-polarized light into the y polarization; these areas will then appear bright against the dark background. Modifications to this basic principle can differentiate between positive and negative birefringence.
For instance, needle aspiration of fluid from a gouty joint will reveal negatively birefringent monosodium urate crystals. Calcium pyrophosphate crystals, in contrast, show weak positive birefringence. Urate crystals appear yellow, and calcium pyrophosphate crystals appear blue when their long axes are aligned parallel to that of a red compensator filter, or a crystal of known birefringence is added to the sample for comparison.
The birefringence of tissue inside a living human thigh was measured using polarization-sensitive optical coherence tomography at 1310 nm and a single mode fiber in a needle. Skeletal muscle birefringence was Δn = 1.79 × 10−3 ± 0.18×10−3, adipose Δn = 0.07 × 10−3 ± 0.50 × 10−3, superficial aponeurosis Δn = 5.08 × 10−3 ± 0.73 × 10−3 and interstitial tissue Δn = 0.65 × 10−3 ±0.39 × 10−3. These measurements may be important for the development of a less invasive method to diagnose Duchenne muscular dystrophy.
Birefringence can be observed in amyloid plaques such as are found in the brains of Alzheimer's patients when stained with a dye such as Congo Red. Modified proteins such as immunoglobulin light chains abnormally accumulate between cells, forming fibrils. Multiple folds of these fibers line up and take on a beta-pleated sheet conformation. Congo red dye intercalates between the folds and, when observed under polarized light, causes birefringence.
In ophthalmology, binocular retinal birefringence screening of the Henle fibers (photoreceptor axons that go radially outward from the fovea) provides a reliable detection of strabismus and possibly also of anisometropic amblyopia. In healthy subjects, the maximum retardation induced by the Henle fiber layer is approximately 22 degrees at 840 nm. Furthermore, scanning laser polarimetry uses the birefringence of the optic nerve fiber layer to indirectly quantify its thickness, which is of use in the assessment and monitoring of glaucoma. Polarization-sensitive optical coherence tomography measurements obtained from healthy human subjects have demonstrated a change in birefringence of the retinal nerve fiber layer as a function of location around the optic nerve head. The same technology was recently applied in the living human retina to quantify the polarization properties of vessel walls near the optic nerve. While retinal vessel walls become thicker and less birefringent in patients who suffer from hypertension, hinting at a decrease in vessel wall condition, the vessel walls of diabetic patients do not experience a change in thickness, but do see an increase in birefringence, presumably due to fibrosis or inflammation.
Birefringence characteristics in sperm heads allow the selection of spermatozoa for intracytoplasmic sperm injection. Likewise, zona imaging uses birefringence on oocytes to select the ones with highest chances of successful pregnancy. Birefringence of particles biopsied from pulmonary nodules indicates silicosis.
Dermatologists use dermatoscopes to view skin lesions. Dermoscopes use polarized light, allowing the user to view crystalline structures corresponding to dermal collagen in the skin. These structures may appear as shiny white lines or rosette shapes and are only visible under polarized dermoscopy.
Stress-induced birefringence
Isotropic solids do not exhibit birefringence, but when they are under mechanical stress, birefringence results. The stress can be applied externally or be "frozen in" after a birefringent plastic part has cooled following injection molding. When such a sample is placed between two crossed polarizers, colour patterns can be observed, because the polarization of a light ray is rotated after passing through a birefringent material and the amount of rotation depends on wavelength. The experimental method called photoelasticity, used for analyzing stress distribution in solids, is based on the same principle. There has been recent research on using stress-induced birefringence in a glass plate to generate an optical vortex and full Poincaré beams (optical beams that have every possible polarization state across a cross-section).
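A sketch of the stress-optic law underlying photoelasticity: Δn = C·(σ1 − σ2), giving a fringe order N = Δn·d/λ between crossed polarizers. The coefficient below is an assumed illustrative value of the order reported for polycarbonate, not a measurement:

```python
C = 8.0e-11          # assumed stress-optic coefficient, 1/Pa
d = 5e-3             # specimen thickness, m
wavelength = 550e-9  # m

for stress_diff_MPa in (1, 5, 10):
    delta_n = C * stress_diff_MPa * 1e6       # induced birefringence
    N = delta_n * d / wavelength              # photoelastic fringe order
    print(f"sigma1 - sigma2 = {stress_diff_MPa:2d} MPa: "
          f"delta_n = {delta_n:.1e}, fringe order N = {N:.1f}")
# Counting fringes of known order therefore maps the in-plane principal
# stress difference across the part.
```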
Other cases of birefringence
Birefringence is observed in anisotropic elastic materials. In these materials, the two polarizations split according to their effective refractive indices, which are also sensitive to stress.
The study of birefringence in shear waves traveling through the solid Earth (the Earth's liquid core does not support shear waves) is widely used in seismology.
Birefringence is widely used in mineralogy to identify rocks, minerals, and gemstones.
Theory
In an isotropic medium (including free space) the so-called electric displacement D is just proportional to the electric field E according to D = εE, where the material's permittivity ε is just a scalar (and equal to n²ε0, where n is the index of refraction). In an anisotropic material exhibiting birefringence, the relationship between D and E must now be described using a tensor equation:

D = εE

where ε is now a 3 × 3 permittivity tensor. We assume linearity and no magnetic permeability in the medium: μ = μ0. The electric field of a plane wave of angular frequency ω can be written in the general form:

E = E0 exp[i(k·r − ωt)]

where r is the position vector, t is time, and E0 is a vector describing the electric field at r = 0, t = 0. Then we shall find the possible wave vectors k. By combining Maxwell's equations for ∇ × E and ∇ × H, we can eliminate H = B/μ0 to obtain:

−∇ × (∇ × E) = μ0 ∂²D/∂t²

With no free charges, Maxwell's equation for the divergence of D vanishes:

∇ · D = 0

We can apply the vector identity ∇ × (∇ × A) = ∇(∇ · A) − ∇²A to the left-hand side, and use the spatial dependence in which each differentiation in x (for instance) results in multiplication by ikx, to find:

−∇ × (∇ × E) = k(k · E) − |k|²E

The right-hand side can be expressed in terms of E through application of the permittivity tensor ε and noting that differentiation in time results in multiplication by −iω; the wave equation then becomes:

|k|²E − k(k · E) = ω²μ0 εE

Applying the same differentiation rule to the divergence equation, we find:

k · D = 0
This indicates that D is orthogonal to the direction of the wavevector k, even though that is no longer generally true for E, as it would be in an isotropic medium. This relation will not be needed for the further steps in the following derivation.
Finding the allowed values of k for a given ω is easiest done by using Cartesian coordinates with the x, y and z axes chosen in the directions of the symmetry axes of the crystal (or simply choosing z in the direction of the optic axis of a uniaxial crystal), resulting in a diagonal matrix for the permittivity tensor ε:

ε = ε0 diag(nx², ny², nz²)

where the diagonal values are the squares of the refractive indices for polarizations along the three principal axes x, y and z. With ε in this form, and substituting in the speed of light c using c² = 1/(μ0ε0), the x component of the vector wave equation becomes

(−ky² − kz² + ω²nx²/c²)Ex + kxkyEy + kxkzEz = 0

where Ex, Ey, Ez are the components of E (at any given position in space and time) and kx, ky, kz are the components of k. Rearranging, we can write (and similarly for the y and z components)

(kx² + ky² + kz² − ω²nx²/c²)Ex = kx(kxEx + kyEy + kzEz)

This is a set of linear equations in Ex, Ey, Ez, so it can have a nontrivial solution (that is, one other than E = 0) as long as the following determinant is zero:

| ω²nx²/c² − ky² − kz²    kxky                     kxkz                  |
| kxky                    ω²ny²/c² − kx² − kz²     kykz                  | = 0
| kxkz                    kykz                     ω²nz²/c² − kx² − ky²  |

Evaluating the determinant and rearranging the terms according to the powers of ω²/c², the constant terms cancel. After eliminating the common factor ω²/c² from the remaining terms, we obtain

(ω²/c²)² nx²ny²nz² − (ω²/c²)[kx²nx²(ny² + nz²) + ky²ny²(nx² + nz²) + kz²nz²(nx² + ny²)] + (kx²nx² + ky²ny² + kz²nz²)(kx² + ky² + kz²) = 0

In the case of a uniaxial material, choosing the optic axis to be in the z direction so that nx = ny = no and nz = ne, this expression can be factored into

((kx² + ky² + kz²)/no² − ω²/c²) ((kx² + ky²)/ne² + kz²/no² − ω²/c²) = 0
Setting either of the factors to zero will define an ellipsoidal surface in the space of wavevectors k that are allowed for a given ω. The first factor being zero defines a sphere; this is the solution for so-called ordinary rays, in which the effective refractive index is exactly no regardless of the direction of k. The second defines a spheroid symmetric about the z axis. This solution corresponds to the so-called extraordinary rays, in which the effective refractive index is between no and ne, depending on the direction of k. Therefore, for any arbitrary direction of propagation (other than in the direction of the optic axis), two distinct wavevectors are allowed, corresponding to the polarizations of the ordinary and extraordinary rays.
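The two factors translate directly into code. A sketch computing the two allowed effective indices for a wave whose vector k makes angle θ with the optic axis, with calcite's handbook indices as sample inputs:

```python
import math

def effective_indices(theta_deg, n_o, n_e):
    """Effective refractive indices of the two allowed waves, read off from
    the sphere (ordinary) and spheroid (extraordinary) factors above."""
    theta = math.radians(theta_deg)
    n_ord = n_o                                  # sphere: direction-independent
    n_ext = 1.0 / math.sqrt(math.cos(theta)**2 / n_o**2 +
                            math.sin(theta)**2 / n_e**2)   # spheroid
    return n_ord, n_ext

for theta in (0, 45, 90):
    n1, n2 = effective_indices(theta, 1.658, 1.486)  # calcite near 590 nm
    print(f"theta = {theta:2d} deg: n_ord = {n1:.3f}, n_ext = {n2:.3f}")
# At theta = 0 the two indices coincide (no birefringence along the optic
# axis); at theta = 90 deg the extraordinary index reaches n_e.
```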
For a biaxial material a similar but more complicated condition on the two waves can be described; the locus of allowed k vectors (the wavevector surface) is a 4th-degree two-sheeted surface, so that in a given direction there are generally two permitted k vectors (and their opposites). By inspection one can see that the determinant condition is generally satisfied for two positive values of ω. Or, for a specified optical frequency ω and direction normal to the wavefronts k/|k|, it is satisfied for two wavenumbers (or propagation constants) |k| (and thus effective refractive indices) corresponding to the propagation of two linear polarizations in that direction.

When those two propagation constants are equal then the effective refractive index is independent of polarization, and there is consequently no birefringence encountered by a wave traveling in that particular direction. For a uniaxial crystal, this is the optic axis, the ±z direction according to the above construction. But when all three refractive indices (or permittivities) nx, ny and nz are distinct, it can be shown that there are exactly two such directions, where the two sheets of the wave-vector surface touch; these directions are not at all obvious and do not lie along any of the three principal axes (x, y, z according to the above convention). Historically that accounts for the use of the term "biaxial" for such crystals, as the existence of exactly two such special directions (considered "axes") was discovered well before polarization and birefringence were understood physically. These two special directions are generally not of particular interest; biaxial crystals are rather specified by their three refractive indices corresponding to the three axes of symmetry.

A general state of polarization launched into the medium can always be decomposed into two waves, one in each of those two polarizations, which will then propagate with different wavenumbers |k|. Applying the different phase of propagation to those two waves over a specified propagation distance will result in a generally different net polarization state at that point; this is the principle of the waveplate, for instance. With a waveplate, there is no spatial displacement between the two rays, as their k vectors are still in the same direction. That is true when each of the two polarizations is either normal to the optic axis (the ordinary ray) or parallel to it (the extraordinary ray).

In the more general case, there is a difference not only in the magnitude but in the direction of the two rays. For instance, the photograph through a calcite crystal (top of page) shows a shifted image in the two polarizations; this is due to the optic axis being neither parallel nor normal to the crystal surface. And even when the optic axis is parallel to the surface, this will occur for waves launched at non-normal incidence (as depicted in the explanatory figure). In these cases the two k vectors can be found by solving the wave-vector condition constrained by the boundary condition, which requires that the components of the two transmitted waves' k vectors, and of the incident wave's k vector, as projected onto the surface of the interface, must all be identical. For a uniaxial crystal it will be found that there is no spatial shift for the ordinary ray (hence its name), which refracts as if the material were non-birefringent, with an index the same as that of the two axes which are not the optic axis. For a biaxial crystal neither ray is deemed "ordinary" nor would generally be refracted according to a refractive index equal to one of the principal axes.
See also
Cotton–Mouton effect
Crystal optics
Dichroism
Iceland spar
Index ellipsoid
John Kerr
Optical rotation
Periodic poling
Pleochroism
Huygens principle of double refraction
Notes
References
Bibliography
M. Born and E. Wolf, Principles of Optics, 7th ed., Cambridge University Press, 1999 (reprinted with corrections, 2002).
A. Fresnel, 1827, "Mémoire sur la double réfraction", Mémoires de l'Académie Royale des Sciences de l'Institut de France, vol. (for 1824, printed 1827), pp.45–176; reprinted as "Second mémoire..." in Fresnel, 1866–70, vol. 2, pp.479–596; translated by A.W. Hobson as "Memoir on double refraction", in R.Taylor (ed.), Scientific Memoirs, vol. (London: Taylor & Francis, 1852), pp.238–333. (Cited page numbers are from the translation.)
A. Fresnel (ed. H. de Sénarmont, E. Verdet, and L. Fresnel), 1866–70, Oeuvres complètes d'Augustin Fresnel (3 volumes), Paris: Imprimerie Impériale; vol. 1 (1866), vol. 2 (1868), vol. 3 (1870).
External links
Stress Analysis Apparatus (based on Birefringence theory)
Video of stress birefringence in Polymethylmethacrylate (PMMA or Plexiglas).
Artist Austine Wood Comarow employs birefringence to create kinetic figurative images.
The Birefringence of Thin Ice (Tom Wagner, photographer)
Polarization (waves)
Optical mineralogy
Asymmetry | Birefringence | [
"Physics"
] | 7,191 | [
"Astrophysics",
"Polarization (waves)",
"Symmetry",
"Asymmetry"
] |
174,431 | https://en.wikipedia.org/wiki/Fiberglass | Fiberglass (American English) or fibreglass (Commonwealth English) is a common type of fiber-reinforced plastic using glass fiber. The fibers may be randomly arranged, flattened into a sheet called a chopped strand mat, or woven into glass cloth. The plastic matrix may be a thermoset polymer matrix—most often based on thermosetting polymers such as epoxy, polyester resin, or vinyl ester resin—or a thermoplastic.
Cheaper and more flexible than carbon fiber, it is stronger than many metals by weight, non-magnetic, non-conductive, transparent to electromagnetic radiation, can be molded into complex shapes, and is chemically inert under many circumstances. Applications include aircraft, boats, automobiles, bath tubs and enclosures, swimming pools, hot tubs, septic tanks, water tanks, roofing, pipes, cladding, orthopedic casts, surfboards, and external door skins.
Other common names for fiberglass are glass-reinforced plastic (GRP), glass-fiber reinforced plastic (GFRP) or GFK (from ). Because glass fiber itself is sometimes referred to as "fiberglass", the composite is also called fiberglass-reinforced plastic (FRP). This article uses "fiberglass" to refer to the complete fiber-reinforced composite material, rather than only to the glass fiber within it.
History
Glass fibers have been produced for centuries, but the earliest patent was awarded to the Prussian inventor Hermann Hammesfahr (1845–1914) in the U.S. in 1880.
Mass production of glass strands was accidentally discovered in 1932 when Games Slayter, a researcher at Owens-Illinois, directed a jet of compressed air at a stream of molten glass and produced fibers. A patent for this method of producing glass wool was first applied for in 1933. Owens joined with the Corning company in 1935 and the method was adapted by Owens Corning to produce its patented "Fiberglas" (spelled with one "s") in 1936. Originally, Fiberglas was a glass wool with fibers entrapping a great deal of gas, making it useful as an insulator, especially at high temperatures.
A suitable resin for combining the fiberglass with a plastic to produce a composite material was developed in 1936 by DuPont. The first ancestor of modern polyester resins is Cyanamid's resin of 1942. Peroxide curing systems were used by then. With the combination of fiberglass and resin the gas content of the material was replaced by plastic. This reduced the insulation properties to values typical of the plastic, but now for the first time, the composite showed great strength and promise as a structural and building material. Many glass fiber composites continued to be called "fiberglass" (as a generic name) and the name was also used for the low-density glass wool product containing gas instead of plastic.
Ray Greene of Owens Corning is credited with producing the first composite boat in 1937 but did not proceed further at the time because of the brittle nature of the plastic used. In 1939 Russia was reported to have constructed a passenger boat of plastic materials, and the United States a fuselage and wings of an aircraft. The first car to have a fiberglass body was a 1946 prototype of the Stout Scarab, but the model did not enter production.
Fiber
Unlike glass fibers used for insulation, for the final structure to be strong, the fiber's surfaces must be almost entirely free of defects, as this permits the fibers to reach gigapascal tensile strengths. If a bulk piece of glass were defect-free, it would be as strong as glass fibers; however, it is generally impractical to produce and maintain bulk material in a defect-free state outside of laboratory conditions.
Production
The manufacturing process for glass fibers suitable for reinforcement uses large furnaces to gradually melt the silica sand, limestone, kaolin clay, fluorspar, colemanite, dolomite and other minerals until a liquid forms. It is then extruded through bushings (spinnerets), which are bundles of very small orifices (typically 5–25 micrometres in diameter for E-glass, 9 micrometres for S-glass).
These filaments are then sized (coated) with a chemical solution. The individual filaments are now bundled in large numbers to provide a roving. The diameter of the filaments, and the number of filaments in the roving, determine its weight, typically expressed in one of two measurement systems:
yield, or yards per pound (the number of yards of fiber in one pound of material; thus a smaller number means a heavier roving). Examples of standard yields are 225yield, 450yield, 675yield.
tex, or grams per km (how many grams 1 km of roving weighs, inverted from yield; thus a smaller number means a lighter roving). Examples of standard tex are 750tex, 1100tex, 2200tex.
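Because the two systems are reciprocal, converting between them is a one-line calculation from the exact pound and yard definitions (the function names here are ours):

```python
LB_G, YD_M = 453.592, 0.9144   # grams per pound, metres per yard

def yield_to_tex(yards_per_pound):
    """tex (g/km) from yield (yd/lb)."""
    return LB_G * 1000.0 / (yards_per_pound * YD_M)

def tex_to_yield(tex):
    """yield (yd/lb) from tex (g/km)."""
    return LB_G * 1000.0 / (tex * YD_M)

for y in (225, 450, 675):
    print(f"{y} yield ~ {yield_to_tex(y):.0f} tex")
# 225 -> ~2205, 450 -> ~1102, 675 -> ~735: consistent with the nominal
# 2200/1100/750 tex grades listed above.
```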
These rovings are then either used directly in a composite application such as pultrusion, filament winding (pipe), gun roving (where an automated gun chops the glass into short lengths and drops it into a jet of resin, projected onto the surface of a mold), or in an intermediary step, to manufacture fabrics such as chopped strand mat (CSM) (made of randomly oriented small cut lengths of fiber all bonded together), woven fabrics, knit fabrics or unidirectional fabrics.
Chopped strand mat
Chopped strand mat (CSM) is a form of reinforcement used in fiberglass. It consists of glass fibers laid randomly across each other and held together by a binder. It is typically processed using the hand lay-up technique, where sheets of material are placed on a mold and brushed with resin. Because the binder dissolves in resin, the material easily conforms to different shapes when wetted out. After the resin cures, the hardened product can be taken from the mold and finished. Using chopped strand mat gives the fiberglass isotropic in-plane material properties.
Sizing
A coating or primer is applied to the roving to help protect the glass filaments for processing and manipulation and to ensure proper bonding to the resin matrix, thus allowing for the transfer of shear loads from the glass fibers to the thermoset plastic. Without this bonding, the fibers can 'slip' in the matrix causing localized failure.
Properties
An individual structural glass fiber is both stiff and strong in tension and compression—that is, along its axis. Although it might be assumed that the fiber is weak in compression, it is actually only the long aspect ratio of the fiber which makes it seem so; i.e., because a typical fiber is long and narrow, it buckles easily. On the other hand, the glass fiber is weak in shear—that is, across its axis. Therefore, if a collection of fibers can be arranged permanently in a preferred direction within a material, and if they can be prevented from buckling in compression, the material will be preferentially strong in that direction.
Furthermore, by laying multiple layers of fiber on top of one another, with each layer oriented in various preferred directions, the material's overall stiffness and strength can be efficiently controlled. In fiberglass, it is the plastic matrix which permanently constrains the structural glass fibers to directions chosen by the designer. With chopped strand mat, this directionality is essentially an entire two-dimensional plane; with woven fabrics or unidirectional layers, directionality of stiffness and strength can be more precisely controlled within the plane.
A fiberglass component is typically of a thin "shell" construction, sometimes filled on the inside with structural foam, as in the case of surfboards. The component may be of nearly arbitrary shape, limited only by the complexity and tolerances of the mold used for manufacturing the shell.
The mechanical functionality of materials is heavily reliant on the combined performances of both the resin (AKA matrix) and fibers. For example, in severe temperature conditions (over 180 °C), the resin component of the composite may lose its functionality, partially due to bond deterioration of resin and fiber. However, GFRPs can still show significant residual strength after experiencing high temperatures (200 °C).
One notable feature of fiberglass is that the resins used are subject to contraction during the curing process. For polyester this contraction is often 5–6%; for epoxy, about 2%. Because the fibers do not contract, this differential can create changes in the shape of the part during curing. Distortions can appear hours, days, or weeks after the resin has set. While this distortion can be minimized by symmetric use of the fibers in the design, a certain amount of internal stress is created; and if it becomes too great, cracks form.
Types
The most common type of glass fiber used in fiberglass is E-glass, which is alumino-borosilicate glass with less than 1% w/w alkali oxides, mainly used for glass-reinforced plastics. Other types of glass used are A-glass (alkali-lime glass with little or no boron oxide), E-CR-glass (electrical/chemical resistance; alumino-lime silicate with less than 1% w/w alkali oxides, with high acid resistance), C-glass (alkali-lime glass with high boron oxide content, used for glass staple fibers and insulation), D-glass (borosilicate glass, named for its low dielectric constant), R-glass (alumino silicate glass without MgO and CaO, meeting high mechanical requirements as reinforcement), and S-glass (alumino silicate glass without CaO but with high MgO content, offering high tensile strength).
Pure silica (silicon dioxide), when cooled as fused quartz into a glass with no true melting point, can be used as a glass fiber for fiberglass but has the drawback that it must be worked at very high temperatures. In order to lower the necessary work temperature, other materials are introduced as "fluxing agents" (i.e., components to lower the melting point). Ordinary A-glass ("A" for "alkali-lime") or soda lime glass, crushed and ready to be remelted, as so-called cullet glass, was the first type of glass used for fiberglass. E-glass ("E" because of initial Electrical application), is alkali-free and was the first glass formulation used for continuous filament formation. It now makes up most of the fiberglass production in the world, and also is the single largest consumer of boron minerals globally. It is susceptible to chloride ion attack and is a poor choice for marine applications. S-glass ("S" for "stiff") is used when tensile strength (high modulus) is important and is thus an important building and aircraft epoxy composite (it is called R-glass, "R" for "reinforcement" in Europe). C-glass ("C" for "chemical resistance") and T-glass ("T" is for "thermal insulator"—a North American variant of C-glass) are resistant to chemical attack; both are often found in insulation-grades of blown fiberglass.
Applications
Fiberglass is versatile because it is lightweight, strong, weather-resistant, and can have a variety of surface textures.
During World War II, fiberglass was developed as a replacement for the molded plywood used in aircraft radomes (fiberglass being transparent to microwaves). Its first main civilian application was for the building of boats and sports car bodies, where it gained acceptance in the 1950s. Its use has broadened to the automotive and sport equipment sectors. In the production of some products, such as aircraft, carbon fiber is now used instead of fiberglass, which is stronger by volume and weight.
Advanced manufacturing techniques such as pre-pregs and fiber rovings extend fiberglass's applications and the tensile strength possible with fiber-reinforced plastics.
Fiberglass is also used in the telecommunications industry for shrouding antennas, due to its RF permeability and low signal attenuation properties. It may also be used to conceal other equipment where no signal permeability is required, such as equipment cabinets and steel support structures, due to the ease with which it can be molded and painted to blend with existing structures and surfaces. Other uses include sheet-form electrical insulators and structural components commonly found in power-industry products. Because of fiberglass's lightweight and durability, it is often used in protective equipment such as helmets. Many sports use fiberglass protective gear, such as goaltenders' and catchers' masks.
Storage tanks
Storage tanks can be made of fiberglass with capacities up to about 300 tonnes. Smaller tanks can be made with chopped strand mat cast over a thermoplastic inner tank which acts as a preform during construction. Much more reliable tanks are made using woven mat or filament wound fiber, with the fiber orientation at right angles to the hoop stress imposed in the sidewall by the contents. Such tanks tend to be used for chemical storage because the plastic liner (often polypropylene) is resistant to a wide range of corrosive chemicals. Fiberglass is also used for septic tanks.
House building
Glass-reinforced plastics are also used to produce house building components such as roofing laminate, door surrounds, over-door canopies, window canopies and dormers, chimneys, coping systems, and heads with keystones and sills. The material's reduced weight and easier handling, compared to wood or metal, allows faster installation. Mass-produced fiberglass brick-effect panels can be used in the construction of composite housing, and can include insulation to reduce heat loss.
Oil and gas artificial lift systems
In rod pumping applications, fiberglass rods are often used for their high tensile strength to weight ratio. Fiberglass rods provide an advantage over steel rods because they stretch more elastically (lower Young's modulus) than steel for a given weight, meaning more oil can be lifted from the hydrocarbon reservoir to the surface with each stroke, all while reducing the load on the pumping unit.
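A rough sketch of that elasticity trade-off using the textbook stretch formula δ = F·L/(A·E); every figure below (load, string length, section, moduli) is an assumed illustrative value rather than field data:

```python
F = 50e3     # axial load, N (assumed)
L = 1500.0   # rod string length, m (assumed)
A = 5.0e-4   # cross-sectional area, m^2, ~25 mm diameter rod (assumed)

moduli = {"steel": 200e9, "fiberglass (GFRP)": 50e9}   # Pa, assumed

for name, E in moduli.items():
    stretch = F * L / (A * E)
    print(f"{name:18s}: elastic stretch = {stretch:.2f} m")
# The lower modulus gives roughly four times the stretch for the same load
# and section, which is what allows the downhole plunger to over-travel.
```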
Fiberglass rods must be kept in tension, however, as they frequently part if placed in even a small amount of compression. The buoyancy of the rods within a fluid amplifies this tendency.
Piping
GRP and GRE pipe can be used in a variety of above- and below-ground systems, including those for desalination, water treatment, water distribution networks, chemical process plants, water used for firefighting, hot and cold drinking water, wastewater/sewage, municipal waste and liquified petroleum gas.
Boating
Fiberglass composite boats have been made since the early 1940s, and many sailing vessels made after 1950 were built using the fiberglass lay-up process. As of 2022, boats continue to be made with fiberglass, though more advanced techniques such as vacuum bag moulding are used in the construction process.
Armour
Though most bullet-resistant armours are made using different textiles, fiberglass composites have been shown to be effective as ballistic armor.
Construction methods
Filament winding
Filament winding is a fabrication technique mainly used for manufacturing open (cylinders) or closed-end structures (pressure vessels or tanks). The process involves winding filaments under tension over a male mandrel. The mandrel rotates while a wind eye on a carriage moves horizontally, laying down fibers in the desired pattern. The most common filaments are carbon or glass fiber and are coated with synthetic resin as they are wound. Once the mandrel is completely covered to the desired thickness, the resin is cured; often the mandrel is placed in an oven to achieve this, though sometimes radiant heaters are used with the mandrel still turning in the machine. Once the resin has cured, the mandrel is removed, leaving the hollow final product. For some products such as gas bottles, the 'mandrel' is a permanent part of the finished product forming a liner to prevent gas leakage or as a barrier to protect the composite from the fluid to be stored.
Filament winding is well suited to automation, and there are many applications, such as pipe and small pressure vessels, that are wound and cured without any human intervention. The controlled variables for winding are fiber type, resin content, wind angle, tow or bandwidth, and thickness of the fiber bundle. The angle at which the fiber is wound has an effect on the properties of the final product. A high-angle "hoop" will provide circumferential or "burst" strength, while lower-angle patterns (polar or helical) will provide greater longitudinal tensile strength.
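A sketch of classical netting analysis, which quantifies that hoop-versus-longitudinal trade-off for a closed-end pressure vessel. The standard textbook assumptions apply: fibers carry all the load, hoop stress is twice axial stress, and a fiber at helix angle α (measured from the vessel axis) resists hoop load as sin²α and axial load as cos²α:

```python
import math

# Balanced design requires sin^2(a)/cos^2(a) = hoop/axial = 2.
alpha_opt = math.degrees(math.atan(math.sqrt(2.0)))
print(f"optimal helix angle ~ {alpha_opt:.2f} deg")   # ~54.74 deg

for alpha in (15.0, 35.0, alpha_opt, 75.0):
    a = math.radians(alpha)
    print(f"alpha = {alpha:5.2f} deg: hoop share = {math.sin(a)**2:.2f}, "
          f"axial share = {math.cos(a)**2:.2f}")
# Low angles favour longitudinal strength, near-90-deg "hoop" winding
# favours burst strength, and ~54.7 deg balances the two for a cylinder
# under internal pressure.
```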
Products currently being produced using this technique range from pipes, golf clubs, Reverse Osmosis Membrane Housings, oars, bicycle forks, bicycle rims, power and transmission poles, pressure vessels to missile casings, aircraft fuselages and lamp posts and yacht masts.
Fiberglass hand lay-up operation
A release agent, usually in either wax or liquid form, is applied to the chosen mold to allow the finished product to be cleanly removed from the mold. Resin—typically a 2-part thermoset polyester, vinyl, or epoxy—is mixed with its hardener and applied to the surface. Sheets of fiberglass matting are laid into the mold, then more resin mixture is added using a brush or roller. The material must conform to the mold, and air must not be trapped between the fiberglass and the mold. Additional resin is applied and possibly additional sheets of fiberglass. Hand pressure, vacuum or rollers are used to be sure the resin saturates and fully wets all layers, and that any air pockets are removed. The work must be done quickly before the resin starts to cure unless high-temperature resins are used which will not cure until the part is warmed in an oven. In some cases, the work is covered with plastic sheets and vacuum is drawn on the work to remove air bubbles and press the fiberglass to the shape of the mold.
Fiberglass spray lay-up operation
The fiberglass spray lay-up process is similar to the hand lay-up process but differs in the application of the fiber and resin to the mold. Spray-up is an open-molding composites fabrication process where resin and reinforcements are sprayed onto a mold. The resin and glass may be applied separately or simultaneously "chopped" in a combined stream from a chopper gun. Workers roll out the spray-up to compact the laminate. Wood, foam or other core material may then be added, and a secondary spray-up layer imbeds the core between the laminates. The part is then cured, cooled, and removed from the reusable mold.
Pultrusion operation
Pultrusion is a manufacturing method used to make strong, lightweight composite materials. In pultrusion, material is pulled through forming machinery using either a hand-over-hand method or a continuous-roller method (as opposed to extrusion, where the material is pushed through dies).
In fiberglass pultrusion, fibers (the glass material) are pulled from spools through a device that coats them with a resin. They are then typically heat-treated and cut to length. Fiberglass produced this way can be made in a variety of shapes and cross-sections, such as W or S cross-sections.
Health hazards
Exposure
People can be exposed to fiberglass in the workplace during its fabrication, installation or removal, by breathing it in, by skin contact, or by eye contact.
Furthermore, in the manufacturing process of fiberglass, styrene vapors are released while the resins are cured. These are also irritating to mucous membranes and respiratory tract.
The general population can be exposed to fiberglass from insulation and building materials, from fibers in the air near manufacturing facilities, or near building fires or implosions. The American Lung Association advises that fiberglass insulation should never be left exposed in an occupied area. Since work practices are not always followed, and fiberglass is often left exposed in basements that later become occupied, people can be exposed. No readily usable biological or clinical indices of exposure exist.
Symptoms and signs, health effects
Fiberglass will irritate the eyes, skin, and the respiratory system. Hence, symptoms can include itchy eyes, skin, nose, sore throat, hoarseness, dyspnea (breathing difficulty) and cough. Peak alveolar deposition was observed in rodents and humans for fibers with diameters of 1 to 2 μm.
In animal experiments, adverse lung effects such as lung inflammation and lung fibrosis have occurred, and increased incidences of mesothelioma, pleural sarcoma, and lung carcinoma had been found with intrapleural or intratracheal instillations in rats.
As of 2001, in humans only the more biopersistent materials like ceramic fibres, which are used industrially as insulation in high-temperature environments such as blast furnaces, and certain special-purpose glass wools not used as insulating materials remain classified as possible carcinogens (IARC Group 2B). The more commonly used glass fibre wools including insulation glass wool, rock wool and slag wool are considered not classifiable as to carcinogenicity to humans (IARC Group 3).
In October 2001, all fiberglass wools commonly used for thermal and acoustical insulation were reclassified by the International Agency for Research on Cancer (IARC) as "not classifiable as to carcinogenicity to humans" (IARC group 3). "Epidemiologic studies published during the 15 years since the previous IARC monographs review of these fibers in 1988 provide no evidence of increased risks of lung cancer or mesothelioma (cancer of the lining of the body cavities) from occupational exposures during the manufacture of these materials, and inadequate evidence overall of any cancer risk."
In June 2011, the US National Toxicology Program (NTP) removed from its Report on Carcinogens all biosoluble glass wool used in home and building insulation and for non-insulation products. However, NTP still considers fibrous glass dust to be "reasonably anticipated [as] a human carcinogen (Certain Glass Wool Fibers (Inhalable))". Similarly, California's Office of Environmental Health Hazard Assessment (OEHHA) published a November, 2011 modification to its Proposition 65 listing to include only "Glass wool fibers (inhalable and biopersistent)." Therefore a cancer warning label for biosoluble fiber glass home and building insulation is no longer required under federal or California law. As of 2012, the North American Insulation Manufacturers Association stated that fiberglass is safe to manufacture, install and use when recommended work practices are followed to reduce temporary mechanical irritation.
As of 2012, the European Union and Germany have classified synthetic glass fibers as possibly or probably carcinogenic, but fibers can be exempt from this classification if they pass specific tests. A 2012 health hazard review for the European Commission stated that inhalation of fiberglass at concentrations of 3, 16 and 30 mg/m3 "did not induce fibrosis nor tumours except transient lung inflammation that disappeared after a post-exposure recovery period."
Historic reviews of the epidemiology studies had been conducted by Harvard's Medical and Public Health Schools in 1995, the National Academy of Sciences in 2000, the Agency for Toxic Substances and Disease Registry ("ATSDR") in 2004, and the National Toxicology Program in 2011, all of which reached the same conclusion as IARC that there is no evidence of increased risk from occupational exposure to glass wool fibers.
Pathophysiology
Genetic and toxic effects are exerted through production of reactive oxygen species, which can damage DNA, and cause chromosomal aberrations, nuclear abnormalities, mutations, gene amplification in proto-oncogenes, and cell transformation in mammalian cells. There is also indirect, inflammation-driven genotoxicity through reactive oxygen species by inflammatory cells. The longer, thinner, and more durable (biopersistent) the fibers were, the more potent they were in causing damage.
Regulation, exposure limits
In the US, fine mineral fiber emissions have been regulated by the EPA, while respirable fibers ("particulates not otherwise regulated") are regulated by the Occupational Safety and Health Administration (OSHA); OSHA has set the legal limit (permissible exposure limit) for fiberglass exposure in the workplace at 15 mg/m3 total and 5 mg/m3 for the respirable fraction over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 3 fibers/cm3 (less than 3.5 micrometers in diameter and greater than 10 micrometers in length) as a time-weighted average over an 8-hour workday, and a 5 mg/m3 total limit.
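For context, limits like these are assessed against an 8-hour time-weighted average, TWA = Σ(ci·ti)/8. A sketch with invented sample exposures, not measurements:

```python
# (concentration mg/m^3, duration h) pairs over one shift -- assumed values.
exposures = [(2.0, 4.0), (7.0, 2.0), (1.0, 2.0)]

twa = sum(c * t for c, t in exposures) / 8.0
print(f"8-hour TWA = {twa:.2f} mg/m^3")              # 3.00 mg/m^3 here
print("within the 5 mg/m^3 respirable PEL:", twa <= 5.0)
```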
As of 2001, the Hazardous Substances Ordinance in Germany dictates a maximum occupational exposure limit of 86 mg/m3. In certain concentrations, a potentially explosive mixture may occur. Further manufacture of GRP components (grinding, cutting, sawing) creates fine dust and chips containing glass filaments, as well as tacky dust, in quantities high enough to affect health and the functionality of machines and equipment. The installation of effective extraction and filtration equipment is required to ensure safety and efficiency.
See also
Bulk moulding compound
Fiberglass sheet laminating
G-10 (material)
Glass fiber reinforced concrete
Hobas
Ignace Dubus-Bonnel
Sheet moulding compound
Carbon-fiber-reinforced polymers: reinforcement with carbon fibers.
References
External links
American inventions
Composite materials
Fibre-reinforced polymers
Glass applications | Fiberglass | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,371 | [
"Composite materials",
"Fiberglass",
"Materials",
"Polymer chemistry",
"Matter"
] |
174,448 | https://en.wikipedia.org/wiki/Abelian%20extension | In abstract algebra, an abelian extension is a Galois extension whose Galois group is abelian. When the Galois group is also cyclic, the extension is also called a cyclic extension. Going in the other direction, a Galois extension is called solvable if its Galois group is solvable, i.e., if the group can be decomposed into a series of normal extensions of an abelian group. Every finite extension of a finite field is a cyclic extension.
Description
Class field theory provides detailed information about the abelian extensions of number fields, function fields of algebraic curves over finite fields, and local fields.
There are two slightly different definitions of the term cyclotomic extension. It can mean either an extension formed by adjoining roots of unity to a field, or a subextension of such an extension. The cyclotomic fields are examples. A cyclotomic extension, under either definition, is always abelian.
If a field K contains a primitive n-th root of unity and the n-th root of an element of K is adjoined, the resulting Kummer extension is an abelian extension (if K has characteristic p we should say that p doesn't divide n, since otherwise this can fail even to be a separable extension). In general, however, the Galois groups of n-th roots of elements operate both on the n-th roots and on the roots of unity, giving a non-abelian Galois group as a semi-direct product. Kummer theory gives a complete description of the abelian extension case, and the Kronecker–Weber theorem tells us that if K is the field of rational numbers, an extension is abelian if and only if it is a subfield of a field obtained by adjoining a root of unity.
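Two standard worked examples of the constructions just described (the particular values of n are our choices, not from this article):

```latex
% Cyclotomic case: adjoining a primitive n-th root of unity gives
\[
  \operatorname{Gal}\bigl(\mathbf{Q}(\zeta_n)/\mathbf{Q}\bigr)
    \cong (\mathbf{Z}/n\mathbf{Z})^{\times},
  \qquad \sigma_a(\zeta_n) = \zeta_n^{a} \ \text{for } \gcd(a,n) = 1.
\]
% For n = 5 this group is cyclic of order 4, so Q(zeta_5)/Q is a cyclic
% extension; for n = 8 it is C_2 x C_2, abelian but not cyclic.
%
% Kummer case: Q(zeta_3) contains a primitive cube root of unity, so
\[
  \mathbf{Q}\bigl(\zeta_3, \sqrt[3]{2}\bigr) \big/\, \mathbf{Q}(\zeta_3)
  \ \text{is cyclic of degree } 3,
\]
% whereas x^3 - 2 over Q itself has Galois group S_3 (order 6), solvable
% but not abelian, illustrating the semi-direct product mentioned above.
```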
There is an important analogy with the fundamental group in topology, which classifies all covering spaces of a space: abelian covers are classified by its abelianisation which relates directly to the first homology group.
References
Field extensions
Algebraic number theory
Class field theory | Abelian extension | [
"Mathematics"
] | 423 | [
"Algebraic number theory",
"Number theory"
] |
174,474 | https://en.wikipedia.org/wiki/Synchronization%20gear | A synchronization gear (also known as a gun synchronizer or interrupter gear) was a device enabling a single-engine tractor configuration aircraft to fire its forward-firing armament through the arc of its spinning propeller without bullets striking the blades. This allowed the aircraft, rather than the gun, to be aimed at the target.
There were many practical problems, mostly arising from the inherently imprecise nature of an automatic gun's firing, the great (and varying) velocity of the blades of a spinning propeller, and the very high speed at which any gear synchronizing the two had to operate. In practice, all known gears worked on the principle of actively triggering each shot, in the manner of a semi-automatic weapon.
Design and experimentation with gun synchronization had been underway in France and Germany in 1913–1914, following the ideas of August Euler, who seems to have been the first to suggest mounting a fixed armament firing in the direction of flight (in 1910). However, the first practical – if far from reliable – gear to enter operational service was that fitted to the Fokker Eindecker fighters, which entered squadron service with the German Air Service in mid-1915. The success of the Eindecker led to numerous gun synchronization devices, culminating in the reasonably reliable hydraulic British Constantinesco gear of 1917. By the end of the First World War, German engineers were well on the way to perfecting a gear using an electrical rather than a mechanical or hydraulic link between the engine and the gun, with the gun triggered by an electro-mechanical solenoid.
From 1918 to the mid-1930s the standard armament for a fighter aircraft remained two synchronized rifle-calibre machine guns, firing forward through the arc of the propeller. In the late 1930s, however, the main role of the fighter was increasingly seen as the destruction of large, all-metal bombers, for which this armament was inadequate. Since it was impractical to fit more than two guns in the limited space available in the front of a single-engine aircraft's fuselage, guns began to be mounted in the wings instead, firing outside the arc of the propeller so not requiring synchronising. Synchronizing became unnecessary on all aircraft with the introduction of propellerless jet propulsion.
Nomenclature
A mechanism to enable an automatic weapon to fire between the blades of a whirling propeller is usually called an interrupter or synchronizer gear. Both these terms are more or less misleading, at least insofar as explaining what happens when the gear functions.
The term "interrupter" implies that the gear pauses, or "interrupts" the fire of the gun at the point where one of the blades of the propeller passes in front of its muzzle. Even the relatively slowly revolving propellers of First World War aircraft, however, typically turned twice or even three times for each shot a contemporary machine gun could fire. A two-bladed propeller would therefore obstruct the gun six times every firing cycle of the gun, a four-bladed one twelve times. A gun set up this way would be interrupted more than forty times per second, while firing at only around seven rounds per second. Unsurprisingly, the designers of so-called interrupter gears found this too problematic to be seriously attempted, as the gaps between "interruptions" would have been too short to allow the gun to fire at all.
True synchronization, though, with a machine gun's rate of fire exactly proportional to the revolutions per minute of a spinning aircraft propeller, would require an impractical level of complexity. A machine gun normally fires a constant number of rounds a minute, and while this may be changed by modifying the gun, it cannot be varied at will while the gun is operating. The rate of rotation of an aircraft propeller, meanwhile, especially before the advent of the constant-speed propeller, could vary widely, depending on the throttle setting and what maneuvers were being performed. Even if it had been feasible to pick a particular point on an aircraft engine's tachometer at which a machine gun's cyclic rate would permit it to fire through the propeller arc, this would be very limiting.
It has been pointed out that any mechanism that achieved the feat of firing between the whirling blades of a propeller without striking them could be described as "interrupting" the fire of the gun (to the extent that it no longer actually works as an automatic weapon at all), and also as "synchronizing", or "timing" its fire to coincide with the revolutions of the propeller.
Components
A typical synchronizing gear had three basic components.
At the propeller
First, a method of determining the position of the propeller at a given instant was required. Typically, a cam, driven either directly from the propeller shaft itself, or from some part of the drive train revolving at the same speed as the propeller, generated a series of impulses at the same rate as the propeller's revolutions. There were exceptions to this. Some gears placed the cam within the gun trigger mechanism itself, and the firing impulses were sometimes timed to occur at every two or three revolutions of the propeller, or, especially in the case of hydraulic or electric gears, at the rate of two or more for each revolution. The diagrams in this section assume, for simplicity's sake, one impulse for one revolution, so that each synchronized round is "aimed" at a single spot on the propeller disc.
The timing of each impulse had to be adjusted to coincide with a "safe" period, when the blades of the propeller were well out of the way, and this adjustment had to be checked at intervals, especially if the propeller was changed or refitted, as well as after a major engine overhaul. Faults in this adjustment (for example, a cam wheel slipping a millimetre or two, or a pushrod flexing) could well result in every bullet fired hitting the propeller, a worse result than if the gun was fired through the propeller with no control at all. The other main type of failure resulted in fewer or no firing impulses, usually due to the generator or linkages either jamming or breaking. This was a common cause of synchronized guns "jamming".
The speed of the propeller, and thus the distance that it travelled between the firing of the gun and the arrival of the bullet at the propeller disc, varied as the rate of engine revolutions changed. Where muzzle velocity was very high, and the guns were sited well forward so that the bullets had a very short distance to reach the disc of the propeller, this difference could be largely ignored. But in the case of relatively low muzzle velocity weapons, or any gun sited well back from the propeller, the question could become critical, and in some cases the pilot had to consult his tachometer, taking care that his engine revolutions were within a "safe" range before firing, otherwise risking speedy destruction of his propeller.
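A quick arithmetic sketch of why engine speed mattered: the arc the propeller sweeps while a round covers the muzzle-to-disc distance. All figures below are assumptions chosen to be plausible for a First World War installation, not data for any particular aircraft:

```python
muzzle_velocity = 740.0   # m/s, roughly a rifle-calibre machine gun (assumed)
gun_to_disc = 1.0         # m from muzzle to propeller disc (assumed)

flight_time = gun_to_disc / muzzle_velocity      # ~1.35 ms
for rpm in (900, 1200, 1500):
    swept = 360.0 * (rpm / 60.0) * flight_time   # degrees of rotation
    print(f"{rpm:4d} rpm: blade arc swept in flight ~ {swept:.1f} deg")
# The swept arc grows with engine speed, so a gear timed at one rpm could
# put rounds closer to (or into) a blade at another; hence the "safe"
# rev bands mentioned above.
```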
At the gun
The second requirement was for a gun that would reliably fire (or hold its fire) exactly when required. Not all automatic weapons were equally amenable to synchronization. When it was ready to fire, a synchronized machine-gun needed to have a round in the breech, the breech closed, and the action cocked (the so-called "closed bolt" position). Several widely used automatic weapons (notably the Lewis gun and the Italian Revelli) were triggered from an open bolt, with an unpredictable interval between triggering and firing, and were thus not suitable for synchronization without extensive modification.
In practice it was found necessary for the gun to be fired in semi-automatic mode. As the propeller revolved, a series of firing impulses was transmitted to the gun, each of which could trigger it to fire a single shot. The majority of these impulses would catch the gun in the process of ejecting a spent round or loading a fresh one, and would thus have no effect; but as soon as the firing cycle was completed, the gun would be ready to fire as soon as it received the next impulse from the synchronizing gear. The delay between the end of the firing cycle and the arrival of the next firing impulse slowed the rate of fire in comparison with a free-firing machine gun, which fires the moment it is ready to do so; but provided the gear functioned correctly, the gun could fire fairly rapidly between the whirling propeller blades without striking them.
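A toy simulation of this "fire on the next impulse" behaviour is sketched below; the cyclic rate, engine speed, and gearing are illustrative assumptions, not figures for any particular gear.

```python
def synchronized_rate(cyclic_rpm, prop_rpm, impulses_per_rev=1, seconds=60.0):
    """Rounds fired when the gun only fires on the first impulse
    arriving after its firing cycle has completed."""
    cycle = 60.0 / cyclic_rpm                       # gun's own cycle time (s)
    impulse_interval = 60.0 / (prop_rpm * impulses_per_rev)
    t, ready_at, shots = 0.0, 0.0, 0
    while t < seconds:
        if t >= ready_at:       # cycle complete: this impulse fires the gun
            shots += 1
            ready_at = t + cycle
        t += impulse_interval   # impulses arriving mid-cycle have no effect
    return shots

# A free-firing 500 rounds/min gun, one impulse per propeller revolution:
print(synchronized_rate(cyclic_rpm=500, prop_rpm=1200))   # 400
```

With a 0.12 s firing cycle and impulses arriving every 0.05 s, the gun can only fire on every third impulse, so the effective rate drops from 500 to 400 rounds per minute.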
Some other machine-guns, such as the Austrian Schwarzlose and the American Marlin, proved less than perfectly adapted to synchronization, although eventually predictable "single shot" firing was achieved, typically by modifying the trigger mechanism to emulate "closed bolt" firing. Most weapons that were successfully synchronized (at least in the First World War period) were (like the German Parabellum and Spandau guns and the British Vickers) based on the original Maxim gun of 1884, a closed bolt weapon operated by barrel recoil. Before these distinctions were fully understood, much time was wasted on attempts to synchronize unsuitable weapons.
Even a closed bolt weapon needed reliable ammunition. If the primer in a cartridge is faulty to the extent of delaying the firing of the gun for a tiny fraction of a second (quite a common case in practice with mass-produced ammunition) this is of little consequence in the case of a gun in use by infantry on the ground, but in the case of a synchronized "aircraft" gun such a delay can produce a rogue firing, sufficiently "out of time" for it to risk hitting the propeller. A very similar problem could arise where the mass of a special round (such as an incendiary or explosive one) was different enough to produce a substantial difference in muzzle velocity. This was compounded by the additional risk to the integrity of the propeller due to the nature of the round.
The "trigger motor" could theoretically take two forms. The earliest patent (Schneider 1913) assumed that the synchronization gear would periodically prevent the gun from firing, thus operating as a true, or literal "interrupter". In practice all "real-life" synchronization gears, for which we have reliable technical details, directly fired the gun: operating it as if it were a semi-automatic weapon rather than a completely automatic one.
The linkage between propeller and gun
The third requirement was for a linkage between the "machines" (engine and gun) to be synchronized. Many early gears used an intricate and inherently fragile bell crank and push rod linkage that could easily jam or otherwise malfunction, especially when required to work at higher speeds than it had been designed for. There were several alternative methods, including an oscillating rod, a flexible drive, a column of hydraulic fluid, a cable, or an electrical connection.
Generally, mechanical systems were inferior to hydraulic or electric ones, but none were ever entirely foolproof, and synchronization gears at best always remained liable to occasional failure. The Luftwaffe ace Adolf Galland in his memoir of the war period The First and the Last describes a serious faulty synchronization incident in 1941.
Rate of fire
A pilot would usually only have the target in his sights for a fleeting moment, so a concentration of bullets was vital for achieving a "kill". Even flimsy First World War aircraft often took a surprisingly large number of hits to shoot down, and later, larger aircraft were even harder propositions. There were two obvious solutions – to fit a more efficient gun with a higher cyclic rate of fire, or to increase the number of guns carried. Both of these measures impinged on the question of synchronization.
Early synchronized guns of the 1915–1917 period had a rate of fire in the region of 400 rounds per minute. At this comparatively leisurely rate of fire a synchronizer can be geared down to deliver a single firing impulse every two or three turns of the propeller, rendering it more reliable without unduly slowing the rate of fire. To control a faster gun, with, for example, a cyclic rate of 800 or 1,000 rounds a minute, it was necessary to supply at least one impulse (if not two) for every rotation of the propeller, making it more liable to failure. The intricate mechanism of a mechanical linkage system, especially of the "push rod" type, could easily shake itself to pieces when driven at this rate.
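Under the same "fire on the first impulse after reloading" assumption, the slow-down imposed by a given impulse gearing has a simple closed form, sketched below with purely illustrative numbers.

```python
import math

def geared_rate(cyclic_rpm, prop_rpm, impulses_per_rev):
    """Closed form of the simulation above: the gun fires on every
    k-th impulse, where k is the smallest whole number of impulse
    intervals covering one firing cycle."""
    impulse_rate = prop_rpm * impulses_per_rev            # impulses per minute
    impulses_per_shot = math.ceil(impulse_rate / cyclic_rpm)
    return impulse_rate / impulses_per_shot

# A 400 rounds/min gun, geared down to one impulse per two revolutions:
print(geared_rate(400, prop_rpm=1200, impulses_per_rev=0.5))   # 300.0
# A 1,000 rounds/min gun needs one, or better two, impulses per revolution:
print(geared_rate(1000, prop_rpm=1200, impulses_per_rev=1))    # 600.0
print(geared_rate(1000, prop_rpm=1200, impulses_per_rev=2))    # 800.0
```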
The final version of the Fokker Eindecker, the Fokker E.IV, came with two lMG 08 "Spandau" machine guns; this armament became standard for all the German D-type scouts starting with the Albatros D.I. From the appearance of the Sopwith Camel and the SPAD S.XIII in mid-1917, right through to the end of gun synchronization in the 1950s, a twin gun installation was the international norm. Having the two guns firing simultaneously would obviously not have been a satisfactory arrangement. The guns needed to both fire at the same point on the propeller disc, which means that one had to fire a tiny fraction of a second later than the other. This is why early gears designed for a single machine gun needed to be modified in order to control two guns satisfactorily. In practice, at least part of the mechanism had to be duplicated, even if the two weapons were not synchronized separately.
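The "tiny fraction of a second" is easy to estimate: if the two guns' bullets cross the propeller disc at points a given angle apart, the second gun must fire after the propeller has turned through that angle. The figures below are, again, purely illustrative.

```python
def trigger_offset_ms(prop_rpm, separation_deg):
    """Delay between the two guns so that both fire at the same point
    on the propeller disc; separation_deg is the angle between the two
    crossing points as seen on the disc."""
    deg_per_second = prop_rpm / 60.0 * 360.0
    return separation_deg / deg_per_second * 1000.0

print(trigger_offset_ms(prop_rpm=1200, separation_deg=12))   # about 1.67 ms
```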
History
From the beginnings of practical flight, possible military uses for aircraft were considered, although not all writers came to positive conclusions on the subject. By 1913, military exercises in Britain, Germany, and France had confirmed the likely usefulness of aircraft for reconnaissance and surveillance, and this was seen by a few forward looking officers as implying the need to deter or destroy the enemy's reconnaissance machines. Thus aerial combat was by no means entirely unanticipated, and the machine gun was from the first seen as the most likely weapon to be used.
What was not generally agreed on was the superiority, at least for an attacking aircraft, of fixed forward-firing guns, aimed by pointing the aircraft at its target, rather than flexible weapons, aimed by a gunner other than the pilot.
As late as 1916, pilots of the DH.2 pusher fighter had problems convincing their senior officers that the forward-firing armament of their aircraft was more effective if it was fixed to fire forward rather than being flexible. On the other hand, August Euler had patented the idea of a fixed gun as early as 1910 – long before tractor aircraft became the norm, illustrating his patent with a diagram of a machine gun-armed pusher.
The Franz Schneider patent (1913–1914)
Whether directly inspired by Euler's original patent or not, the first inventor to patent a method of firing forward through a tractor propeller was the Swiss engineer Franz Schneider, formerly with Nieuport, but by then working for the LVG Company in Germany.
The patent was published in the German aviation magazine Flugsport in 1914, meaning that the concept became public knowledge at an early stage. The linkage between the propeller and the gun is achieved with a spinning drive shaft, rather than a reciprocating rod. The impulses needed to operate the trigger, or in this case to prevent the trigger from operating, are produced by a cam wheel with two lobes 180° apart, situated at the gun itself, since firing is to be interrupted by both blades of the propeller. No attempt was made (so far as is known) to build or test an actual operating gear based on this patent, which attracted little or no official interest at the time. The exact form of the synchronization gear fitted to Schneider's LVG E.I of 1915 and its relationship to this patent is unknown, since no plans survive.
The Raymond Saulnier patent (1914)
Unlike the Schneider patent design, Saulnier's device was actually built, and may be considered the first practical synchronization gear to be tested. For the first time, the cam producing the to-and-fro movement conveying firing impulses to the gun is situated at the engine (driven in this case by the same spindle that operated the oil pump and the tachometer) and the impulses themselves are transmitted by a reciprocating rod rather than Schneider's rotating shaft. The idea of literally "interrupting" the firing of the gun gives way (probably as the result of experience) to the principle of pulling the trigger for each successive shot, like the action of a semi-automatic weapon.
It has been pointed out that this was a practical design that should have worked, but it did not. Apart from possible inconsistencies in the ammunition supplied, the real problem was that the gun used to trial the gear, a gas-operated Hotchkiss 8 mm (.323 in) machine gun borrowed from the French army, was fundamentally unsuitable for "semi-automatic" firing. Following initial unsuccessful tests, the gun had to be returned, and the experiments ceased.
Unsynchronized guns and the "deflector wedge" concept
When the pilots of the British Royal Flying Corps and Royal Naval Air Service arrived in France in 1914, they were equipped with pusher aircraft too underpowered to carry machine guns and still have a chance of overtaking the enemy, and tractor aircraft which were difficult to arm effectively because the propeller was in the way. Among other attempts to get around this – such as firing obliquely past the arc of the propeller, and even efforts, doomed to failure, to synchronize the Lewis gun, which was at the time the "standard" British aircraft weapon – was the expedient of firing straight through the propeller arc and "hoping for the best". A high proportion of bullets would in the normal course pass the propeller without striking the blades, and each blade might typically take several hits before there was much danger of its failing, especially if it were bound with tape to prevent splintering.
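The "high proportion" can be given a crude geometric estimate: treating each blade as blocking a wedge of the disc at the radius where the bullets cross, the chance of a single unsynchronized round striking a blade is roughly the blocked fraction of the full circle. Blade width, crossing radius, and blade count below are invented purely for illustration.

```python
import math

def strike_probability(n_blades, blade_width_m, crossing_radius_m):
    """Fraction of the circle, at the radius where the bullets cross
    the disc, that is occupied by blades at any instant."""
    blade_angle = blade_width_m / crossing_radius_m   # radians subtended
    return n_blades * blade_angle / (2.0 * math.pi)

# Two blades 0.15 m wide, bullets crossing 0.5 m from the hub:
p = strike_probability(n_blades=2, blade_width_m=0.15, crossing_radius_m=0.5)
print(f"roughly {p:.0%} of rounds strike a blade")   # roughly 10%
```

On such figures roughly nine rounds in ten pass the disc untouched, which is why the expedient was survivable at all, if hard on propellers.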
After his early synchronization experiments failed, Saulnier pursued a method trusting rather less to statistics and luck by developing armoured propeller blades that would resist damage.
By March 1915, when French pilot Roland Garros approached Saulnier to arrange for this device to be installed on his Morane-Saulnier Type L, these had taken the form of steel wedges which deflected the bullets which might otherwise have damaged the propeller, or ricocheted dangerously. Garros himself and his personal mechanic Jules Hue are sometimes credited with testing and perfecting the "deflectors". This crude system worked after a fashion, although the wedges diminished the propeller's efficiency, and the not inconsiderable force of the impact of bullets on the deflector blades must have put undesirable stress on the engine's crankshaft.
On 1 April 1915 Garros shot down his first German aircraft, killing both the crew. On 18 April 1915, after two more victories, Garros was forced down (by ground fire) behind German lines. Although he was able to burn his aircraft, Garros was captured and his special propeller was sufficiently intact to be sent for evaluation by the Inspektion der Fliegertruppen (Idflieg) at Döberitz near Berlin.
Fokker's Synchronizer and other German gears
Inspection of the propeller from Garros' machine prompted Idflieg to attempt to copy it. Initial trials indicated that the deflector wedges would not be sufficiently strong to cope with the standard steel-jacketed German ammunition, and representatives from Fokker and Pfalz, two companies already building Morane copies (although, strangely, not Schneider's LVG concern) were invited to Döberitz to inspect the mechanism and suggest ways that its action might be duplicated.
Anthony Fokker was able to persuade Idflieg to arrange the loan of a Parabellum machine gun and ammunition so that his device could be tested, and for these items to be transported forthwith to the Fokker Flugzeugwerke GmbH at Schwerin (although probably not in his railway compartment or "under his arm", as he claimed after the war).
The story of his conception, development and installation of the Fokker synchronization device in a period of 48 hours (first found in an authorised biography of Fokker written in 1929) is not now believed to be factual. Another possible explanation is that Garros's Morane, partly destroyed by fire as it was, had sufficient traces of the original synchronization gear remaining for Fokker to have guessed how it worked. For various reasons this also seems unlikely, and the current historical consensus points to a synchronization device having been in development by Fokker's team (including engineer Heinrich Lübbe) prior to the capture of Garros's machine.
The Fokker Stangensteuerung gear
Whatever its ultimate source, the initial version of the Fokker synchronization gear very closely followed, not Schneider's patent, as claimed by Schneider and others, but Saulnier's. Like the Saulnier patent, Fokker's gear was designed to actively fire the gun rather than interrupt it, and, like the later Vickers-Challenger gear developed for the RFC, it followed Saulnier in taking its primary mechanical drive from the oil pump of a rotary engine. The "transmission" between the motor and the gun was by a version of Saulnier's reciprocating push-rod. The main difference was that instead of the push rod passing directly from the engine to the gun itself, which would have required a tunnel through the firewall and fuel tank (as shown in the Saulnier patent drawings), it was driven by a shaft joining the oil pump to a small cam at the top of the fuselage. This eventually proved unsatisfactory, as the oil pump's mechanical drive spindle was insufficiently robust to take the extra load.
Before the failings of the first form of the gear had become clear, Fokker's team had adapted the new system to the new Parabellum MG14 machine gun, and fitted it to a Fokker M.5K, a type which was at the time serving in small numbers with the Fliegertruppen as the A.III. This aircraft, bearing IdFlieg serial number A.16/15, became the direct forerunner to the five M.5K/MG pre-production prototypes built, and was effectively the prototype of the Fokker E.I – the first production single-seat fighter aircraft armed with a synchronized machine gun.
This prototype was demonstrated to IdFlieg by Fokker in person on 19–20 May 1915 at the Döberitz proving ground near Berlin. Leutnant Otto Parschau was test flying this aircraft by 30 May 1915. The five production prototypes (factory designated M.5K/MG and serialed E.1/15 – E.5/15) were undergoing military trials shortly thereafter. These were all armed with the Parabellum gun, synchronized with the first version of the Fokker gear. This prototype gear had such a short life that a redesign was necessary, producing the second, more familiar, production form of the gear.
The gear used in the production Eindecker fighters replaced the oil pump's mechanical driveshaft-based system with a large cam wheel, almost a light flywheel, driven directly from the spinning rotary engine's crankcase. The push rod now took its reciprocating motion directly from a "follower" on this cam wheel. At the same time the machine gun used was also changed: an lMG 08 machine gun, the so-called "Spandau", replaced the Parabellum used with the prototype gear. At this time the Parabellum was still in very short supply, and all available examples were required as observers' guns, the lighter and handier weapon being far superior in this role.
The first victory using a synchronized gun-equipped fighter is now believed to have occurred on 1 July 1915 when Leutnant Kurt Wintgens of Feldflieger Abteilung 6b, flying the Parabellum-armed Fokker M.5K/MG aircraft "E.5/15", forced down a French Morane-Saulnier Type L east of Lunéville.
Exclusive possession of a working gun synchronizer enabled a period of German air superiority on the Western Front known as the Fokker Scourge. The German high command was protective of the synchronizer system, instructing pilots not to venture over enemy territory in case they were forced down and the secret revealed, but the basic principles involved were already common knowledge, and by the middle of 1916 several Allied synchronizers were already available in quantity.
By this time, the Fokker Stangensteuerung gear, which had worked reasonably well for synchronizing a single gun, firing at a modest cyclic rate through a two-bladed propeller driven by a rotary engine, was becoming obsolete.
Stangensteuerung gears for "stationary", i.e., in-line engines, worked from a small cam immediately behind the propeller (see illustration). This produced a basic dilemma: A short, fairly robust push rod meant that the machine gun had to be mounted well forward, putting the breech of the gun out of the pilot's reach for clearing jams. If the gun was mounted in the ideal position, within easy reach of the pilot, a much longer push rod was required, which tended to bend and break.
The other problem was that the Stangensteuerung never worked well with more than one gun. Two (or even three) guns, mounted side by side and firing simultaneously, would have produced a wide spread of fire that would have been impossible to match with the "safe zone" between the whirling propeller blades. Fokker's initial answer to this was the fitting of extra "followers" to the Stangensteuerung's large cam wheel, to (theoretically) produce the "ripple" salvo necessary to ensure that the guns were aimed at the same point on the propeller disc. This proved a disastrously unstable arrangement in the case of three guns, and was rather less than satisfactory, even for two. Most of the early Fokker and Halberstadt biplane fighters were limited to a single gun for this reason.
In fact, the builders of the new Albatros twin-gunned stationary-engine fighters of late 1916 had to introduce their own synchronization gear, known as the Hedtke gear or Hedtkesteuerung, and it was evident that Fokker were going to have to come up with something radically new.
The Fokker Zentralsteuerung gear
This was designed in late 1916 and took the form of a new synchronization gear without any rods at all. The cam that generated the firing impulses was moved from the engine to the gun; the trigger motor in effect now generated its own firing impulses. The linkage between the propeller and the gun now consisted of a flexible drive shaft directly connecting the end of the engine camshaft to the trigger motor of the gun. The firing button for the gun simply engaged a clutch at the engine which set the flexible drive (and thus the trigger motor) in motion. In some ways this brought the new gear closer to the original Schneider patent (q.v.).
A major advantage was that the adjustment (to set where on the propeller's disc each bullet was to impact) was now in the gun itself. This meant that each gun was adjusted separately, an important feature, since twin synchronized guns were not set to be fired in strict unison, but when they were pointing at the same point on the propeller disc. Each gun could be fired independently, since it had its own flexible drive, linked to the engine camshaft by a junction box, and having its own clutch. This provision of a quite separate set of components for each gun also meant that a failure in the gear for one gun did not impinge on the other.
This gear was available in numbers by mid 1917, in time for installation on the Fokker Dr.I triplane and all later German fighters. In fact it became the standard synchronizer for the Luftstreitkräfte for the remainder of the war, although experiments to find an even more reliable gear continued.
Other German synchronizers
The 1915 Schneider gear
In June 1915 a two-seater monoplane designed by Schneider for the LVG Company was sent to the front for evaluation. Its observer was armed with the new Schneider gun ring that was becoming standard on all German two-seaters: the pilot was apparently armed with a fixed synchronized machine gun. The aircraft crashed on its way to the front and nothing more was heard of it, or its synchronization gear, although it was presumably based on Schneider's own patent.
The Albatros gears
The new Albatros fighters of late 1916 were fitted with twin guns synchronized with the Albatros-Hedtke Steuerung gear, which was designed by Albatros Werkmeister Hedtke. The system was specifically intended to overcome the problems that had arisen in applying the Fokker Stangensteuerung gear to in-line engines and twin gun installations, and was a variation of the rigid push-rod system, driven from the rear of the crankshaft of the Mercedes D.III engine.
The Albatros D.V used a new gear, designed by Werkmeister Semmler: (the Albatros-Semmler Steuerung). It was basically an improved version of the Hedtke gear.
An official order, signed on 24 July 1917 standardised the superior Fokker Zentralsteuerung system for all German aircraft, presumably including Albatroses.
Electrical gears
Post-First World War German fighters were fitted with electrical synchronizers. In such a gear, a contact or set of contacts, either on the propeller shaft itself, or on some other part of the drive train revolving at the same number of revolutions per minute, generates a series of electrical pulses, which are transmitted to a solenoid-driven trigger motor at the gun. Experiments with these were underway before the end of the war, and again the LVG company seems to have been involved: a British intelligence report from 25 June 1918 mentions an LVG two-seater fitted with such a gear that was brought down in the British lines. It is known that LVG built 40 C.IV two-seaters fitted with a Siemens electrical synchronizing system.
In addition, the Aviatik company received instructions to install 50 of their own electrical synchronization system on to DFW C.Vs (Av).
Austria-Hungary
The standard machine gun of the Austro-Hungarian armed forces in 1914 was the Schwarzlose MG M.07/12 machine-gun, which operated on a "delayed blow back" system and was not suited to synchronization. Unlike the French and Italians, who were eventually able to acquire supplies of Vickers guns, the Austrians were unable to obtain sufficient quantities of "Spandaus" from their German allies and were forced to use the Schwarzlose in an application for which it was not really suited. Although the problem of synchronizing the Schwarzlose was eventually partially solved, it was not until late 1916 that gears were available. Even then, at high engine revolutions Austrian synchronizer gears tended to behave very erratically. Austrian fighters were fitted with large tachometers to ensure that a pilot could check that his "revs" were within the required range before firing his guns, and propeller blades were fitted with an electrical warning system that alerted a pilot if his propeller was being hit. There were never enough gears available, due to a chronic shortage of precision tools; so that production fighters, even the excellent Austrian versions of the Albatros D.III, often had to be sent to the front in an unarmed state, for squadron armourers to fit such guns and gears as could be scrounged, salvaged or improvised.
Rather than standardising on a single system, different Austrian manufacturers produced their own gears. The research of Harry Woodman (1989) identified the following types:
Zahnrad-Steuerung (cogwheel-control)
Drive was from the camshaft operating rods of the Austro-Daimler engine via a wormgear. The early Schwarzlose gun had a synchronized rate of 360 rounds per minute with this gear – this was later boosted to 380 rounds with the MG16 model.
Bernatzik-Steuerung
Drive was taken from the rocking arm of an exhaust valve, a lever fixed to the valve housing transmitting impulses to the gun through a rod. Designed by Leutnant Otto Bernatzik, it was geared down to deliver a firing impulse every second revolution of the propeller, and fired at about 380 to 400 rounds per gun. As with other gears synchronizing the Schwarzlose gun, firing became erratic at high engine speeds.
Priesel-Steuerung
Apart from a control that engaged the cam follower and fired the gun in one movement, this gear was based closely on the original Fokker Stangensteuerung gear. It was designed by Oberleutnant Guido Priesel, and became standard on Oeffag Albatros fighters in 1918.
Zap-Steuerung (Zaparka control)
This gear was designed by Oberleutnant Eduard Zaparka. Drive was from the rear of the camshaft of a Hiero engine through a transmission shaft with Cardan joints. The rate of fire, with the later Schwarzlose gun, was up to 500 rounds per minute. The machine gun had to be placed well forward, where it was inaccessible to the pilot, so that jams could not be cleared in flight.
Kralische Zentralsteuerung
Based on the principle of the Fokker Zentralsteuerung gear, with flexible drives linked to the camshaft, and firing impulses being generated by the trigger motor of each gun. Geared down to operate more reliably with the difficult Schwarzlose gun, its rate of fire was limited to 360–380 rounds per minute.
United Kingdom
British gun synchronization got off to a quick but rather shaky start. The early mechanical synchronization gears turned out to be inefficient and unreliable, and full standardisation on the very satisfactory hydraulic "C.C." gear was not accomplished until November 1917. Synchronized guns seem to have been rather unpopular with British fighter pilots well into 1917, and the over-wing Lewis gun, on its Foster mounting, remained the weapon for Nieuports in British service, being also initially considered as the main weapon of the S.E.5. Significantly, early problems with the C.C. gear were considered one of the less pressing matters for No. 56 Squadron in March 1917, busy getting their new S.E.5 fighters combat worthy before they went to France, since they had the over-wing Lewis to fall back on. Albert Ball actually had his Vickers gun removed altogether for a while, to save weight.
The Vickers-Challenger gear
The first British synchronizer gear was built by the manufacturer of the machine-gun for which it was designed: it went into production in December 1915. George Challenger, the designer, was at the time an engineer at Vickers. In principle it closely resembled the first form of the Fokker gear, although this was not because it was a copy (as is sometimes reported): it was not until April 1916 that a captured Fokker was available for technical analysis. The fact is that both gears were based closely on the Saulnier patent. The first version was driven by a reduction gear attached to a rotary engine oil pump spindle, as in Saulnier's design, and a small impulse-generating cam was mounted externally on the port side of the forward fuselage, where it was readily accessible for adjustment.
Unfortunately, when the gear was fitted to types such as the Bristol Scout and the Sopwith 1½ Strutter, which had rotary engines and their forward-firing machine gun in front of the cockpit, the long push rod linking the gear to the gun had to be mounted at an awkward angle, in which it was liable to twisting and deformation as well as expansion and contraction due to temperature changes.
For this reason the B.E.12, the R.E.8 and Vickers' own FB 19 mounted their forward-firing machine guns on the port side of the fuselage so that a relatively short version of the push rod could be linked directly to the gun.
This worked reasonably well although the "awkward" position of the gun, which precluded direct sighting, was initially much criticised. It proved less of a problem than was at first supposed once it was realized that it was the aircraft that was aimed rather than the gun itself. The last aircraft type to be fitted with the Vickers-Challenger gear, the R.E.8, retained the port-side position of the gun even after most were retrofitted with the C.C. gear from mid 1917.
The Scarff-Dibovski gear
Lieutenant Victor Dibovski, an officer of the Imperial Russian Navy, while serving as a member of a mission to England to observe and report on British aircraft production methods, suggested a synchronization gear of his own design. According to Russian sources, this gear had already been tested in Russia, with mixed results, although it is possible that the earlier Dibovski gear was actually a deflector system rather than a true synchronizer. In any case, Warrant Officer F. W. Scarff worked with Dibovski to develop and realize the gear, which worked on the familiar cam and rider principle, the connection to the gun being by the usual push rod and a rather complicated series of levers. It was geared in order to slow the rate that firing impulses were delivered to the gun (and hence improve reliability, although not the rate of fire). The gear was ordered for the RNAS and followed the Vickers-Challenger gear into production by a matter of weeks. It was more adaptable to rotary engines than the Vickers-Challenger, but apart from early Sopwith 1½ Strutters built to RNAS orders in 1916, and possibly some early Sopwith Pups, no actual applications seem to have been recorded.
Ross and other "miscellaneous" gears
The Ross gear was an interim, field-built gear designed in 1916 specifically to replace the unsuitable Vickers-Challenger gears in the 1½ Strutters of the RFC's No. 70 Squadron. Officially it was designed by Captain Ross of No. 70, although it has been suggested that a flight-sergeant working under Captain Ross was largely responsible. The gear was apparently used only on 1½ Strutters, although No. 45 Squadron used at least some examples of it, as well as No. 70. It was replaced by the Sopwith-Kauper gear when that gear became available.
Norman Macmillan, writing some years after the event, claimed that the Ross gear had a very slow rate of fire, but that it left the original trigger intact, so that it was possible "in a really tight corner" to "fire the gun direct without the gear, and get the normal rate of fire of the ground gun". Macmillan claimed that propellers with up to twenty hits nonetheless got their aircraft home. Some aspects of this information are hard to reconcile with the way a synchronized gun actually worked, and may well be a matter of Macmillan's memory playing tricks.
Another "field made" synchronizer was the ARSIAD: produced by the Aeroplane Repair Section of the No.1 Aircraft Depot in 1916. Little specific seems to be known about it; although it may have been fitted to some early R.E.8s for which no Vickers-Challenger gears could be found.
Airco and Armstrong Whitworth both designed their own gears specifically for their own aircraft. Standardisation on the hydraulic C.C. gear (described below) occurred before either had been produced in numbers. Only Sopwiths' gear (next section) was to go into production.
The Sopwith-Kauper gear
The first mechanical synchronization gears fitted to early Sopwith fighters were so unsatisfactory that in mid 1916 Sopwiths had an improved gear designed by their foreman of works Harry Kauper, a friend and colleague of fellow Australian Harry Hawker. This gear was specifically intended to overcome the faults of earlier gears. Patents connected with the extensively modified Mk.II and Mk.III versions were applied for in January and June 1917.
Mechanical efficiency was improved by reversing the action of the push rod. The firing impulse was generated at a low point of the cam instead of at the lobe of the cam as in Saulnier's patent. Thus the force on the rod was exerted by tension rather than compression (or, in less technical language, the trigger motor worked by being "pulled" rather than "pushed"), which enabled the rod to be lighter, minimising its inertia so that it could operate faster (at least in early versions of the gear, each revolution of the cam wheel produced two firing impulses instead of one). A single firing lever engaged the gear and fired the gun in one action, rather than the gear having to be "turned on" and then fired, as with some earlier gears.
2,750 examples of the Sopwith-Kauper gear were installed in service aircraft: as well as being the standard gear for the Sopwith Pup and Triplane it was fitted to many early Camels, and replaced earlier gears in 1½ Strutters and other Sopwith types. However, by November 1917, in spite of several modifications, it was becoming evident that even the Sopwith-Kauper gear suffered from the inherent limitations of mechanical gears. Camel squadrons, in particular, reported that propellers were frequently being "shot through", the gears having a tendency to "run away". Wear and tear, as well as the increased rate of fire of the Vickers gun and higher engine speeds were responsible for this decline in performance and reliability. By this time the teething problems of the hydraulic C.C. gear had been overcome and it was made standard for all British aircraft, including Sopwiths.
The Constantinesco synchronization gear
Major Colley, the Chief Experimental Officer and Artillery Adviser at the War Office Munitions Invention Department, became interested in George Constantinesco's theory of Wave Transmission, and worked with him to determine how his invention could be put to practical use, finally hitting on the notion of developing a synchronization gear based on it. Major Colley used his contacts in the Royal Flying Corps and the Royal Artillery (his own corps) to obtain the loan of a Vickers machine gun and 1,000 rounds of ammunition.
Constantinesco drew on his work with rock drills to develop a synchronization gear using his wave transmission system. In May 1916, he prepared the first drawing and an experimental model of what became known as the Constantinesco Fire Control Gear or the "C.C. (Constantinesco-Colley) Gear". The first provisional patent application for the Gear was submitted on 14 July 1916 (No. 512).
At first, the meticulous Constantinesco was dissatisfied with the odd slightly deviant hit on his test disc. It was found that carefully inspecting the ammunition cured this fault (common, of course, to all such gears); with good quality rounds, the performance of the gear pleased even its creator. A. M. Low, who commanded the Royal Flying Corps' secret Experimental Works at Feltham, was involved in the testing. The system was perfected by Constantinesco in collaboration with the Fleet Street printer and engineer Walter Haddon at the Haddon Engineering Works in Honeypot Lane, Alperton. The first working C.C. gear was air-tested in a B.E.2c in August 1916.
The new gear had several advantages over all mechanical gears: the rate of fire was greatly improved, the synchronization was much more accurate, and above all it was readily adaptable to any type of engine and airframe, instead of needing a specially designed impulse generator for each type of engine and special linkages for each type of aircraft. In the long run (provided it was properly maintained and adjusted) it also proved far more durable and less prone to failure.
No. 55 Squadron's DH.4s arrived in France on 6 March 1917 fitted with the new gear, followed shortly after by No. 48 squadron's Bristol Fighters and No. 56 Squadron's S.E.5s. Early production models had some teething troubles in service, as ground crew learned to service and adjust the new gears, and pilots to operate them. It was late in 1917 before a version of the gear that could operate twin guns became available, so that the first Sopwith Camels had to be fitted with the Sopwith-Kauper gear instead.
From November 1917 the gear finally became standard, being fitted to all new British aircraft with synchronized guns from that date up to the Gloster Gladiator of 1937.
Over 6,000 gears were fitted to machines of the Royal Flying Corps and the Royal Naval Air Service between March and December 1917. Twenty thousand more "Constantinesco-Colley" gun synchronization systems were fitted to British military aircraft between January and October 1918, during the period when the Royal Air Force was formed from the two earlier services on 1 April 1918. A total of 50,000 gears were manufactured during the twenty years it was standard equipment.
The Betteridge gear
The C.C. gear was not the only hydraulic gear to be proposed; in 1917 Air Mechanic A. R. Betteridge of No. 1 Squadron Australian Flying Corps built and tested a gear of his own design while serving with his unit in Palestine. No official interest was expressed in this device; possibly the C.C. gear was already in prospect.
France
The French Aviation Militaire was fortunate in that they were able to standardise on two reasonably satisfactory synchronization gears – one adapted for rotary engines, and the other for "stationary" (in-line) ones – almost from the beginning.
The Alkan-Hamy gear
The first French synchronizer was developed by Sergeant-Mécanicien Robert Alkan and Ingénieur du Génie maritime Hamy. It was based closely on the definitive Fokker Stangensteuerung gear: the main difference being that the push rod was installed within the Vickers gun, using a redundant steam tube in the cooling jacket. This mitigated a major drawback of other push rod gears in that the rod, being supported for its whole length, was much less liable to distortion or breakage. Vickers guns modified to take this gear can be distinguished by the housing for the push rod's spring, projecting from the front of the gun like a second barrel. This gear was first installed and air-tested in a Nieuport 12, on 2 May 1916, and other pre-production gears were fitted to contemporary Morane-Saulnier and Nieuport fighters. The Alkan-Hamy gear was standardised as the Système de Synchronisation pour Vickers Type I (moteurs rotatifs), becoming available in numbers in time for the arrival of the Nieuport 17 at the front in mid 1916, as the standard gear for forward-firing guns of rotary-engine French aircraft.
The Nieuport 28 used a different gear – now known only through American documentation, where it is described as the "Nieuport Synchronizing gear" or the "Gnome gear". A spinning drive shaft, driven by the rotating crankcase of the Nieuport's 160 CV Gnome 9N Monosoupape rotary engine, drove two separately adjustable trigger motors – each imparting firing impulses to its gun by means of its own short rod. Photographic evidence suggests that an earlier version of this gear, controlling a single gun, might have been fitted to the Nieuport 23 and the Hanriot HD.1.
The Birkigt gear
The SPAD S.VII was designed around Marc Birkigt's Hispano-Suiza engine, and when the new fighter entered service in September 1916 it came armed with a single Vickers gun synchronized with a new gear provided by Birkigt for use with his engine. Unlike most other mechanical gears, the "SPAD gear", as it was often called, did without a pushrod altogether: the firing impulses were transmitted to the gun torsionally by an oscillating shaft, which rotated through about a quarter of a revolution, alternately clockwise and anticlockwise. This oscillation was more mechanically efficient than the reciprocating motion of a push rod, permitting higher speeds. Officially known as the Système de Synchronisation pour Vickers Type II (moteurs fixes), the Birkigt gear was later adapted to control two guns, and remained in use in French service up to the time of the Second World War.
Russia
No Russian synchronization gears went into production before the 1917 Revolution – although experiments by Victor Dibovski in 1915 contributed to the later British Scarff-Dibovski gear (described above), and another naval officer, G.I. Lavrov, also designed a gear that was fitted to the unsuccessful Sikorsky S-16. French and British designs licence-built in Russia used the Alkan-Hamy or Birkigt gears.
Fighters of the Soviet era used synchronized guns right up to the time of the Korean War, when the Lavochkin La-11 and the Yakovlev Yak-9 became the last synchronizer-equipped aircraft to see combat action.
Italy
The Italian Fiat-Revelli gun did not prove amenable to synchronization, so the Vickers became the standard pilot's weapon, synchronized by the Alkan-Hamy or Birkigt gears.
United States
French and British combat aircraft ordered for the American Expeditionary Force in 1917/18 were fitted with their "native" synchronization gears, including the Alkan-Hamy in Nieuports and French-built Sopwiths, the Birkigt gear in SPADs, and the C.C. gear for British types. The C.C. was also adopted for the twin M1917/18 Marlin machine guns fitted to the American-built DH-4, and was itself made in America until the Nelson gear appeared in numbers.
The Nelson gear
The Marlin gas operated gun proved less amenable to synchronization than the Vickers. It was found that "rogue" shots occasionally pierced the propeller, even when the gear was properly adjusted and otherwise functioning well. The problem was eventually resolved by modifications to the Marlin's trigger mechanism, but in the meantime the engineer Adolph L. Nelson at the Airplane Engineering Department at McCook Field had developed a new, mechanical gear especially adapted to the Marlin, officially known as the Nelson single shot synchronizer. In place of the push rod common to many mechanical gears, or the "pull rod" of the Sopwith-Kauper, the Nelson gear used a cable held in tension for the transmission of firing impulses to the gun.
Production models were largely too late for use before the end of the First World War, but the Nelson gear became the post-war U.S. standard, as Vickers and Marlin guns were phased out in favour of the Browning .30 calibre machine gun.
E-4/E-8 gears
The Nelson gear proved reliable and accurate, but it was expensive to produce, and the necessity for its cable to be given a straight run could create difficulties when it was to be installed in a new type. By 1929 the latest model (the E-4 gear) had a new and simplified impulse generator, a new trigger motor, and an impulse cable enclosed in a metal tube, protecting it and permitting shallow bends. While the basic principle of the new gear remained unchanged, virtually all the components had been redesigned, and it was no longer officially referred to as the "Nelson" gear. The gear was further modernised in 1942 as the E-8. This final model had a modified impulse generator that was easier to adjust, and was controlled from the cockpit by an electrical solenoid rather than a Bowden cable.
Decline and end of synchronization
The usefulness of synchronization gears naturally disappeared altogether when jet engines eliminated the propeller, at least in fighter aircraft, but gun synchronization, even in single reciprocating engine aircraft, had already been in decline for twenty years prior to this.
The increased speeds of the new monoplanes of the mid to late 1930s meant that the time available to deliver a sufficient weight of fire to bring down an enemy aircraft was greatly reduced. At the same time, the primary vehicle of air power was increasingly seen as the large all-metal bomber: powerful enough to carry armour protection for its vulnerable areas. Two rifle-calibre machine guns were no longer enough, especially for defence planners who anticipated a primarily strategic role for airpower. An effective "anti-bomber" fighter needed something more.
Cantilever monoplane wings provided ample space to mount armaments and, being much more rigid than the old cable-braced wings, they afforded almost as steady a mounting as the fuselage. This new context also made the harmonisation of wing guns more satisfactory, producing a fairly narrow cone of fire in the close to medium ranges at which a fighter's gun armament was most effective.
The retention of fuselage-mounted guns, with the additional weight of their synchronization gear (which slowed their rate of fire, albeit only slightly, and still occasionally failed, resulting in damage to propellers), became increasingly unattractive. This design philosophy, common in Britain and France (and, after 1941, the United States), tended towards eliminating fuselage-mounted guns altogether. For example, the original 1934 specifications for the Hawker Hurricane were for an armament similar to the Gloster Gladiator's: four machine guns, two in the wings and two in the fuselage, the latter pair synchronized to fire through the propeller arc. The prototype (K5083) as completed had ballast representing this armament; production Hurricane Is, however, were armed with eight guns, all in the wings.
Another approach, common to Germany, the Soviet Union, and Japan, while recognising the necessity to increase armament, preferred a system that included synchronized weapons. Centralised guns had the real advantage that their range was limited only by ballistics, as they did not need the gun harmonisation necessary to concentrate the fire of wing-mounted guns. They were seen as rewarding the true marksman, as they involved less dependence on gun sight technology. Mounting guns in the fuselage also concentrated mass at the centre of gravity, thus improving the fighter's roll ability. More consistent ammunition manufacture, and improved synchronization gear systems made the whole concept more efficient and effective, whilst facilitating its application to weapons of increased calibre such as autocannon; moreover the constant-speed propellers that quickly became standard equipment on WW II fighters meant that the ratio between the propeller speed and the rate of fire of the guns varied less erratically.
The swan-song of synchronization belongs to the last reciprocating engine Soviet fighters, which largely made do with slow firing synchronized cannon throughout the World War II period and after. In fact, the very last synchronizer-equipped aircraft to see combat action were the Lavochkin La-11 and the Yakovlev Yak-9 during the Korean War.
Popular culture
The act of shooting one's own propeller is a trope that can be found in comedic gags, such as the 1965 cartoon short "Just Plane Beep" starring Wile E. Coyote and the Road Runner. In this film, the attacking Coyote reduces his propeller to splinters after numerous bullets strike it.
See also
Glossary of firearms terms
Gun harmonisation
Notes
References
Bibliography
Barnes, C. H. Bristol Aircraft since 1910. London: Putnam, 1964.
Bruce, J. M. Sopwith 1½ Strutter. Leatherhead: Profile Publications, 1966.
Bureau of Aircraft Production. Handbook of Aircraft Armament. Washington: (U.S.) Government Printing Office, 1918.
Cheesman, E.F.(ed.). Fighter Aircraft of the 1914–1918 War. Letchworth: Harleyford, 1960.
Courtney, Frank T. The Eighth Sea. New York: Doubleday, 1972.
Fokker, Anthony and Bruce Gould. Flying Dutchman: The Life of Anthony Fokker. London: George Routledge, 1931.
Galland, Adolf. The First and the Last. London: Methuen, 1956. (A translation of Die Ersten und die Letzten, Berlin: Franz Schneekluth, 1955)
Goulding, James. Interceptor: RAF Single Seat Multi-Gun Fighters. London: Ian Allan Ltd., 1986.
Grosz, Peter M., Windsock Mini Datafile 7, Fokker E.IV, Albatros Publications, Ltd. 1996.
Grosz, Peter M., Windsock Datafile No. 91, Fokker E.I/II, Albatros Publications, Ltd. 2002.
Guttman, Jon. The Origin of the Fighter Aircraft. Yardley: Westholme, 2009.
Hamady, Theodore The Nieuport 28 – America's First Fighter. Atglen, PA: Schiffer Military History, 2008.
Hare, Paul R. Mount of Aces – The Royal Aircraft Factory S.E.5a, UK: Fonthill Media, 2013.
Hegener, Henri. Fokker – the Man and the Aircraft, Letchworth: Harleyford, 1961.
Jarrett, Philip, "The Fokker Eindeckers", Aeroplane Monthly, December 2004
Kosin, Rudiger. The German Fighter since 1915. London: Putnam, 1988. (original German edition 1986)
Kulikov, Victor, Russian Aces of World War 1. Oxford: Osprey, 2013.
Mason, Francis K., The Hawker Hurricane, London: MacDonald, 1962.
Mixter, G.W. and H.H. Emmonds. United States Army Production Facts. Washington: Bureau of Aircraft Production, 1919.
Pengelly, Colin, Albert Ball V.C. The Fighter Pilot of World War I. Barnsley: Pen and Sword, 2010.
Robertson, Bruce, Sopwith – the man and his aircraft, Letchworth: Air Review, 1970.
Sweetman, John. Cavalry of the Clouds: Air War over Europe 1914–1918. Stroud: Spellmount, 2010.
VanWyngarden, Greg, Early German Aces of World War 1. Oxford: Osprey, 2006.
Varriale, Paolo, Austro-Hungarian Albatros Aces of World War I. Oxford: Osprey, 2012.
Volker, Hank. "Synchronizers Parts 1–6" in World War I Aero. (1992–1996), World War I Aeroplanes, Inc.
Weyl, A. J., Fokker: The Creative Years. London: Putnam, 1965.
Williams, Anthony G. and Emmanuel Gustin. Flying Guns: World War I. Ramsbury, Wilts: Crowood Press, 2003.
Woodman, Harry. Early Aircraft Armament. London: Arms and Armour, 1989.
Woodman, Harry, "CC Gun Synchronization Gear", Aeroplane Monthly, September 2005
Firearm terminology
Firearm components
Firearms
Wikipedia glossaries using unordered lists
Machine guns
Military aviation
Gear | Synchronization gear | [
"Technology",
"Engineering"
] | 12,369 | [
"Firearm components",
"Telecommunications engineering",
"Components",
"Synchronization"
] |
174,475 | https://en.wikipedia.org/wiki/Modularity%20theorem | The modularity theorem (formerly called the Taniyama–Shimura conjecture, Taniyama–Shimura–Weil conjecture or modularity conjecture for elliptic curves) states that elliptic curves over the field of rational numbers are related to modular forms in a particular way. Andrew Wiles and Richard Taylor proved the modularity theorem for semistable elliptic curves, which was enough to imply Fermat's Last Theorem. Later, a series of papers by Wiles's former students Brian Conrad, Fred Diamond and Richard Taylor, culminating in a joint paper with Christophe Breuil, extended Wiles's techniques to prove the full modularity theorem in 2001.
Statement
The theorem states that any elliptic curve over $\mathbb{Q}$ can be obtained via a rational map with integer coefficients from the classical modular curve $X_0(N)$ for some integer $N$; this is a curve with integer coefficients with an explicit definition. This mapping is called a modular parametrization of level $N$. If $N$ is the smallest integer for which such a parametrization can be found (which by the modularity theorem itself is now known to be a number called the conductor), then the parametrization may be defined in terms of a mapping generated by a particular kind of modular form of weight two and level $N$, a normalized newform with integer $q$-expansion, followed if need be by an isogeny.
Related statements
The modularity theorem implies a closely related analytic statement:
To each elliptic curve $E$ over $\mathbb{Q}$ we may attach a corresponding $L$-series. The $L$-series is a Dirichlet series, commonly written
$$L(E, s) = \sum_{n=1}^{\infty} \frac{a_n}{n^s}.$$
The generating function of the coefficients $a_n$ is then
$$f(q) = \sum_{n=1}^{\infty} a_n q^n.$$
If we make the substitution
$$q = e^{2 \pi i \tau}$$
we see that we have written the Fourier expansion of a function $f(\tau)$ of the complex variable $\tau$, so the coefficients of the $L$-series are also thought of as the Fourier coefficients of $f$. The function obtained in this way is, remarkably, a cusp form of weight two and level $N$ and is also an eigenform (an eigenvector of all Hecke operators); this is the Hasse–Weil conjecture, which follows from the modularity theorem.
Some modular forms of weight two, in turn, correspond to holomorphic differentials for an elliptic curve. The Jacobian of the modular curve can (up to isogeny) be written as a product of irreducible Abelian varieties, corresponding to Hecke eigenforms of weight 2. The 1-dimensional factors are elliptic curves (there can also be higher-dimensional factors, so not all Hecke eigenforms correspond to rational elliptic curves). The curve obtained by finding the corresponding cusp form, and then constructing a curve from it, is isogenous to the original curve (but not, in general, isomorphic to it).
History
Yutaka Taniyama stated a preliminary (slightly incorrect) version of the conjecture at the 1955 international symposium on algebraic number theory in Tokyo and Nikkō as the twelfth of his set of 36 unsolved problems. Goro Shimura and Taniyama worked on improving its rigor until 1957. André Weil rediscovered the conjecture, and showed in 1967 that it would follow from the (conjectured) functional equations for some twisted $L$-series of the elliptic curve; this was the first serious evidence that the conjecture might be true. Weil also showed that the conductor of the elliptic curve should be the level of the corresponding modular form. The Taniyama–Shimura–Weil conjecture became a part of the Langlands program.
The conjecture attracted considerable interest when Gerhard Frey suggested in 1986 that it implies Fermat's Last Theorem. He did this by attempting to show that any counterexample to Fermat's Last Theorem would imply the existence of at least one non-modular elliptic curve. This argument was completed in 1987 when Jean-Pierre Serre identified a missing link (now known as the epsilon conjecture or Ribet's theorem) in Frey's original work, followed two years later by Ken Ribet's completion of a proof of the epsilon conjecture.
Even after gaining serious attention, the Taniyama–Shimura–Weil conjecture was seen by contemporary mathematicians as extraordinarily difficult to prove or perhaps even inaccessible to prove. For example, Wiles's Ph.D. supervisor John Coates states that it seemed "impossible to actually prove", and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible".
In 1995, Andrew Wiles, with some help from Richard Taylor, proved the Taniyama–Shimura–Weil conjecture for all semistable elliptic curves. Wiles used this to prove Fermat's Last Theorem, and the full Taniyama–Shimura–Weil conjecture was finally proved by Diamond; Conrad, Diamond & Taylor; and Breuil, Conrad, Diamond & Taylor, who, building on Wiles's work, incrementally chipped away at the remaining cases until the full result was proved in 1999. Once fully proven, the conjecture became known as the modularity theorem.
Several theorems in number theory similar to Fermat's Last Theorem follow from the modularity theorem. For example: no cube can be written as a sum of two coprime $n$-th powers, $n \geq 3$.
Generalizations
The modularity theorem is a special case of more general conjectures due to Robert Langlands. The Langlands program seeks to attach an automorphic form or automorphic representation (a suitable generalization of a modular form) to more general objects of arithmetic algebraic geometry, such as to every elliptic curve over a number field. Most cases of these extended conjectures have not yet been proved.
In 2013, Freitas, Le Hung, and Siksek proved that elliptic curves defined over real quadratic fields are modular.
Example
For example, the elliptic curve $y^2 - y = x^3 - x$, with discriminant (and conductor) 37, is associated to the form
$$f(z) = q - 2q^2 - 3q^3 + 2q^4 - 2q^5 + 6q^6 + \cdots, \qquad q = e^{2 \pi i z}.$$
For prime numbers $\ell$ not equal to 37, one can verify the property about the coefficients: $a_\ell$ is equal to $\ell$ minus the number of solutions of the defining equation modulo $\ell$. Thus, for $\ell = 3$, there are 6 solutions of the equation modulo 3: $(0, 0)$, $(0, 1)$, $(1, 0)$, $(1, 1)$, $(2, 0)$, $(2, 1)$; thus $a_3 = 3 - 6 = -3$.
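The coefficient check is easy to reproduce by brute force; the sketch below counts points on $y^2 - y = x^3 - x$ modulo small primes and recovers the $a_\ell$ used in the text (it merely restates the computation above in code).

```python
def a(ell):
    """a_ell = ell minus the number of solutions of
    y^2 - y = x^3 - x taken modulo ell."""
    solutions = sum(
        1
        for x in range(ell)
        for y in range(ell)
        if (y * y - y - (x ** 3 - x)) % ell == 0
    )
    return ell - solutions

# Matches the q-expansion coefficients of the level-37 newform:
print([a(ell) for ell in (2, 3, 5)])   # [-2, -3, -2]
```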
The conjecture, going back to the 1950s, was completely proven by 1999 using the ideas of Andrew Wiles, who proved it in 1994 for a large family of elliptic curves.
There are several formulations of the conjecture. Showing that they are equivalent was a main challenge of number theory in the second half of the 20th century. The modularity of an elliptic curve $E$ of conductor $N$ can be expressed also by saying that there is a non-constant rational map defined over $\mathbb{Q}$, from the modular curve $X_0(N)$ to $E$. In particular, the points of $E$ can be parametrized by modular functions.
For example, a modular parametrization of the curve $y^2 - y = x^3 - x$ is given by a pair of modular functions $x(z)$ and $y(z)$ with explicit $q$-expansions, where, as above, $q = e^{2 \pi i z}$. The functions $x(z)$ and $y(z)$ are modular of weight 0 and level 37; in other words, they are meromorphic, defined on the upper half-plane, and satisfy
$$x\left(\frac{a z + b}{c z + d}\right) = x(z)$$
and likewise for $y(z)$, for all integers $a, b, c, d$ with $a d - b c = 1$ and $37 \mid c$.
Another formulation depends on the comparison of Galois representations attached on the one hand to elliptic curves, and on the other hand to modular forms. The latter formulation has been used in the proof of the conjecture. Dealing with the level of the forms (and the connection to the conductor of the curve) is particularly delicate.
The most spectacular application of the conjecture is the proof of Fermat's Last Theorem (FLT). Suppose that for a prime $p \geq 5$, the Fermat equation
$$a^p + b^p = c^p$$
has a solution with non-zero integers, hence a counter-example to FLT. Then as Yves Hellegouarch was the first to notice, the elliptic curve
$$y^2 = x(x - a^p)(x + b^p)$$
of discriminant
$$\Delta = \frac{(a b c)^{2p}}{2^8}$$
cannot be modular. Thus, the proof of the Taniyama–Shimura–Weil conjecture for this family of elliptic curves (called Hellegouarch–Frey curves) implies FLT. The proof of the link between these two statements, based on an idea of Gerhard Frey (1985), is difficult and technical. It was established by Kenneth Ribet in 1987.
Notes
References
Bibliography
Contains a gentle introduction to the theorem and an outline of the proof.
Discusses the Taniyama–Shimura–Weil conjecture 3 years before it was proven for infinitely many cases.
English translation in
External links
Algebraic curves
Modular forms
Theorems in number theory
Theorems in algebraic geometry
Conjectures that have been proved
20th century in mathematics
Arithmetic geometry
1995 in science | Modularity theorem | [
"Mathematics"
] | 1,685 | [
"Theorems in algebraic geometry",
"Arithmetic geometry",
"Theorems in number theory",
"Theorems in geometry",
"Conjectures that have been proved",
"Mathematical problems",
"Modular forms",
"Mathematical theorems",
"Number theory"
] |
174,482 | https://en.wikipedia.org/wiki/Common%20logarithm | In mathematics, the common logarithm (also known as the standard logarithm) is the logarithm with base 10. It is also known as the decadic logarithm, the decimal logarithm and the Briggsian logarithm. The name "Briggsian logarithm" is in honor of the British mathematician Henry Briggs, who conceived of and developed the values for the common logarithm. Historically, the common logarithm was known by its Latin name logarithmus decimalis or logarithmus decadis.
The mathematical notation for using the common logarithm is $\log(x)$, $\log_{10}(x)$, or sometimes $\operatorname{Log}(x)$ with a capital $L$; on calculators, it is printed as "log", but mathematicians usually mean natural logarithm (logarithm with base $e \approx 2.71828$) rather than common logarithm when writing "log". To mitigate this ambiguity, the ISO 80000 specification recommends that $\log_{10}(x)$ should be written $\lg(x)$, and $\log_e(x)$ should be $\ln(x)$.
Before the early 1970s, handheld electronic calculators were not available, and mechanical calculators capable of multiplication were bulky, expensive and not widely available. Instead, tables of base-10 logarithms were used in science, engineering and navigation—when calculations required greater accuracy than could be achieved with a slide rule. By turning multiplication and division to addition and subtraction, use of logarithms avoided laborious and error-prone paper-and-pencil multiplications and divisions. Because logarithms were so useful, tables of base-10 logarithms were given in appendices of many textbooks. Mathematical and navigation handbooks included tables of the logarithms of trigonometric functions as well. For the history of such tables, see log table.
Mantissa and characteristic
An important property of base-10 logarithms, which makes them so useful in calculations, is that the logarithm of numbers greater than 1 that differ by a factor of a power of 10 all have the same fractional part. The fractional part is known as the mantissa. Thus, log tables need only show the fractional part. Tables of common logarithms typically listed the mantissa, to four or five decimal places or more, of each number in a range, e.g. 1000 to 9999.
The integer part, called the characteristic, can be computed by simply counting how many places the decimal point must be moved, so that it is just to the right of the first significant digit. For example, the logarithm of 120 is given by the following calculation:
$$\log_{10}(120) = \log_{10}(10^2 \times 1.2) = 2 + \log_{10}(1.2) \approx 2 + 0.07918.$$
The last number (0.07918)—the fractional part or the mantissa of the common logarithm of 120—can be found in a table of mantissas. The location of the decimal point in 120 tells us that the integer part of the common logarithm of 120, the characteristic, is 2.
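As a concrete illustration, here is a minimal Python sketch (ours, not part of the original text) that splits a common logarithm into the characteristic and mantissa described above:

```python
import math

def characteristic_and_mantissa(x):
    """Split log10(x) into an integer characteristic and a mantissa in [0, 1)."""
    lg = math.log10(x)
    characteristic = math.floor(lg)
    return characteristic, lg - characteristic

print(characteristic_and_mantissa(120))   # (2, 0.0791812...)
print(characteristic_and_mantissa(1.2))   # (0, 0.0791812...) -- same mantissa
```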
Negative logarithms
Positive numbers less than 1 have negative logarithms. For example,
$$\log_{10}(0.012) = \log_{10}(10^{-2} \times 1.2) = -2 + 0.07918 \approx -1.92082.$$
To avoid the need for separate tables to convert positive and negative logarithms back to their original numbers, one can express a negative logarithm as a negative integer characteristic plus a positive mantissa. To facilitate this, a special notation, called bar notation, is used:
$$\log_{10}(0.012) \approx -2 + 0.07918 = \bar{2}.07918.$$
The bar over the characteristic indicates that it is negative, while the mantissa remains positive. When reading a number in bar notation out loud, the symbol $\bar{n}$ is read as "bar $n$", so that $\bar{2}.07918$ is read as "bar 2 point 07918...". An alternative convention is to express the logarithm modulo 10, in which case
$$\log_{10}(0.012) \approx 8.07918 \pmod{10},$$
with the actual value of the result of a calculation determined by knowledge of the reasonable range of the result.
The following example uses the bar notation to calculate 0.012 × 0.85 = 0.0102:
$$\log_{10}(0.012) \approx \bar{2}.07918, \qquad \log_{10}(0.85) \approx \bar{1}.92942,$$
$$\log_{10}(0.012 \times 0.85) \approx \bar{2}.07918 + \bar{1}.92942 = (-2 + 0.07918) + (-1 + 0.92942) = -3 + 1.00860 = \bar{2}.00860^{*},$$
$$0.012 \times 0.85 \approx 10^{-2} \times 10^{0.00860} \approx 0.0102.$$
* This step makes the mantissa between 0 and 1, so that its antilog ($10^{\text{mantissa}}$) can be looked up.
The following table shows how the same mantissa can be used for a range of numbers differing by powers of ten:

    number        logarithm    characteristic   mantissa
    5 000 000     6.698 970          6          0.698 970
    50            1.698 970          1          0.698 970
    5             0.698 970          0          0.698 970
    0.5          −0.301 030         −1          0.698 970
    0.000 005    −5.301 030         −6          0.698 970

Note that the mantissa is common to all of the $5 \times 10^i$. This holds for any positive real number $x$ because
$$\log_{10}(x \times 10^i) = \log_{10}(x) + i.$$
Since $i$ is a constant, the mantissa comes from $\log_{10}(x)$, which is constant for given $x$. This allows a table of logarithms to include only one entry for each mantissa. In the example of $5 \times 10^i$, 0.698 970 (004 336 018 ...) will be listed once indexed by 5 (or 0.5, or 500, etc.).
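The invariance of the mantissa under shifts of the decimal point can be checked directly; this short sketch (an illustration of ours) prints the same fractional part for each power-of-ten multiple of 5:

```python
import math

# Numbers differing by a factor of a power of ten share the same mantissa.
for n in (0.5, 5, 50, 500, 5000):
    lg = math.log10(n)
    print(n, round(lg - math.floor(lg), 6))  # 0.69897 every time
```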
History
Common logarithms are sometimes also called "Briggsian logarithms" after Henry Briggs, a 17th-century British mathematician. In 1616 and 1617, Briggs visited John Napier, the inventor of what are now called natural (base-$e$) logarithms, at Edinburgh in order to suggest a change to Napier's logarithms. During these conferences, the alteration proposed by Briggs was agreed upon; and after his return from his second visit, he published the first chiliad of his logarithms.
Because base-10 logarithms were most useful for computations, engineers generally simply wrote "" when they meant . Mathematicians, on the other hand, wrote "" when they meant for the natural logarithm. Today, both notations are found. Since hand-held electronic calculators are designed by engineers rather than mathematicians, it became customary that they follow engineers' notation. So the notation, according to which one writes "" when the natural logarithm is intended, may have been further popularized by the very invention that made the use of "common logarithms" far less common, electronic calculators.
Numeric value
The numerical value for logarithm to the base 10 can be calculated with the following identities:
$$\log_{10}(x) = \frac{\ln(x)}{\ln(10)} \quad\text{or}\quad \log_{10}(x) = \frac{\log_2(x)}{\log_2(10)} \quad\text{or}\quad \log_{10}(x) = \frac{\log_B(x)}{\log_B(10)},$$
using logarithms of any available base $B$, as procedures exist for determining the numerical value for logarithm base $e$ (see natural logarithm) and logarithm base 2 (see Algorithms for computing binary logarithms).
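The change-of-base identities above translate directly into code; the following sketch (ours) computes $\log_{10}(120)$ three ways:

```python
import math

x = 120.0
print(math.log(x) / math.log(10.0))    # via the natural logarithm
print(math.log2(x) / math.log2(10.0))  # via the binary logarithm
print(math.log10(x))                   # library routine, for comparison
```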
Derivative
The derivative of a logarithm with a base $b$ is such that
$$\frac{d}{dx}\log_b(x) = \frac{1}{x \ln(b)}, \quad\text{so}\quad \frac{d}{dx}\log_{10}(x) = \frac{1}{x \ln(10)}.$$
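The derivative formula can be sanity-checked with a central finite difference; a small sketch (ours):

```python
import math

# Check d/dx log10(x) = 1/(x ln 10) numerically at x = 120.
x, h = 120.0, 1e-6
numeric = (math.log10(x + h) - math.log10(x - h)) / (2 * h)
print(numeric, 1 / (x * math.log(10)))  # both approx 0.0036191...
```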
See also
Binary logarithm
Cologarithm
Decibel
Logarithmic scale
Napierian logarithm
Significand (also commonly called mantissa)
Notes
References
Bibliography
Logarithms | Common logarithm | [
"Mathematics"
] | 1,339 | [
"E (mathematical constant)",
"Logarithms"
] |
174,515 | https://en.wikipedia.org/wiki/Dirichlet%20character | In analytic number theory and related branches of mathematics, a complex-valued arithmetic function $\chi: \mathbb{Z} \to \mathbb{C}$ is a Dirichlet character of modulus $m$ (where $m$ is a positive integer) if for all integers $a$ and $b$:
$\chi(ab) = \chi(a)\chi(b)$; that is, $\chi$ is completely multiplicative.
$\chi(a) = 0$ if $\gcd(a, m) > 1$ and $\chi(a) \neq 0$ if $\gcd(a, m) = 1$ (gcd is the greatest common divisor).
$\chi(a + m) = \chi(a)$; that is, $\chi$ is periodic with period $m$.
The simplest possible character, called the principal character, usually denoted $\chi_0$ (see Notation below), exists for all moduli:
$$\chi_0(a) = \begin{cases} 0 & \text{if } \gcd(a, m) > 1 \\ 1 & \text{if } \gcd(a, m) = 1. \end{cases}$$
The German mathematician Peter Gustav Lejeune Dirichlet—for whom the character is named—introduced these functions in his 1837 paper on primes in arithmetic progressions.
Notation
$\phi(n)$ is Euler's totient function.
$\zeta_n$ is a complex primitive $n$-th root of unity:
$\zeta_n^n = 1$, but $\zeta_n \neq 1, \zeta_n^2 \neq 1, \ldots, \zeta_n^{n-1} \neq 1$.
$(\mathbb{Z}/m\mathbb{Z})^\times$ is the group of units mod $m$. It has order $\phi(m)$.
$\widehat{(\mathbb{Z}/m\mathbb{Z})^\times}$ is the group of Dirichlet characters mod $m$.
$p, p_k$, etc. are prime numbers.
$(m, n)$ is a standard abbreviation for $\gcd(m, n)$.
$\chi(a), \chi'(a), \chi_r(a)$, etc. are Dirichlet characters. (the lowercase Greek letter chi for "character")
There is no standard notation for Dirichlet characters that includes the modulus. In many contexts (such as in the proof of Dirichlet's theorem) the modulus is fixed. In other contexts, such as this article, characters of different moduli appear. Where appropriate this article employs a variation of Conrey labeling (introduced by Brian Conrey and used by the LMFDB).
In this labeling, characters for modulus $m$ are denoted $\chi_{m,t}(a)$, where the index $t$ is described in the section the group of characters below. In this labeling, $\chi_{m,\_}(a)$ denotes an unspecified character and $\chi_{m,1}(a)$
denotes the principal character mod $m$.
Relation to group characters
The word "character" is used several ways in mathematics. In this section it refers to a homomorphism from a group (written multiplicatively) to the multiplicative group of the field of complex numbers:
The set of characters is denoted If the product of two characters is defined by pointwise multiplication the identity by the trivial character and the inverse by complex inversion then becomes an abelian group.
If is a finite abelian group then there is an isomorphism , and the orthogonality relations:
and
The elements of the finite abelian group are the residue classes where
A group character can be extended to a Dirichlet character by defining
and conversely, a Dirichlet character mod defines a group character on
Paraphrasing Davenport, Dirichlet characters can be regarded as a particular case of Abelian group characters. But this article follows Dirichlet in giving a direct and constructive account of them. This is partly for historical reasons, in that Dirichlet's work preceded by several decades the development of group theory, and partly for a mathematical reason, namely that the group in question has a simple and interesting structure which is obscured if one treats it as one treats the general Abelian group.
Elementary facts
4) Since $\gcd(1, m) = 1$, property 2) says $\chi(1) \neq 0$, so it can be canceled from both sides of $\chi(1)\chi(1) = \chi(1 \times 1) = \chi(1)$:
$$\chi(1) = 1.$$
5) Property 3) is equivalent to
if $a \equiv b \pmod m$ then $\chi(a) = \chi(b)$.
6) Property 1) implies that, for any positive integer $n$,
$$\chi(a^n) = \chi(a)^n.$$
7) Euler's theorem states that if $\gcd(a, m) = 1$ then $a^{\phi(m)} \equiv 1 \pmod m$. Therefore,
$$\chi(a)^{\phi(m)} = \chi(a^{\phi(m)}) = \chi(1) = 1.$$
That is, the nonzero values of $\chi(a)$ are $\phi(m)$-th roots of unity:
$$\chi(a) = \zeta_{\phi(m)}^r$$
for some integer $r$ which depends on $\chi$ and $a$. This implies there are only a finite number of characters for a given modulus.
8) If $\chi$ and $\chi'$ are two characters for the same modulus, so is their product $\chi\chi'$, defined by pointwise multiplication:
$$\chi\chi'(a) = \chi(a)\chi'(a)$$
($\chi\chi'$ obviously satisfies 1–3).
The principal character is an identity:
$$\chi\chi_0(a) = \chi(a)\chi_0(a) = \chi(a).$$
9) Let $a^*$ denote the inverse of $a$ in $(\mathbb{Z}/m\mathbb{Z})^\times$.
Then
$$\chi(a^*)\chi(a) = \chi(a^* a) = \chi(1) = 1,$$
so $\chi(a^*) = \chi(a)^{-1}$, which extends 6) to all integers.
The complex conjugate of a root of unity is also its inverse (see here for details), so for $\gcd(a, m) = 1$,
$$\overline{\chi}(a) = \chi(a)^{-1} = \chi(a^*)$$
($\overline{\chi}$ also obviously satisfies 1–3).
Thus for all integers $a$,
$$\chi(a)\overline{\chi}(a) = \chi_0(a);$$
in other words, $\chi\overline{\chi} = \chi_0$.
10) The multiplication and identity defined in 8) and the inversion defined in 9) turn the set of Dirichlet characters for a given modulus into a finite abelian group.
The group of characters
There are three different cases because the groups have different structures depending on whether is a power of 2, a power of an odd prime, or the product of prime powers.
Powers of odd primes
If $q = p^k$ is an odd prime power, $(\mathbb{Z}/q\mathbb{Z})^\times$ is cyclic of order $\phi(q)$; a generator is called a primitive root mod $q$.
Let $g_q$ be a primitive root, and for $\gcd(a, q) = 1$ define the function $\nu_q(a)$ (the index of $a$) by
$$a \equiv g_q^{\nu_q(a)} \pmod q.$$
For $\gcd(a, q) = \gcd(b, q) = 1$, $a \equiv b \pmod q$ if and only if $\nu_q(a) \equiv \nu_q(b) \pmod{\phi(q)}$. Since
$$\chi(a) = \chi(g_q)^{\nu_q(a)},$$
$\chi$ is determined by its value at $g_q$.
Let $\omega_q = \zeta_{\phi(q)}$ be a primitive $\phi(q)$-th root of unity. From property 7) above the possible values of $\chi(g_q)$ are
$$\omega_q, \omega_q^2, \ldots, \omega_q^{\phi(q)} = 1.$$
These distinct values give rise to $\phi(q)$ Dirichlet characters mod $q$. For $\gcd(r, q) = 1$, define $\chi_{q,r}(a)$ as
$$\chi_{q,r}(a) = \begin{cases} 0 & \text{if } \gcd(a, q) > 1 \\ \omega_q^{\nu_q(r)\nu_q(a)} & \text{if } \gcd(a, q) = 1. \end{cases}$$
Then for $\gcd(r, q) = \gcd(s, q) = 1$ and all $a$ and $b$,
$$\chi_{q,r}(a)\chi_{q,r}(b) = \chi_{q,r}(ab),$$
showing that $\chi_{q,r}$ is a character, and
$$\chi_{q,r}(a)\chi_{q,s}(a) = \chi_{q,rs}(a),$$
which gives an explicit isomorphism $\widehat{(\mathbb{Z}/q\mathbb{Z})^\times} \cong (\mathbb{Z}/q\mathbb{Z})^\times.$
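A minimal sketch of this construction (our illustration; it assumes the modulus is an odd prime and that a primitive root is supplied by the caller) reproduces the character tables in the examples below, e.g. the four characters mod 5 built from the primitive root 2:

```python
import cmath

def dirichlet_characters(q, g):
    """Characters mod an odd prime q, given a primitive root g mod q."""
    phi = q - 1                                 # phi(q) for prime q
    nu = {pow(g, n, q): n for n in range(phi)}  # index (discrete logarithm)
    omega = cmath.exp(2j * cmath.pi / phi)      # primitive phi-th root of unity

    def chi(r):                                 # plays the role of chi_{q,r}
        return lambda a: 0 if a % q == 0 else omega ** (nu[r] * nu[a % q])

    return {r: chi(r) for r in range(1, q)}

for r, chi in dirichlet_characters(5, 2).items():
    print(r, [complex(round(chi(a).real, 3), round(chi(a).imag, 3))
              for a in range(1, 5)])
```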
Examples m = 3, 5, 7, 9
2 is a primitive root mod 3. ()
so the values of are
.
The nonzero values of the characters mod 3 are
2 is a primitive root mod 5. ()
so the values of are
.
The nonzero values of the characters mod 5 are
3 is a primitive root mod 7. ()
so the values of are
.
The nonzero values of the characters mod 7 are ()
.
2 is a primitive root mod 9. ()
so the values of are
.
The nonzero values of the characters mod 9 are ()
.
Powers of 2
$(\mathbb{Z}/2\mathbb{Z})^\times$ is the trivial group with one element. $(\mathbb{Z}/4\mathbb{Z})^\times$ is cyclic of order 2. For 8, 16, and higher powers of 2, there is no primitive root; the powers of 5 are the units $\equiv 1 \pmod 4$ and their negatives are the units $\equiv 3 \pmod 4.$
For example
Let $q = 2^k$, $k \geq 3$; then $(\mathbb{Z}/q\mathbb{Z})^\times$ is the direct product of a cyclic group of order 2 (generated by −1) and a cyclic group of order $q/4$ (generated by 5).
For odd numbers define the functions and by
For odd and if and only if and
For odd the value of is determined by the values of and
Let be a primitive -th root of unity. The possible values of are
These distinct values give rise to Dirichlet characters mod For odd define by
Then for odd and and all and
showing that is a character and
showing that
Examples m = 2, 4, 8, 16
The only character mod 2 is the principal character .
−1 is a primitive root mod 4 ()
The nonzero values of the characters mod 4 are
−1 is and 5 generate the units mod 8 ()
.
The nonzero values of the characters mod 8 are
−1 and 5 generate the units mod 16 ()
.
The nonzero values of the characters mod 16 are
.
Products of prime powers
Let $m = q_1 q_2 \cdots q_k$, where the $q_i = p_i^{k_i}$ are powers of distinct primes, be the factorization of $m$ into prime powers. The group of units mod $m$ is isomorphic to the direct product of the groups mod the $q_i$:
$$(\mathbb{Z}/m\mathbb{Z})^\times \cong (\mathbb{Z}/q_1\mathbb{Z})^\times \times (\mathbb{Z}/q_2\mathbb{Z})^\times \times \cdots \times (\mathbb{Z}/q_k\mathbb{Z})^\times.$$
This means that 1) there is a one-to-one correspondence between and -tuples where
and 2) multiplication mod corresponds to coordinate-wise multiplication of -tuples:
corresponds to
where
The Chinese remainder theorem (CRT) implies that the are simply
There are subgroups such that
and
Then
and every corresponds to a -tuple where and
Every can be uniquely factored as
If is a character mod on the subgroup it must be identical to some mod Then
showing that every character mod is the product of characters mod the .
For define
Then for and all and
showing that is a character and
showing an isomorphism
Examples m = 15, 24, 40
The factorization of the characters mod 15 is
The nonzero values of the characters mod 15 are
.
The factorization of the characters mod 24 is
The nonzero values of the characters mod 24 are
.
The factorization of the characters mod 40 is
The nonzero values of the characters mod 40 are
.
Summary
Let $m = q_1 q_2 \cdots q_k$, $k > 1$, be the factorization of $m$ into prime powers and assume $\gcd(rs, m) = 1$.
There are $\phi(m)$ Dirichlet characters mod $m$. They are denoted by $\chi_{m,r}$, where $\chi_{m,r} = \chi_{m,s}$ is equivalent to $r \equiv s \pmod m$.
The identity $\chi_{m,r}(a)\chi_{m,s}(a) = \chi_{m,rs}(a)$ is an isomorphism $\widehat{(\mathbb{Z}/m\mathbb{Z})^\times} \cong (\mathbb{Z}/m\mathbb{Z})^\times.$
Each character mod $m$ has a unique factorization as the product of characters mod the prime powers dividing $m$:
$$\chi_{m,r} = \chi_{q_1,r}\chi_{q_2,r}\cdots\chi_{q_k,r}.$$
If $m = m_1 m_2$ with $\gcd(m_1, m_2) = 1$, the product $\chi_{m_1,r}\chi_{m_2,s}$ is a character $\chi_{m,t}$, where $t$ is given by $t \equiv r \pmod{m_1}$ and $t \equiv s \pmod{m_2}$.
Also, $\chi_{m,r}(s) = \chi_{m,s}(r).$
Orthogonality
The two orthogonality relations are
$$\sum_{a \bmod m} \chi(a) = \begin{cases} \phi(m) & \text{if } \chi = \chi_0 \\ 0 & \text{if } \chi \neq \chi_0 \end{cases} \qquad\text{and}\qquad \sum_{\chi \bmod m} \chi(a) = \begin{cases} \phi(m) & \text{if } a \equiv 1 \pmod m \\ 0 & \text{if } a \not\equiv 1 \pmod m. \end{cases}$$
The relations can be written in the symmetric form
$$\sum_{a \bmod m} \chi_{m,r}(a) = \begin{cases} \phi(m) & \text{if } r \equiv 1 \\ 0 & \text{if } r \not\equiv 1 \end{cases} \qquad\text{and}\qquad \sum_{r \bmod m} \chi_{m,r}(a) = \begin{cases} \phi(m) & \text{if } a \equiv 1 \\ 0 & \text{if } a \not\equiv 1. \end{cases}$$
The first relation is easy to prove: if $\chi = \chi_0$ there are $\phi(m)$ non-zero summands, each equal to 1. If $\chi \neq \chi_0$ there is some $a^*$ with $\chi(a^*) \neq 1$, $\gcd(a^*, m) = 1$. Then
$$\chi(a^*)\sum_{a} \chi(a) = \sum_{a} \chi(a^* a) = \sum_{a} \chi(a),$$
implying
$$(\chi(a^*) - 1)\sum_{a} \chi(a) = 0.$$
Dividing by the first factor gives $\sum_a \chi(a) = 0$, QED. The identity $\chi_{m,r}(s) = \chi_{m,s}(r)$ for $\gcd(rs, m) = 1$ shows that the relations are equivalent to each other.
The second relation can be proven directly in the same way, but requires a lemma:
Given $a \not\equiv 1 \pmod m$, $\gcd(a, m) = 1$, there is a character $\chi^*$ with $\chi^*(a) \neq 1$.
The second relation has an important corollary: if $\gcd(a, m) = 1$, define the function
$$u_a(n) = \frac{1}{\phi(m)}\sum_{\chi} \overline{\chi}(a)\chi(n).$$
Then
$$u_a(n) = \begin{cases} 1 & \text{if } n \equiv a \pmod m \\ 0 & \text{otherwise}; \end{cases}$$
that is, $u_a$ is the indicator function of the residue class $n \equiv a \pmod m$. It is basic in the proof of Dirichlet's theorem.
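Both relations are easy to verify numerically for a small modulus. This self-contained sketch (ours) rebuilds the characters mod 5 from the primitive root 2, as in the earlier sketch, and checks that each sum vanishes except in the principal/identity cases:

```python
import cmath

q, g, phi = 5, 2, 4
nu = {pow(g, n, q): n for n in range(phi)}    # index (discrete logarithm)
omega = cmath.exp(2j * cmath.pi / phi)        # primitive phi-th root of unity
chi = lambda r, a: 0 if a % q == 0 else omega ** (nu[r] * nu[a % q])

for r in range(1, q):  # sum over a: phi(m) for the principal character, else 0
    print("chi_5,%d:" % r, round(abs(sum(chi(r, a) for a in range(q))), 9))
for a in range(1, q):  # sum over characters: phi(m) if a = 1 (mod m), else 0
    print("a = %d:" % a, round(abs(sum(chi(r, a) for r in range(1, q))), 9))
```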
Classification of characters
Conductor; Primitive and induced characters
Any character mod a prime power is also a character mod every larger power. For example, mod 16
has period 16, but has period 8 and has period 4: and
We say that a character of modulus has a quasiperiod of if for all , coprime to satisfying mod . For example, , the only Dirichlet character of modulus , has a quasiperiod of , but not a period of (it has a period of , though). The smallest positive integer for which is quasiperiodic is the conductor of . So, for instance, has a conductor of .
The conductor of is 16, the conductor of is 8 and that of and is 4. If the modulus and conductor are equal the character is primitive, otherwise imprimitive. An imprimitive character is induced by the character for the smallest modulus: is induced from and and are induced from .
A related phenomenon can happen with a character mod the product of primes; its nonzero values may be periodic with a smaller period.
For example, mod 15,
.
The nonzero values of have period 15, but those of have period 3 and those of have period 5. This is easier to see by juxtaposing them with characters mod 3 and 5:
.
If a character mod is defined as
, or equivalently as
its nonzero values are determined by the character mod and have period .
The smallest period of the nonzero values is the conductor of the character. For example, the conductor of is 15, the conductor of is 3, and that of is 5.
As in the prime-power case, if the conductor equals the modulus the character is primitive, otherwise imprimitive. If imprimitive it is induced from the character with the smaller modulus. For example, is induced from and is induced from
The principal character is not primitive.
The character is primitive if and only if each of the factors is primitive.
Primitive characters often simplify (or make possible) formulas in the theories of L-functions and modular forms.
Parity
$\chi$ is even if $\chi(-1) = 1$ and is odd if $\chi(-1) = -1$.
This distinction appears in the functional equation of the Dirichlet L-function.
Order
The order of a character is its order as an element of the group $\widehat{(\mathbb{Z}/m\mathbb{Z})^\times}$, i.e. the smallest positive integer $n$ such that $\chi^n = \chi_0$. Because of the isomorphism $\widehat{(\mathbb{Z}/m\mathbb{Z})^\times} \cong (\mathbb{Z}/m\mathbb{Z})^\times$, the order of $\chi_{m,r}$ is the same as the order of $r$ in $(\mathbb{Z}/m\mathbb{Z})^\times$. The principal character has order 1; other real characters have order 2, and imaginary characters have order 3 or greater. By Lagrange's theorem the order of a character divides the order of $\widehat{(\mathbb{Z}/m\mathbb{Z})^\times}$, which is $\phi(m).$
Real characters
$\chi$ is real or quadratic if all of its values are real (they must be $0, \pm 1$); otherwise it is complex or imaginary.
$\chi$ is real if and only if $\chi^2 = \chi_0$; $\chi_{m,r}$ is real if and only if $r^2 \equiv 1 \pmod m$; in particular, $\chi_{m,m-1}$ is real and non-principal.
Dirichlet's original proof that (which was only valid for prime moduli) took two different forms depending on whether was real or not. His later proof, valid for all moduli, was based on his class number formula.
Real characters are Kronecker symbols; for example, the principal character can be written
.
The real characters in the examples are:
Principal
If the principal character is
Primitive
If the modulus is the absolute value of a fundamental discriminant there is a real primitive character (there are two if the modulus is a multiple of 8); otherwise if there are any primitive characters they are imaginary.
Imprimitive
Applications
L-functions
The Dirichlet L-series for a character $\chi$ is
$$L(s, \chi) = \sum_{n=1}^{\infty} \frac{\chi(n)}{n^s}.$$
This series only converges for $\mathfrak{R}(s) > 1$; it can be analytically continued to a meromorphic function.
Dirichlet introduced the -function along with the characters in his 1837 paper.
Modular forms and functions
Dirichlet characters appear several places in the theory of modular forms and functions. A typical example is
Let and let be primitive.
If
define
,
Then
. If is a cusp form so is
See theta series of a Dirichlet character for another example.
Gauss sum
The Gauss sum of a Dirichlet character modulo $m$ is
$$\tau(\chi) = \sum_{a=1}^{m} \chi(a)\, e^{\frac{2\pi i a}{m}}.$$
It appears in the functional equation of the Dirichlet L-function.
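For a concrete example, the sketch below (ours) computes the Gauss sum of the quadratic character mod 5 (the Legendre symbol) straight from the definition; Gauss's classical evaluation gives $\tau(\chi) = \sqrt{p}$ for $p \equiv 1 \pmod 4$:

```python
import cmath

def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion, for odd prime p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

p = 5
tau = sum(legendre(a, p) * cmath.exp(2j * cmath.pi * a / p) for a in range(1, p))
print(tau)            # approx (2.2360679...+0j), i.e. sqrt(5)
print(abs(tau) ** 2)  # approx 5.0: |tau(chi)|^2 = p for a primitive chi mod p
```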
Jacobi sum
If $\chi$ and $\psi$ are Dirichlet characters mod a prime $p$, their Jacobi sum is
$$J(\chi, \psi) = \sum_{a=2}^{p-1} \chi(a)\psi(1 - a).$$
Jacobi sums can be factored into products of Gauss sums.
Kloosterman sum
If $\chi$ is a Dirichlet character mod $q$ and $\zeta = e^{\frac{2\pi i}{q}}$, the Kloosterman sum $K(a, b; \chi)$ is defined as
$$K(a, b; \chi) = \sum_{r} \chi(r)\, \zeta^{a r + b r^*},$$
where $r$ runs over the residues coprime to $q$ and $r^*$ is the inverse of $r$ mod $q$.
If $b = 0$ it is a Gauss sum.
Sufficient conditions
It is not necessary to establish the defining properties 1) – 3) to show that a function is a Dirichlet character.
From Davenport's book
If $\chi: \mathbb{Z} \to \mathbb{C}$ is such that
1) $\chi(ab) = \chi(a)\chi(b)$,
2) $\chi(a + m) = \chi(a)$,
3) if $\gcd(a, m) > 1$ then $\chi(a) = 0$, but
4) $\chi(a)$ is not always 0,
then $\chi$ is one of the $\phi(m)$ characters mod $m.$
Sárközy's Condition
A Dirichlet character is a completely multiplicative function $f: \mathbb{N} \to \mathbb{C}$ that satisfies a linear recurrence relation: that is, if
$$a_1 f(n + b_1) + \cdots + a_k f(n + b_k) = 0$$
for all positive integers $n$, where the $a_i$ are not all zero and the $b_i$ are distinct, then $f$ is a Dirichlet character.
Chudakov's Condition
A Dirichlet character is a completely multiplicative function $f: \mathbb{N} \to \mathbb{C}$ satisfying the following three properties: a) $f$ takes only finitely many values; b) $f$ vanishes at only finitely many primes; c) there is an $\alpha \in \mathbb{C}$ for which the remainder
$$\left| \sum_{n \leq x} f(n) - \alpha x \right|$$
is uniformly bounded, as $x \to \infty$. This equivalent definition of Dirichlet characters was conjectured by Chudakov in 1956, and proved in 2017 by Klurman and Mangerel.
See also
Character sum
Multiplicative group of integers modulo n
Primitive root modulo n
Multiplicative character
Notes
References
External links
English translation of Dirichlet's 1837 paper on primes in arithmetic progressions
LMFDB Lists 30,397,486 Dirichlet characters of modulus up to 10,000 and their L-functions
Analytic number theory
Zeta and L-functions | Dirichlet character | [
"Mathematics"
] | 2,932 | [
"Analytic number theory",
"Number theory"
] |
174,521 | https://en.wikipedia.org/wiki/Infrastructure | Infrastructure is the set of facilities and systems that serve a country, city, or other area, and encompasses the services and facilities necessary for its economy, households and firms to function. Infrastructure is composed of public and private physical structures such as roads, railways, bridges, airports, public transit systems, tunnels, water supply, sewers, electrical grids, and telecommunications (including Internet connectivity and broadband access). In general, infrastructure has been defined as "the physical components of interrelated systems providing commodities and services essential to enable, sustain, or enhance societal living conditions" and maintain the surrounding environment.
Especially in light of the massive societal transformations needed to mitigate and adapt to climate change, contemporary infrastructure conversations frequently focus on sustainable development and green infrastructure. Acknowledging this importance, the international community has created policy focused on sustainable infrastructure through the Sustainable Development Goals, especially Sustainable Development Goal 9 "Industry, Innovation and Infrastructure".
One way to describe different types of infrastructure is to classify them as two distinct kinds: hard infrastructure and soft infrastructure. Hard infrastructure is the physical networks necessary for the functioning of a modern industrial society or industry. This includes roads, bridges, and railways. Soft infrastructure is all the institutions that maintain the economic, health, social, environmental, and cultural standards of a country. This includes educational programs, official statistics, parks and recreational facilities, law enforcement agencies, and emergency services.
Classifications
A 1987 US National Research Council panel adopted the term "public works infrastructure", referring to:
"... both specific functional modes – highways, streets, roads, and bridges; mass transit; airports and airways; water supply and water resources; wastewater management; solid-waste treatment and disposal; electric power generation and transmission; telecommunications; and hazardous waste management – and the combined system these modal elements comprise. A comprehension of infrastructure spans not only these public works facilities, but also the operating procedures, management practices, and development policies that interact together with societal demand and the physical world to facilitate the transport of people and goods, provision of water for drinking and a variety of other uses, safe disposal of society's waste products, provision of energy where it is needed, and transmission of information within and between communities."
The American Society of Civil Engineers publishes an "Infrastructure Report Card", which represents the organization's opinion on the condition of various infrastructure, every 2–4 years. They grade 16 categories, namely aviation, bridges, dams, drinking water, energy, hazardous waste, inland waterways, levees, parks and recreation, ports, rail, roads, schools, solid waste, transit and wastewater. The United States has received a rating of "D+" on its infrastructure. This aging infrastructure is a result of governmental neglect and inadequate funding. As the United States presumably looks to upgrade its existing infrastructure, sustainable measures could be a consideration in the design, build, and operation plans.
Public
Public infrastructure is that owned or available for use by the public (represented by the government). It includes:
Transport infrastructure – vehicles, road, rail, cable and financing of transport
Aviation infrastructure – air traffic control technology in aviation
Rail transport – trackage, signals, electrification of rails
Road transport – roads, bridges, tunnels
Critical infrastructure – assets required to sustain human life
Energy infrastructure – transmission and storage of fossil fuels and renewable sources
Information and communication infrastructure – systems of information storage and distribution
Public capital – government-owned assets
Public works – municipal infrastructure, maintenance functions and agencies
Municipal solid waste – generation, collection, management of trash/garbage
Sustainable urban infrastructure – technology, architecture, policy for sustainable living
Water supply network – the distribution and maintenance of water supply
Wastewater infrastructure – disposal and treatment of wastewater
Infrastructure-based development
Personal
A way to embody personal infrastructure is to think of it in terms of human capital. Human capital is defined by the Encyclopædia Britannica as "intangible collective resources possessed by individuals and groups within a given population". The goal of personal infrastructure is to determine the quality of the economic agents' values. This results in three major tasks: the task of economic proxies in the economic process (teachers, unskilled and qualified labor, etc.); the importance of personal infrastructure for an individual (short and long-term consumption of education); and the social relevance of personal infrastructure. Essentially, personal infrastructure maps the human impact on infrastructure as it is related to the economy, individual growth, and social impact.
Institutional
Institutional infrastructure branches from the term "economic constitution". According to Gianpiero Torrisi, institutional infrastructure is the object of economic and legal policy. It sets norms and frames the conditions for growth. It refers to the degree of fair treatment of equal economic data and determines the framework within which economic agents may formulate their own economic plans and carry them out in co-operation with others.
Sustainable
Sustainable infrastructure refers to the processes of design and construction that take into consideration their environmental, economic, and social impact. Included in this section are several elements of sustainable schemes, including materials, water, energy, transportation, and waste management infrastructure. Although there are endless other factors of consideration, those will not be covered in this section.
Material
Material infrastructure is defined as "those immobile, non-circulating capital goods that essentially contribute to the production of infrastructure goods and services needed to satisfy basic physical and social requirements of economic agents". There are two distinct qualities of material infrastructures: 1) fulfillment of social needs and 2) mass production. The first characteristic deals with the basic needs of human life. The second characteristic is the non-availability of infrastructure goods and services. Today, there are various materials that can be used to build infrastructure. The most prevalent ones are asphalt, concrete, steel, masonry, wood, polymers and composites.
Economic
According to the business dictionary, economic infrastructure can be defined as "internal facilities of a country that make business activity possible, such as communication, transportation and distribution networks, financial institutions and related international markets, and energy supply systems". Economic infrastructure supports productive activities and events. This includes roads, highways, bridges, airports, cycling infrastructure, water distribution networks, sewer systems, and irrigation plants.
Social
Social infrastructure can be broadly defined as the construction and maintenance of facilities that support social services. Social infrastructures are created to increase social comfort and promote economic activity. These include schools, parks and playgrounds, structures for public safety, waste disposal plants, hospitals, and sports areas.
Core
Core assets provide essential services and have monopolistic characteristics. Investors seeking core infrastructure look for five different characteristics: income, low volatility of returns, diversification, inflation protection, and long-term liability matching. Core infrastructure incorporates all the main types of infrastructure, such as roads, highways, railways, public transportation, water, and gas supply.
Basic
Basic infrastructure refers to main railways, roads, canals, harbors and docks, the electromagnetic telegraph, drainage, dikes, and land reclamation. It consist of the more well-known and common features of infrastructure that we come across in our daily lives (buildings, roads, docks).
Complementary
Complementary infrastructure refers to things like light railways, tramways, and gas/electricity/water supply. To complement something means to bring it to perfection or complete it. Complementary infrastructure deals with the little parts of the engineering world that make life more convenient and efficient. They are needed to ensure successful usage and marketing of an already finished product, like in the case of road bridges. Other examples are lights on sidewalks, landscaping around buildings, and benches where pedestrians can rest.
Applications
Engineering and construction
Engineers generally limit the term "infrastructure" to describe fixed assets that are in the form of a large network; in other words, hard infrastructure. Efforts to devise more generic definitions of infrastructures have typically referred to the network aspects of most of the structures, and to the accumulated value of investments in the networks as assets. One such definition from 1998 defined infrastructure as the network of assets "where the system as a whole is intended to be maintained indefinitely at a specified standard of service by the continuing replacement and refurbishment of its components".
Civil defense and economic development
Civil defense planners and developmental economists generally refer to both hard and soft infrastructure, including public services such as schools and hospitals, emergency services such as police and fire fighting, and basic services in the economic sector. The notion of infrastructure-based development combining long-term infrastructure investments by government agencies at central and regional levels with public private partnerships has proven popular among economists in Asia (notably Singapore and China), mainland Europe, and Latin America.
Military
Military infrastructure is the buildings and permanent installations necessary for the support of military forces, whether they are stationed in bases, being deployed or engaged in operations. Examples include barracks, headquarters, airfields, communications facilities, stores of military equipment, port installations, and maintenance stations.
Communications
Communications infrastructure is the informal and formal channels of communication, political and social networks, or beliefs held by members of particular groups, as well as information technology, software development tools. Still underlying these more conceptual uses is the idea that infrastructure provides organizing structure and support for the system or organization it serves, whether it is a city, a nation, a corporation, or a collection of people with common interests. Examples include IT infrastructure, research infrastructure, terrorist infrastructure, employment infrastructure, and tourism infrastructure.
Related concepts
The term "infrastructure" may be confused with the following overlapping or related concepts.
Land improvement and land development are general terms that in some contexts may include infrastructure, but in the context of a discussion of infrastructure would refer only to smaller-scale systems or works that are not included in infrastructure, because they are typically limited to a single parcel of land, and are owned and operated by the landowner. For example, an irrigation canal that serves a region or district would be included with infrastructure, but the private irrigation systems on individual land parcels would be considered land improvements, not infrastructure. Service connections to municipal service and public utility networks would also be considered land improvements, not infrastructure.
The term "public works" includes government-owned and operated infrastructure as well as public buildings, such as schools and courthouses. Public works generally refers to physical assets needed to deliver public services. Public services include both infrastructure and services generally provided by the government.
Ownership and financing
Infrastructure may be owned and managed by governments or by privately held companies, such as sole public utility or railway companies. Generally, most roads, major airports and other ports, water distribution systems, and sewage networks are publicly owned, whereas most energy and telecommunications networks are privately owned. Publicly owned infrastructure may be paid for from taxes, tolls, or metered user fees, whereas private infrastructure is generally paid for by metered user fees. Major investment projects are generally financed by the issuance of long-term bonds.
Government-owned and operated infrastructure may be developed and operated in the private sector or in public-private partnerships, in addition to in the public sector. In the United States, for example, public spending on infrastructure has varied between 2.3% and 3.6% of GDP since 1950. Many financial institutions invest in infrastructure.
In the developing world
According to researchers at the Overseas Development Institute, the lack of infrastructure in many developing countries represents one of the most significant limitations to economic growth and achievement of the Millennium Development Goals (MDGs). Infrastructure investments and maintenance can be very expensive, especially in such areas as landlocked, rural and sparsely populated countries in Africa. It has been argued that infrastructure investments contributed to more than half of Africa's improved growth performance between 1990 and 2005, and increased investment is necessary to maintain growth and tackle poverty. The returns to investment in infrastructure are very significant, with on average thirty to forty percent returns for telecommunications (ICT) investments, over forty percent for electricity generation, and eighty percent for roads.
Regional differences
The demand for infrastructure both by consumers and by companies is much higher than the amount invested. There are severe constraints on the supply side of the provision of infrastructure in Asia. The infrastructure financing gap between what is invested in Asia-Pacific (around US$48 billion) and what is needed (US$228 billion) is around US$180 billion every year.
In Latin America, three percent of GDP (around US$71 billion) would need to be invested in infrastructure in order to satisfy demand, yet in 2005, for example, only around two percent was invested leaving a financing gap of approximately US$24 billion.
In Africa, in order to reach the seven percent annual growth calculated to be required to meet the MDGs by 2015 would require infrastructure investments of about fifteen percent of GDP, or around US$93 billion a year. In fragile states, over thirty-seven percent of GDP would be required.
Sources of funding for infrastructure
The source of financing for infrastructure varies significantly across sectors. Some sectors are dominated by government spending, others by overseas development aid (ODA), and yet others by private investors. In California, infrastructure financing districts are established by local governments to pay for physical facilities and services within a specified area by using property tax increases. In order to facilitate investment of the private sector in developing countries' infrastructure markets, it is necessary to design risk-allocation mechanisms more carefully, given the higher risks of their markets.
Government spending on infrastructure is lower than it used to be. From the 1930s to 2019, the United States went from spending 4.2% of GDP to 2.5% of GDP on infrastructure. These under-investments have accrued: according to the 2017 ASCE Infrastructure Report Card, from 2016 to 2025 infrastructure in the United States will be underinvested by $2 trillion. Compared to the global GDP percentages, the United States is tied for second-to-last place, with an average percentage of 2.4%. This means that the government spends less money on repairing old infrastructure and on infrastructure as a whole.
In Sub-Saharan Africa, governments spend around US$9.4 billion out of a total of US$24.9 billion. In irrigation, governments represent almost all spending. In transport and energy a majority of investment is government spending. In ICT and water supply and sanitation, the private sector represents the majority of capital expenditure. Overall, between them aid, the private sector, and non-OECD financiers exceed government spending. The private sector spending alone equals state capital expenditure, though the majority is focused on ICT infrastructure investments. External financing increased in the 2000s (decade) and in Africa alone external infrastructure investments increased from US$7 billion in 2002 to US$27 billion in 2009. China, in particular, has emerged as an important investor.
Coronavirus implications
The 2020 COVID-19 pandemic has only exacerbated the underfunding of infrastructure globally that has been accumulating for decades. The pandemic has increased unemployment and has widely disrupted the economy. This has serious impacts on households, businesses, and federal, state and local governments. This is especially detrimental to infrastructure because it is so dependent on funding from government agencies, with state and local governments accounting for approximately 75% of spending on public infrastructure in the United States.
Governments are facing enormous decreases in revenue, economic downturns, overworked health systems, and hesitant workforces, resulting in huge budget deficits across the board. However, they must also scale up public investment to ensure successful reopening, boost growth and employment, and green their economies. The unusually large scale of the packages needed for COVID-19 was accompanied by widespread calls for "greening" them to meet the dual goals of economic recovery and environmental sustainability. However, as of March 2021, only a small fraction of the G20 COVID-19 related fiscal measures was found to be climate friendly.
Sustainable infrastructure
Although it is readily apparent that much effort is needed to repair the economic damage inflicted by the Coronavirus epidemic, an immediate return to business as usual could be environmentally harmful, as shown by the 2007-08 financial crisis in the United States. While the ensuing economic slowdown reduced global greenhouse gas emissions in 2009, emissions reached a record high in 2010, partially due to governments' implemented economic stimulus measures with minimal consideration of the environmental consequences. The concern is whether this same pattern will repeat itself. The post-COVID-19 period could determine whether the world meets or misses the emissions goals of the 2015 Paris Agreement and limits global warming to 1.5 degrees C to 2 degrees C.
As a result of the COVID-19 epidemic, a host of factors could jeopardize a low-carbon recovery plan: this includes reduced attention on the global political stage (the 2020 UN Climate Summit has been postponed to 2021), the relaxing of environmental regulations in pursuit of economic growth, decreased oil prices preventing low-carbon technologies from being competitive, and finally, stimulus programs that take away funds that could have been used to further the process of decarbonization. Research suggests that a recovery plan based on lower-carbon emissions could not only make significant emissions reductions needed to battle climate change, but also create more economic growth and jobs than a high-carbon recovery plan would. In a study published in the Oxford Review of Economic Policy, more than 200 economists and economic officials reported that "green" economic-recovery initiatives performed at least as well as less "green" initiatives. There have also been calls for an independent body that could provide a comparable assessment of countries' fiscal policies, promoting transparency and accountability at the international level.
In addition, in an econometric study published in the Economic Modelling journal, an analysis of government energy technology spending showed that spending on the renewable energy sector created five more jobs per million dollars invested than spending on fossil fuels. Since sustainable infrastructure is more beneficial in both an economic and environmental context, it represents the future of infrastructure. Especially with increasing pressure from climate change and diminishing natural resources, infrastructure not only needs to maintain economic development, job development, and a high quality of life for residents, but also protect the environment and its natural resources.
Sustainable energy
Sustainable energy infrastructure includes types of renewable energy power plants as well as the means of exchange from the plant to the homes and businesses that use that energy. Renewable energy includes well researched and widely implemented methods such as wind, solar, and hydraulic power, as well as newer and less commonly used types of power creation such as fusion energy. Sustainable energy infrastructure must maintain a strong supply relative to demand, and must also maintain sufficiently low prices for consumers so as not to decrease demand. Any type of renewable energy infrastructure that fails to meet these consumption and price requirements will ultimately be forced out of the market by prevailing non renewable energy sources.
Sustainable water
Sustainable water infrastructure is focused on a community's sufficient access to clean, safe drinking water. Water is a public good along with electricity, which means that sustainable water catchment and distribution systems must remain affordable to all members of a population. "Sustainable Water" may refer to a nation or community's ability to be self-sustainable, with enough water to meet multiple needs including agriculture, industry, sanitation, and drinking water. It can also refer to the holistic and effective management of water resources. Increasingly, policy makers and regulators are incorporating Nature-based solutions (NBS or NbS) into attempts to achieve sustainable water infrastructure.
Sustainable waste management
Sustainable waste management systems aim to minimize the amount of waste products produced by individuals and corporations. Commercial waste management plans have transitioned from simple waste removal plans into comprehensive plans focused on reducing the total amount of waste produced before removal. Sustainable waste management is beneficial environmentally, and can also cut costs for businesses that reduce their amount of disposed goods.
Sustainable transportation
Sustainable transportation includes a shift away from private, greenhouse gas emitting cars in favor of adopting methods of transportation that are either carbon neutral or reduce carbon emissions such as bikes or electric bus systems. Additionally, cities must invest in the appropriate built environments for these ecologically preferable modes of transportation. Cities will need to invest in public transportation networks, as well as bike path networks among other sustainable solutions that incentivize citizens to use these alternate transit options. Reducing the urban dependency on cars is a fundamental goal of developing sustainable transportation, and this cannot be accomplished without a coordinated focus on both creating the methods of transportation themselves and providing them with networks that are equally or more efficient than existing car networks such as aging highway systems.
Sustainable materials
Another solution to transition into a more sustainable infrastructure is using more sustainable materials. A material is sustainable if the needed amount can be produced without depleting non-renewable resources. It also should have low environmental impacts by not disrupting the established steady-state equilibrium of it. The materials should also be resilient, renewable, reusable, and recyclable.
Today, concrete is one of the most common materials used in infrastructure. There is twice as much concrete used in construction than all other building materials combined. It is the backbone of industrialization, as it is used in bridges, piers, pipelines, pavements, and buildings. However, while they do serve as a connection between cities, transportation for people and goods, and protection for land against flooding and erosion, they only last for 50 to 100 years. Many were built within the last 50 years, which means many infrastructures need substantial maintenance to continue functioning.
However, concrete is not sustainable. The production of concrete contributes up to 8% of the world's greenhouse gas emissions. A tenth of the world's industrial water usage is from producing concrete. Even transporting the raw materials to concrete production sites adds to airborne pollution. Furthermore, the production sites and the infrastructures themselves all strip away agricultural land that could have been fertile soil or habitats vital to the ecosystem.
Green infrastructure
Green infrastructure is a type of sustainable infrastructure. Green infrastructure uses plant or soil systems to restore some of the natural processes needed to manage water, reduce the effects of disasters such as flooding, and create healthier urban environments. In a more practical sense, it refers to a decentralized network of stormwater management practices, which includes green roofs, trees, bioretention and infiltration, and permeable pavement. Green infrastructure has become an increasingly popular strategy in recent years due to its effectiveness in providing ecological, economic, and social benefits, including positively impacting energy consumption, air quality, and carbon reduction and sequestration.
Green roofs
A green roof is a rooftop that is partially or completely covered with growing vegetation planted over a membrane. It also includes additional layers, including a root barrier and drainage and irrigation systems. There are several categories of green roofs, including extensive (have a growing media depth ranging from two to six inches) and intensive (have a growing media with a depth greater than six inches). One benefit of green roofs is that they reduce stormwater runoff because of its ability to store water in its growing media, reducing the runoff entering the sewer system and waterways, which also decreases the risk of combined sewer overflows. They reduce energy usage since the growing media provides additional insulation, reduces the amount of solar radiation on the roof's surface, and provides evaporative cooling from water in the plants, which reduce the roof surface temperatures and heat influx. Green roofs also reduce atmospheric carbon dioxide since the vegetation sequesters carbon and, since they reduce energy usage and the urban heat island by reducing the roof temperature, they also lower carbon dioxide emissions from electricity generation.
Tree planting
Tree planting provides a host of ecological, social, and economic benefits. Trees can intercept rain, support infiltration and water storage in soil, diminish the impact of raindrops on barren surfaces, minimize soil moisture through transpiration, and they help reduce stormwater runoff. Additionally, trees contribute to recharging local aquifers and improve the health of watershed systems. Trees also reduce energy usage by providing shade and releasing water into the atmosphere which cools the air and reduces the amount of heat absorbed by buildings. Finally, trees improve air quality by absorbing harmful air pollutants reducing the amount of greenhouse gases.
Bioretention and infiltration practices
There are a variety of types of bioretention and infiltration practices, including rain gardens and bioswales. A rain garden is planted in a small depression or natural slope and includes native shrubs and flowers. They temporarily hold and absorb rain water and are effective in removing up to 90% of nutrients and chemicals and up to 80% of sediments from the runoff. As a result, they soak 30% more water than conventional gardens. Bioswales are planted in paved areas like parking lots or sidewalks and are made to allow for overflow into the sewer system by trapping silt and other pollutants, which are normally left over from impermeable surfaces. Both rain gardens and bioswales mitigate flood impacts and prevent stormwater from polluting local waterways; increase the usable water supply by reducing the amount of water needed for outdoor irrigation; improve air quality by minimizing the amount of water going into treatment facilities, which also reduces energy usage and, as a result, reduces air pollution since less greenhouse gases are emitted.
Smart cities
Smart cities use innovative methods of design and implementation in various sectors of infrastructure and planning to create communities that operate at a higher level of relative sustainability than their traditional counterparts. In a sustainable city, urban resilience as well as infrastructure reliability must both be present. Urban resilience is defined by a city's capacity to quickly adapt or recover from infrastructure defects, and infrastructure reliability means that systems must work efficiently while continuing to maximize their output. When urban resilience and infrastructure reliability interact, cities are able to produce the same level of output at similarly reasonable costs as compared to other non sustainable communities, while still maintaining ease of operation and usage.
Masdar City
Masdar City is a proposed zero emission smart city that will be constructed in the United Arab Emirates. Some individuals have referred to this planned settlement as "utopia-like", due to the fact that it will feature multiple sustainable infrastructure elements, including energy, water, waste management, and transportation. Masdar City will have a power infrastructure containing renewable energy methods including solar energy.
Masdar City is located in a desert region, meaning that sustainable collection and distribution of water is dependent on the city's ability to use water at innovative stages of the water cycle. The city will use groundwater, greywater, seawater, blackwater, and other water resources to obtain both drinking and landscaping water.
Initially, Masdar City will be waste-free. Recycling and other waste management and waste reduction methods will be encouraged. Additionally, the city will implement a system to convert waste into fertilizer, which will decrease the amount of space needed for waste accumulation as well as provide an environmentally friendly alternative to traditional fertilizer production methods.
No cars will be allowed in Masdar City, contributing to low carbon emissions within the city boundaries. Instead, alternative transportation options will be prioritized during infrastructure development. This means that a bike lane network will be accessible and comprehensive, and other options will also be available.
See also
Agile infrastructure
Airport infrastructure
Asset Management Plan
Green infrastructure
Infrastructure as a service
Infrastructure asset management
Infrastructure building
Infrastructure security
Logistics
Megaproject
Project finance
Pseudo-urbanization
Public capital
Sustainable architecture
Sustainable engineering
References
Bibliography
Koh, Jae Myong (2018) Green Infrastructure Financing: Institutional Investors, PPPs and Bankable Projects, London: Palgrave Macmillan. .
Larry W. Beeferman, "Pension Fund Investment in Infrastructure: A Resource Paper", Capital Matter (Occasional Paper Series), No. 3 December 2008
A. Eberhard, "Infrastructure Regulation in Developing Countries", PPIAF Working Paper No. 4 (2007) World Bank
M. Nicolas J. Firzli and Vincent Bazi, "Infrastructure Investments in an Age of Austerity: The Pension and Sovereign Funds Perspective", published jointly in Revue Analyse Financière, Q4 2011 issue, pp. 34–37 and USAK/JTW July 30, 2011 (online edition)
Georg Inderst, "Pension Fund Investment in Infrastructure", OECD Working Papers on Insurance and Private Pensions, No. 32 (2009)
External links
Body of Knowledge on Infrastructure Regulation
Next Generation Infrastructures international research programme
Report Card on America's Infrastructure
sustainable sports infrastructure
Dirk van Laak: Infrastructures, version: 1.0, in: Docupedia Zeitgeschichte, 20th may 2021 | Infrastructure | [
"Engineering"
] | 5,749 | [
"Construction",
"Infrastructure"
] |
174,609 | https://en.wikipedia.org/wiki/Chicxulub%20crater | The Chicxulub crater is an impact crater buried underneath the Yucatán Peninsula in Mexico. Its center is offshore, but the crater is named after the onshore community of Chicxulub Pueblo (not the larger coastal town of Chicxulub Puerto). It was formed slightly over 66 million years ago when an asteroid, about 10 kilometers (6 miles) in diameter, struck Earth. The crater is estimated to be 180 kilometers (110 miles) in diameter and 20 kilometers (12 miles) in depth. It is believed to be the second largest impact structure on Earth, and the only one whose peak ring is intact and directly accessible for scientific research.
The crater was discovered by Antonio Camargo and Glen Penfield, geophysicists who had been looking for petroleum in the Yucatán Peninsula during the late 1970s. Penfield was initially unable to obtain evidence that the geological feature was a crater and gave up his search. Later, through contact with Alan R. Hildebrand in 1990, Penfield obtained samples that suggested it was an impact feature. Evidence for the crater's impact origin includes shocked quartz, a gravity anomaly, and tektites in surrounding areas.
The date of the impact coincides with the Cretaceous–Paleogene boundary (commonly known as the K–Pg or K–T boundary). It is now widely accepted that the devastation and climate disruption resulting from the impact was the primary cause of the Cretaceous–Paleogene extinction event, a mass extinction of 75% of plant and animal species on Earth, including all non-avian dinosaurs.
Discovery
In the late 1970s, geologist Walter Alvarez and his father, Nobel Prize-winning scientist Luis Walter Alvarez, put forth their theory that the Cretaceous–Paleogene extinction was caused by an impact event. The main evidence of such an impact was contained in a thin layer of clay present in the Cretaceous–Paleogene boundary (K–Pg boundary) in Gubbio, Italy. The Alvarezes and colleagues reported that it contained an abnormally high concentration of iridium, a chemical element rare on Earth but common in asteroids. Iridium levels in this layer were as much as 160 times above the background level. It was hypothesized that the iridium was spread into the atmosphere when the impactor was vaporized and settled across Earth's surface among other material thrown up by the impact, producing the layer of iridium-enriched clay. At the time, there was no consensus on what caused the Cretaceous–Paleogene extinction and the boundary layer, with theories including a nearby supernova, climate change, or a geomagnetic reversal. The Alvarezes' impact hypothesis was rejected by many paleontologists, who believed that the lack of fossils found close to the K–Pg boundary—the "three-meter problem"—suggested a more gradual die-off of fossil species.
The Alvarezes, joined by Frank Asaro and Helen Michel from University of California, Berkeley, published their paper on the iridium anomaly in Science in June 1980. Almost simultaneously Jan Smit and Jan Hertogen published their iridium findings from Caravaca, Spain, in Nature in May 1980. These papers were followed by other reports of similar iridium spikes at the K–Pg boundary across the globe, and sparked wide interest in the cause of the K–Pg extinction; over 2,000 papers were published in the 1980s on the topic. There were no known impact craters that were the right age and size, spurring a search for a suitable candidate. Recognizing the scope of the work, Lee Hunt and Lee Silver organized a cross-discipline meeting in Snowbird, Utah, in 1981. Unknown to them, evidence of the crater they were looking for was being presented the same week, and would be largely missed by the scientific community.
In 1978, geophysicists Glen Penfield and Antonio Camargo were working for the Mexican state-owned oil company Petróleos Mexicanos (Pemex) as part of an airborne magnetic survey of the Gulf of Mexico north of the Yucatán Peninsula. Penfield's job was to use geophysical data to scout possible locations for oil drilling. In the offshore magnetic data, Penfield noted anomalies whose depth he estimated and mapped. He then obtained onshore gravity data from the 1940s. When the gravity maps and magnetic anomalies were compared, Penfield described a shallow "bullseye", about 180 km in diameter, appearing on the otherwise non-magnetic and uniform surroundings—clear evidence to him of an impact feature. A decade earlier, the same map had suggested a crater to contractor Robert Baltosser, but Pemex corporate policy prevented him from publicizing his conclusion.
Penfield presented his findings to Pemex, who rejected the crater theory, instead deferring to findings that ascribed the feature to volcanic activity. Pemex disallowed release of specific data, but let Penfield and Camargo present the results at the 1981 Society of Exploration Geophysicists conference. That year's conference was under-attended and their report attracted little attention, with many experts on impact craters and the K–Pg boundary attending the Snowbird conference instead. Carlos Byars, a Houston Chronicle journalist who was familiar with Penfield and had seen the gravitational and magnetic data himself, wrote a front-page story on Penfield and Camargo's claim, but the news did not disseminate widely.
Although Penfield had plenty of geophysical data sets, he had no rock cores or other physical evidence of an impact. He knew Pemex had drilled exploratory wells in the region. In 1951, one well bored into what was described as a thick layer of andesite about 1.3 km down. This layer could have resulted from the intense heat and pressure of an Earth impact, but at the time of the borings it was dismissed as a lava dome—a feature uncharacteristic of the region's geology. Penfield was encouraged by William C. Phinney, curator of lunar rocks at the Johnson Space Center, to find these samples to support his hypothesis. Penfield tried to secure site samples, but was told they had been lost or destroyed. When attempts to return to the drill sites to look for corroborating rocks proved fruitless, Penfield abandoned his search, published his findings and returned to his Pemex work. Seeing the 1980 Science paper, Penfield wrote to Walter Alvarez about the Yucatán structure, but received no response.
Alvarez and other scientists continued their search for the crater, although they were searching in oceans based on incorrect analysis of glassy spherules from the K–Pg boundary that suggested the impactor had landed in open water. Unaware of Penfield's discovery, University of Arizona graduate student Alan R. Hildebrand and faculty adviser William V. Boynton looked for a crater near the Brazos River in Texas. Their evidence included greenish-brown clay with surplus iridium, containing shocked quartz grains and small weathered glass beads that looked to be tektites. Thick, jumbled deposits of coarse rock fragments were also present, thought to have been scoured from one place and deposited elsewhere by an impact event. Such deposits occur in many locations but seemed concentrated in the Caribbean Basin at the K–Pg boundary. When Haitian professor Florentine Morás discovered what he thought to be evidence of an ancient volcano on Haiti, Hildebrand suggested it could be a telltale feature of a nearby impact. Tests on samples retrieved from the K–Pg boundary revealed more tektite glass, formed only in the heat of asteroid impacts and high-yield nuclear detonations.
In 1990, Carlos Byars told Hildebrand of Penfield's earlier discovery of a possible impact crater. Hildebrand contacted Penfield and the pair soon secured two drill samples from the Pemex wells, which had been stored in New Orleans for decades. Hildebrand's team tested the samples, which clearly showed shock-metamorphic materials. A team of California researchers surveying satellite images found a cenote (sinkhole) ring centered on the town of Chicxulub Pueblo that matched the one Penfield saw earlier; the cenotes were thought to be caused by subsidence of bolide-weakened lithostratigraphy around the impact crater wall. More recent evidence suggests the crater is 300 km wide, and the 180 km ring observed is an inner wall of the larger crater. Hildebrand, Penfield, Boynton, Camargo, and others published their paper identifying the crater in 1991. The crater was named for the nearby town of Chicxulub Pueblo. Penfield also recalled that part of the motivation for the name was "to give the academics and NASA naysayers a challenging time pronouncing it" after years of dismissing its existence.
In March 2010, forty-one experts from many countries reviewed the available evidence: twenty years' worth of data spanning a variety of fields. They concluded that the impact at Chicxulub triggered the mass extinctions at the K–Pg boundary. Dissenters, notably Gerta Keller of Princeton University, have proposed an alternate culprit: the eruption of the Deccan Traps in what is now the Indian subcontinent. This period of intense volcanism occurred before and after the Chicxulub impact; dissenting studies argue that the worst of the volcanic activity occurred before the impact, and that the role of the Deccan Traps was instead shaping the evolution of surviving species post-impact. A 2013 study compared isotopes in impact glass from the Chicxulub impact with isotopes in ash from the K–Pg boundary, concluding that the two dates agreed almost exactly, within experimental error.
Impact specifics
A 2013 study published in Science estimated the age of the impact as 66,043,000 ± 11,000 years ago (± 43,000 years ago considering systematic error), based on multiple lines of evidence, including argon–argon dating of tektites from Haiti and bentonite horizons overlying the impact horizon in northeastern Montana. This date was supported by a 2015 study based on argon–argon dating of tephra found in lignite beds in the Hell Creek and overlying Fort Union formations in northeastern Montana. A 2018 study based on argon–argon dating of spherules from Gorgonilla Island, Colombia, obtained a slightly different result of 66,051,000 ± 31,000 years ago. The impact has been interpreted to have occurred in the Northern Hemisphere's spring season based on annual isotope curves in sturgeon and paddlefish bones found in an ejecta-bearing sedimentary unit at the Tanis site in southwestern North Dakota. This sedimentary unit is thought to have formed within hours of impact. A 2020 study concluded that the Chicxulub crater was formed by an inclined (45–60° to horizontal) impact from the northeast. The site of the crater at the time of impact was a marine carbonate platform. The water depth at the impact site varied, being shallowest on the western edge of the crater and deepest on the northeastern edge. The seafloor rocks consisted of a thick sequence of Jurassic–Cretaceous marine sediments, predominantly carbonate rock, including dolomite (35–40% of total sequence) and limestone (25–30%), along with evaporites (anhydrite 25–30%) and minor amounts of shale and sandstone (3–4%), underlain by continental crust composed of igneous crystalline basement including granite.
The impactor was around 10 km in diameter—large enough that, had it been set at sea level, it would have risen higher than Mount Everest.
Effects
The impactor's velocity was estimated at around 20 km/s. The kinetic energy of the impact was estimated at roughly 72 teratonnes of TNT. The impact generated winds in excess of 1,000 km/h near the blast's center, and produced a transient cavity about 100 km wide and 30 km deep that later collapsed. This formed a crater mainly under the sea and currently covered by about 600 m of sediment. The impact, expansion of water after filling the crater, and related seismic activity spawned megatsunamis over 100 m tall, with one simulation suggesting the immediate waves from the impact may have reached up to 1.5 km high. The waves scoured the sea floor, leaving ripples underneath what is now Louisiana with average wavelengths of 600 m and average wave heights of 16 m, the largest ripples documented. Material shifted by subsequent earthquakes and the waves reached to what are now Texas and Florida, and may have disturbed sediments as far as 6,000 km from the impact site. The impact triggered a seismic event with an estimated moment magnitude of 9–11.
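For scale, a back-of-envelope check of such energy estimates can be done in a few lines of Python; the diameter, density and velocity used below are assumed round figures chosen for illustration, not values taken from the studies cited above.

    # Order-of-magnitude impact energy; every input is an assumed round figure.
    import math

    diameter_m = 10_000      # assumed impactor diameter (~10 km)
    density = 2_700          # assumed stony-asteroid bulk density, kg/m^3
    velocity = 20_000        # assumed impact velocity, m/s

    mass = density * (4 / 3) * math.pi * (diameter_m / 2) ** 3   # ~1.4e15 kg
    energy_j = 0.5 * mass * velocity ** 2                        # ~2.8e23 J

    megatons = energy_j / 4.184e15          # joules per megaton of TNT
    print(f"{energy_j:.2e} J, about {megatons:.1e} Mt TNT")
    # ~2.83e+23 J, about 6.8e+07 Mt TNT — i.e. tens of teratonnes of TNT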
A cloud of hot dust, ash and steam would have spread from the crater, with as much as 25 trillion metric tons of excavated material being ejected into the atmosphere by the blast. Some of this material escaped orbit, dispersing throughout the Solar System, while some of it fell back to Earth, vaporizing upon re-entry. The rock heated Earth's surface and ignited wildfires, estimated to have enveloped nearly 70% of the planet's forests. The effect on living creatures even hundreds of kilometers away was immense, and much of present-day Mexico and the United States would have been devastated. Fossil evidence for an instantaneous extinction of diverse animals was found in a soil layer only about 10 cm thick in New Jersey, some 2,500 km away from the impact site, indicating that death and burial under debris occurred suddenly and quickly over wide distances on nearby land. Field research from the Hell Creek Formation in North Dakota published in 2019 shows the simultaneous mass extinction of a myriad of species, combined with geological and atmospheric features that are consistent with the impact event.
Due to the relatively shallow water at the impact site, the rock that was vaporized included sulfur-rich gypsum from the lower part of the Cretaceous sequence, and this was injected into the atmosphere. This global dispersal of dust and sulfates would have led to a sudden and catastrophic effect on the climate worldwide, instigating large temperature drops and devastating the food chain. Researchers stated that the impact generated an environmental calamity that extinguished life, but it also induced a vast subsurface hydrothermal system that became an oasis for the recovery of life. Using seismic images of the crater in 2008, scientists determined that the impactor landed in deeper water than previously assumed, which may have resulted in increased sulfate aerosols in the atmosphere as a result of more water vapor being available to react with the vaporized anhydrite. This could have made the impact even deadlier by rapidly cooling the climate and generating acid rain.
The emission of dust and particles could have covered the entire surface of Earth for several years, possibly up to a decade, creating a harsh environment for biological life. Production of carbon dioxide caused by the destruction of carbonate rocks would have led to a sudden greenhouse effect. For a decade or longer, sunlight would have been blocked from reaching the surface of Earth by the dust particles in the atmosphere, cooling the surface dramatically. Photosynthesis by plants would also have been interrupted, affecting the entire food chain. A model of the event developed by Lomax et al. (2001) suggests that net primary productivity rates may have increased to higher than pre-impact levels over the long term because of the high carbon dioxide concentrations.
A long-term local effect of the impact was the creation of the Yucatán sedimentary basin which "ultimately produced favorable conditions for human settlement in a region where surface water is scarce".
Post-discovery investigations
Geophysical data
Two seismic reflection datasets have been acquired over the offshore parts of the crater since its discovery. Older 2D seismic datasets have also been used that were originally acquired for hydrocarbon exploration. A set of three long-record 2D lines, with a total length of about 650 km, was acquired in October 1996 by the BIRPS group. The longest of the lines, Chicx-A, was shot parallel to the coast, while Chicx-B and Chicx-C were shot NW–SE and SSW–NNE respectively. In addition to the conventional seismic reflection imaging, data was recorded onshore to allow for wide-angle refraction imaging.
In 2005, another set of profiles was acquired, further extending the total length of 2D deep-penetration seismic data over the crater. This survey also used ocean bottom seismometers and land stations to allow 3D travel time inversion to improve the understanding of the velocity structure of the crater. The data was concentrated around the interpreted offshore peak ring to help identify possible drilling locations. At the same time, gravity data was acquired along the same profiles. The acquisition was funded by the National Science Foundation (NSF) and Natural Environment Research Council (NERC) with logistical assistance from the National Autonomous University of Mexico (UNAM) and the Centro de Investigación Científica de Yucatán (CICY – Yucatán Center for Scientific Investigation).
Borehole drilling
Intermittent core samples from hydrocarbon exploration boreholes drilled by Pemex on the Yucatán peninsula have provided some useful data. UNAM drilled a series of eight fully-cored boreholes in 1995, three of which penetrated deep enough to reach the ejecta deposits outside the main crater rim (UNAM-5, 6, and 7). Between 2001 and 2002, a scientific borehole was drilled near the Hacienda Yaxcopoil, known as Yaxcopoil-1 (or more commonly Yax-1), to a depth of 1,511 m below the surface, as part of the International Continental Scientific Drilling Program. The borehole was cored continuously, passing through about 100 m of impactites. Three fully-cored boreholes were also drilled by the Comisión Federal de Electricidad (Federal Electricity Commission) with UNAM. One of them, (BEV-4), was deep enough to reach the ejecta deposits.
In 2016, a joint United Kingdom–United States team obtained the first offshore core samples from the peak ring in the central zone of the crater with the drilling of the borehole known as M0077A, part of Expedition 364 of the International Ocean Discovery Program. The borehole reached about 1,335 m below the seafloor.
Morphology
The form and structure (geomorphology) of the Chicxulub crater is known mainly from geophysical data. It has a well-defined concentric multi-ring structure. The outermost ring was identified using seismic reflection data. It extends up to about 130 km from the crater center, and is a ring of normal faults, throwing down towards the crater center, marking the outer limit of significant crustal deformation. This makes it one of the three largest impact structures on Earth. Moving toward the center, the next ring is the main crater rim, also known as the "inner rim", which correlates with a ring of cenotes onshore and a major circular Bouguer gravity gradient anomaly. This ring has a radius that varies between about 70 and 85 km. The next inner ring structure is the peak ring. The area between the inner rim and peak ring is described as the "terrace zone", characterized by a series of fault blocks defined by normal faults dipping towards the crater center, sometimes referred to as "slump blocks". The peak ring is about 80 km in diameter and of variable height, standing higher above the base of the crater in the west and northwest than in the north, northeast, and east. The central part of the crater lies above a zone where the mantle was uplifted such that the Mohorovičić discontinuity is shallower than regional values.
The ring structures are best developed to the south, west and northwest, becoming more indistinct towards the north and northeast of the structure. This is interpreted to be a result of variable water depth at the time of impact, with less well-defined rings in the areas where the water was significantly deeper.
Geology
Pre-impact geology
Before the impact, the geology of the Yucatán area, sometimes referred to as the "target rocks", consisted of a sequence of mainly Cretaceous limestones, overlying red beds of uncertain age above an unconformity with the dominantly granitic basement. The basement forms part of the Maya Block and information about its makeup and age in the Yucatán area has come only from drilling results around the Chicxulub crater and the analysis of basement material found as part of the ejecta at more distant K–Pg boundary sites. The Maya block is one of a group of crustal blocks found at the edge of the Gondwana continent. Zircon ages are consistent with the presence of an underlying Grenville age crust, with large amounts of late Ediacaran arc-related igneous rocks, interpreted to have formed in the Pan-African orogeny. Late Paleozoic granitoids (the distinctive "pink granite") were found in the peak ring borehole M0077A, with an estimated age of 326 ± 5 million years ago (Carboniferous). These have an adakitic composition and are interpreted to represent the effects of slab detachment during the Marathon-Ouachita orogeny, part of the collision between Laurentia and Gondwana that created the Pangaea supercontinent.
Red beds of variable thickness overlie the granitic basement, particularly in the southern part of the area. These continental clastic rocks are thought to be of Triassic-to-Jurassic age, although they may extend into the Lower Cretaceous. The lower part of the Lower Cretaceous sequence consists of dolomite with interbedded anhydrite and gypsum, with the upper part being limestone, with dolomite and anhydrite in part. The thickness of the Lower Cretaceous varies considerably between boreholes. The Upper Cretaceous sequence is mainly platform limestone, with marl and interbedded anhydrite; its thickness also varies from borehole to borehole. There is evidence for a Cretaceous basin within the Yucatán area that has been named the Yucatán Trough, running approximately south–north, widening northwards, explaining the observed thickness variations.
Impact rocks
The most common observed impact rocks are suevites, found in many of the boreholes drilled around the Chicxulub crater. Most of the suevites were resedimented soon after the impact by the resurgence of oceanic water into the crater. This gave rise to a layer of suevite extending from the inner part of the crater out as far as the outer rim.
Impact melt rocks are thought to fill the central part of the crater, with a maximum thickness of about 3 km. The samples of melt rock that have been studied have overall compositions similar to that of the basement rocks, with some indications of mixing with a carbonate source, presumed to be derived from the Cretaceous carbonates. An analysis of melt rocks sampled by the M0077A borehole indicates two types of melt rock: an upper impact melt (UIM), which has a clear carbonate component as shown by its overall chemistry and the presence of rare limestone clasts, and a lower impact melt-bearing unit (LIMB) that lacks any carbonate component. The difference between the two impact melts is interpreted to be a result of the upper part of the initial impact melt, represented by the LIMB in the borehole, becoming mixed with materials from the shallow part of the crust either falling back into the crater or being brought back by the resurgence forming the UIM.
The "pink granite", a granitoid rich in alkali feldspar found in the peak ring borehole shows many deformation features that record the extreme strains associated with the formation of the crater and the subsequent development of the peak ring. The granitoid has an unusually low density and P-wave velocity compared to typical granitic basement rocks. Study of the core from M0077A shows the following deformation features in apparent order of development: pervasive fracturing along and through grain boundaries, a high density of shear faults, bands of cataclasite and ultra-cataclasite and some ductile shear structures. This deformation sequence is interpreted to result from initial crater formation involving acoustic fluidization followed by shear faulting with the development of cataclasites with fault zones containing impact melts.
The peak ring drilling below the sea floor also discovered evidence of a massive hydrothermal system, which modified approximately 1.4 × 10^5 km³ of Earth's crust and lasted for hundreds of thousands of years. These hydrothermal systems may provide support for the impact origin of life hypothesis for the Hadean eon, when the entire surface of Earth was affected by impactors much larger than the Chicxulub impactor.
Post-impact geology
After the immediate effects of the impact had stopped, sedimentation in the Chicxulub area returned to the shallow water platform carbonate depositional environment that characterised it before the impact. The sequence, which dates back as far as the Paleocene, consists of marl and limestone. The K–Pg boundary inside the crater is significantly deeper than in the surrounding area.
On the Yucatán peninsula, the inner rim of the crater is marked by clusters of cenotes, which are the surface expression of a zone of preferential groundwater flow, moving water from a recharge zone in the south to the coast through a karstic aquifer system. From the cenote locations, the karstic aquifer is clearly related to the underlying crater rim, possibly through higher levels of fracturing caused by differential compaction.
Astronomical origin and type of impactor
There is broad consensus that the Chicxulub impactor was a C-type asteroid with a carbonaceous chondrite-like composition, rather than a comet. These types of asteroids originally formed in the outer Solar System, beyond the orbit of Jupiter. In 1998, a meteorite, approximately 2.5 mm across, was described from a deep sea sediment core from the North Pacific, from a sediment sequence spanning the Cretaceous–Paleogene boundary (when the site was located in the central Pacific), with the meteorite being found at the base of the K–Pg boundary iridium anomaly within the sediment core. The meteorite was suggested to represent a fragment of the Chicxulub impactor. Analysis suggested that it best fitted the criteria of the CV, CO and CR groups of carbonaceous chondrites. A 2021 paper suggested, based on geochemical evidence including the excess of the chromium isotope 54Cr and the ratios of platinum group metals found in marine impact layers, that the impactor matched the characteristics of CM or CR carbonaceous chondrites. Ruthenium isotope ratios found in impact layers also support a carbonaceous chondrite composition for the impactor.
A 2007 Nature report proposed a specific astronomical origin for the Chicxulub asteroid. The authors, William F. Bottke, David Vokrouhlický, and David Nesvorný, argued that a collision in the asteroid belt 160 million years ago between a 170 km diameter parent body and another 60 km diameter body resulted in the Baptistina family of asteroids, the largest surviving member of which is 298 Baptistina. They proposed that the Chicxulub asteroid was also a member of this group. Subsequent evidence has cast doubt on this theory. A 2009 spectrographic analysis revealed that 298 Baptistina has a different composition more typical of an S-type asteroid than the presumed carbonaceous chondrite composition of the Chicxulub impactor. In 2011, data from the Wide-field Infrared Survey Explorer revised the date of the collision which created the Baptistina family to about 80 million years ago, allowing only 15 million years for the process of resonance and collision, which takes many tens of millions of years. In 2010, another hypothesis implicated the newly discovered asteroid 354P/LINEAR, a member of the Flora family, as a possible remnant cohort of the K–Pg impactor. In 2021, a numerical simulation study argued that the impactor likely originated in the outer main part of the asteroid belt.
Some scholars have argued that the impactor was a comet, not an asteroid. Two papers in 1984 proposed it to be a comet originating from the Oort cloud, and it was proposed in 1992 that tidal disruption of comets could potentially increase impact rates. In 2021, Avi Loeb and a colleague suggested in Scientific Reports that the impactor was a fragment from a disrupted comet. A rebuttal in Astronomy & Geophysics countered that Loeb et al. had ignored that the amount of iridium deposited around the globe was too large for a comet of the size implied by the crater, and that they had overestimated likely comet impact rates. They concluded that all available evidence strongly favors an asteroid impactor, effectively ruling out a comet. Ruthenium isotope ratios in impact layers also strongly support an asteroid rather than a comet nature for the impactor.
See also
Timeline of Cretaceous–Paleogene extinction event research
Tenejapa-Lacandón Formation
Nadir crater
List of impact structures on Earth
List of possible impact structures on Earth
Barberton Greenstone Belt
Permian–Triassic extinction event
References
External links
Chicxulub Crater
Chicxulub: Variations in the magnitude of the gravity field at sea level image (Lunar and Planetary Institute, USRA)
"Doubts on Dinosaurs" – Scientific American
Papers and presentations resulting from the 2016 Chicxulub drilling project
Cretaceous impact craters
Extinction events
Impact craters of Mexico
Mérida, Yucatán
Natural history of the Caribbean
Natural history of the Yucatán Peninsula
Oceans
Dinosaurs | Chicxulub crater | ["Biology"] | 5,978 | ["Evolution of the biosphere", "Extinction events"] |
174,686 | https://en.wikipedia.org/wiki/Carl%20Bosch | Carl Bosch (; 27 August 1874 – 26 April 1940) was a German chemist and engineer and Nobel Laureate in Chemistry. He was a pioneer in the field of high-pressure industrial chemistry and founder of IG Farben, at one point the world's largest chemical company.
He also developed the Haber–Bosch process, important for the large-scale synthesis of fertilizers and explosives. It is estimated that one-third of annual global food production uses ammonia from the Haber–Bosch process, and that this supports nearly half of the world's population. In addition, he co-developed the so-called Bosch-Meiser process for the industrial production of urea.
Biography
Early years
Carl Bosch was born in Cologne to a successful gas and plumbing supplier. His father was Carl Friedrich Alexander Bosch (1843–1904) and his uncle was Robert Bosch, who pioneered the development of the spark plug and founded the multinational company Bosch. Carl, trying to decide between a career in metallurgy or chemistry, studied at the Königlich Technische Hochschule in Charlottenburg (now Technische Universität Berlin) and the University of Leipzig from 1892 to 1898.
Career
At the University of Leipzig, Bosch studied under Johannes Wislicenus and obtained his doctorate in 1898 for research in organic chemistry. In 1899 he took an entry-level job at BASF, then Germany's largest chemical and dye firm. From 1909 until 1913 he transformed Fritz Haber's tabletop demonstration of a method to fix nitrogen using high-pressure chemistry into the industrial-scale Haber–Bosch process for producing synthetic nitrate, the starting point for a vast range of industrial compounds, consumer goods, and commercial products. His primary contribution was to expand the scale of the process, enabling the industrial production of vast quantities of synthetic nitrate. To do this, he had to construct a plant and equipment that would function effectively under high gas pressures and high temperatures. Bosch was also responsible for finding a more practical catalyst than the scarce osmium and expensive uranium being used by Haber.
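For context, the overall reaction that the process runs — standard textbook chemistry rather than anything stated in this article — is:

    N2 + 3 H2 ⇌ 2 NH3        (ΔH ≈ −92 kJ per mole of N2)

Because four moles of gas become two, Le Chatelier's principle means high pressure pushes the equilibrium toward ammonia, which is why Bosch's high-pressure engineering, rather than new chemistry, was the decisive step.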
There were many more obstacles as well, such as designing large compressors and safe high-pressure furnaces. A means was needed to provide pure hydrogen gas in quantity as the feedstock, and cheap, safe methods had to be developed to clean and process the ammonia produced. The first full-scale Haber–Bosch plant was erected in Oppau, Germany, now part of Ludwigshafen. With the process complete, Bosch could synthesize large amounts of ammonia for industry and agriculture, greatly increasing agricultural yields throughout the world. This work won him the Nobel Prize in Chemistry in 1931.
After World War I Bosch extended high-pressure techniques to the production of synthetic fuel via the Bergius process and methanol. In 1925 Bosch helped found IG Farben, and was the first head of the company. From 1935, Bosch was chairman of the board of directors.
He received the Siemens-Ring in 1924 for his contributions to applied research and his support of basic research. In 1931 he was awarded the Nobel Prize in Chemistry together with Friedrich Bergius for the introduction of high pressure chemistry. Today the Haber–Bosch process produces 100 million tons of nitrogen fertilizer every year. After the Nazi seizure of power, Bosch was one of the industrialists selected for membership in Hans Frank's Academy for German Law in October 1933, where he served on the General Economic Council (Generalrat der Wirtschaft). In December 1933, Bosch received a contract to expand the production of synthetic oil, a development which was essential to Adolf Hitler's future war plans.
Personal life
Bosch married Else Schilbach in 1902. Carl and Else had a son and a daughter together. A critic of many Nazi policies, including anti-Semitism, Bosch was gradually relieved of his high positions, and fell into depression and alcoholism. He died in Heidelberg.
Legacy
The Haber–Bosch process today consumes more than one percent of humanity's energy production and is responsible for feeding roughly one-third of its population. On average, one-half of the nitrogen in a human body comes from synthetically fixed sources, the product of a Haber–Bosch plant. Bosch was an ardent collector of insects, minerals, and gems. His collected meteorites and other mineral samples were loaned to Yale University, and eventually purchased by the Smithsonian. He was an amateur astronomer with a well-equipped private observatory. The asteroid 7414 Bosch was named in his honour.
Carl Bosch along with Fritz Haber were voted the world's most influential chemical engineers of all time by members of the Institution of Chemical Engineers.
The Haber–Bosch process, quite possibly the best-known chemical process in the world, captures nitrogen from the air and converts it to ammonia, and helped drive the Green Revolution that has fed the world's growing population.
Bosch also won numerous awards including an honorary doctorate from Technische Hochschule Karlsruhe (1918), the Liebig Memorial Medal of the Association of German Chemists along with the Bunsen Medal of the German Bunsen Society, the Siemens Ring, and the Golden Grashof Memorial medal of the VDI. In 1931 he was awarded the Nobel Prize in Chemistry for his contribution to the invention of chemical high-pressure methods. He also received the Exner medal from the Austrian Trade Association and the Carl Lueg Memorial Medal. Bosch also valued his membership of various German and foreign scientific academies, and his chairmanship of the Kaiser Wilhelm Society, of which he became President in 1937.
Awards and honours
1931: Nobel Prize in Chemistry
1919: Liebig Medal of German Chemists Association
1924: Werner von Siemens Ring of Stiftung Werner-von-Siemens-Ring foundation
1932: Wilhelm Exner Medal of Austrian Trade Association
Bunsen Medal of the German Bunsen Society
Golden Grashof Memorial medal of the VDI
Carl Lueg Memorial Medal
See also
German inventors and discoverers
Fritz Haber
References
Further reading
Thomas Hager, The Alchemy of Air: A Jewish Genius, a Doomed Tycoon, and the Scientific Discovery That Fed the World but Fueled the Rise of Hitler (2008) .
External links
.
Fritz Haber and Carl Bosch
BASF Where Carl Worked
BASF's Production
1874 births
1940 deaths
BASF people
Engineers from Cologne
German chemical engineers
German industrialists
German Nobel laureates
IG Farben people
Leipzig University alumni
Members of the Academy for German Law
Members of the Prussian Academy of Sciences
Nobel laureates in Chemistry
People from the Rhine Province
Scientists from Cologne
Technische Universität Berlin alumni
Werner von Siemens Ring laureates
German organic chemists | Carl Bosch | ["Chemistry"] | 1,382 | ["Organic chemists", "German organic chemists"] |
174,704 | https://en.wikipedia.org/wiki/Space%20syntax | Space syntax is a set of theories and techniques for the analysis of spatial configurations. It was conceived by Bill Hillier, Julienne Hanson, and colleagues at The Bartlett, University College London in the late 1970s to early 1980s to develop insights into the mutually constructive relation between society and space. As space syntax has evolved, certain measures have been found to correlate with human spatial behaviour, and space syntax has thus come to be used to forecast likely effects of architectural and urban space on users.
Thesis
The general idea is that spaces can be broken down into components, analysed as networks of choices, then represented as maps and graphs that describe the relative connectivity and integration of those spaces. It rests on three basic conceptions of space:
an isovist (popularised by Michael Benedikt at University of Texas), or viewshed or visibility polygon, the field of view from any particular point
axial space (idea popularised by Bill Hillier at UCL), a straight sight-line and possible path
convex space (popularised by John Peponis, and his collaborators at Georgia Tech), an occupiable void where, if imagined as a wireframe diagram, no line between two of its points goes outside its perimeter: all points within the polygon are visible to all other points within the polygon.
The three most popular ways of analysing a street network are integration, choice and depth distance.
Integration
Integration measures the number of street-to-street transitions needed from a street segment to reach all other street segments in the network, using shortest paths. The analysis can also be limited to a radius 'n', so that segments further than this radius are not taken into account. The first intersecting segment requires only one transition, the second two transitions and so on. The result of the analysis finds street segments that require the fewest turns to reach all other streets; these are called 'most integrated' and are usually represented with hotter colours, such as red or yellow. Integration can also be analysed at a local scale instead of the scale of the whole network. In the case of radius 4, for instance, only four turns are counted departing from each street segment. The measure is closely related to the concept of centrality in network analysis.
Theoretically, the integration measure shows the cognitive complexity of reaching a street, and is often argued to 'predict' the pedestrian use of a street: the easier it is to reach a street, the more popular it should be.
While there is some evidence of this being true, the method is biased towards long, straight streets that intersect with many other streets. Streets such as Oxford Street in London come out as especially strongly integrated. However, a slightly curvy street of the same length would typically be segmented into individual straight segments, not counted as a single line, which makes curvy streets appear less integrated in the analysis.
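A minimal sketch of the integration computation on a toy axial map (the four-line network and the crude reciprocal-of-mean-depth normalisation are illustrative assumptions; production tools apply further normalisations such as relative asymmetry):

    # Topological step depth between axial lines via breadth-first search.
    from collections import deque

    streets = {                 # axial line -> lines it intersects (toy data)
        "A": ["B"],
        "B": ["A", "C", "D"],
        "C": ["B", "D"],
        "D": ["B", "C"],
    }

    def total_depth(start):
        """Sum of street-to-street transitions from start to all others."""
        depth = {start: 0}
        queue = deque([start])
        while queue:
            line = queue.popleft()
            for nxt in streets[line]:
                if nxt not in depth:
                    depth[nxt] = depth[line] + 1
                    queue.append(nxt)
        return sum(depth.values())

    for line in streets:
        td = total_depth(line)
        mean_depth = td / (len(streets) - 1)
        print(line, td, round(1 / mean_depth, 3))   # higher = more integrated

On this toy network, line B reaches every other line in one step and comes out as the most integrated.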
Choice
The choice measure is easiest to understand as a 'water-flow' in the street network. Imagine that each street segment is given an initial load of one unit of water, which then starts pouring from the starting street segment to all segments that successively connect to it. Each time an intersection appears, the remaining value of flow is divided equally among the splitting streets, until all the other street segments in the graph are reached. For instance, at the first intersection with a single other street, the initial value of one is split into two remaining values of one half, and allocated to the two intersecting street segments. Moving further down, the remaining one half value is again split among the intersecting streets and so on. When the same procedure has been conducted using each segment as a starting point for the initial value of one, a graph of final values appears. The streets with the highest total values of accumulated flow are said to have the highest choice values.
Like integration, choice analysis can be restricted to limited local radii, for instance 400 m, 800 m, or 1600 m. Interpreting choice analysis is trickier than interpreting integration. Space syntax argues that these values often predict the vehicular traffic flow of streets, but, strictly speaking, choice analysis can also be thought to represent the number of intersections that need to be crossed to reach a street. However, since flow values are divided (not subtracted) at each intersection, the output shows an exponential distribution, so it is considered best to take a base-two logarithm of the final values in order to get a more accurate picture.
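The 'water-flow' rule described above can be sketched directly. The toy network and the equal-split propagation below are a literal reading of the text; production space syntax software computes choice as a form of shortest-path betweenness, so this is a paraphrase rather than the canonical algorithm:

    # 'Water-flow' choice: pour one unit from each segment, splitting the
    # remaining value equally among unvisited streets at every intersection.
    import math
    from collections import deque

    streets = {
        "A": ["B"],
        "B": ["A", "C", "D"],
        "C": ["B", "D"],
        "D": ["B", "C"],
    }

    def pour(start):
        flow = {s: 0.0 for s in streets}
        visited = {start}
        queue = deque([(start, 1.0)])
        while queue:
            line, value = queue.popleft()
            unvisited = [n for n in streets[line] if n not in visited]
            if not unvisited:
                continue
            share = value / len(unvisited)     # split at the intersection
            for n in unvisited:
                visited.add(n)
                flow[n] += share
                queue.append((n, share))
        return flow

    totals = {s: 0.0 for s in streets}
    for start in streets:
        for line, value in pour(start).items():
            totals[line] += value

    for line, value in totals.items():
        print(line, round(value, 3), round(math.log2(value), 3))  # log2, as above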
Depth distance
Depth distance is the most intuitive of the analysis methods. It explains the linear distance from the center point of each street segment to the center points of all the other segments. If every segment is successively chosen as a starting point, a graph of cumulative final values is achieved. The streets with lowest Depth Distance values are said to be nearest to all the other streets. Again, the search radius can be limited to any distance.
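A minimal sketch of depth distance, using Dijkstra's algorithm over centre-to-centre distances (the network, coordinates and weights are invented for illustration):

    # Cumulative metric distance from each segment centre to all others.
    import heapq

    network = {   # segment -> (neighbour, centre-to-centre distance in metres)
        "A": [("B", 120.0)],
        "B": [("A", 120.0), ("C", 80.0), ("D", 95.0)],
        "C": [("B", 80.0), ("D", 60.0)],
        "D": [("B", 95.0), ("C", 60.0)],
    }

    def depth_distance(start):
        dist = {start: 0.0}
        heap = [(0.0, start)]
        while heap:
            d, seg = heapq.heappop(heap)
            if d > dist.get(seg, float("inf")):
                continue                      # stale heap entry
            for nxt, w in network[seg]:
                if d + w < dist.get(nxt, float("inf")):
                    dist[nxt] = d + w
                    heapq.heappush(heap, (d + w, nxt))
        return sum(dist.values())

    for seg in network:
        print(seg, depth_distance(seg))       # lowest value = nearest to all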
Applications
From these components it is thought to be possible to quantify and describe how easily navigable any space is, useful for the design of museums, airports, hospitals, and other settings where wayfinding is a significant issue. Space syntax has also been applied to predict the correlation between spatial layouts and social effects such as crime, traffic flow, and sales per unit area.
In general, the analysis uses one of many software programs that allow researchers to analyse graphs of one (or more) of the primary spatial components.
History
Space syntax originated as a programme of research in the early 1970s when Bill Hillier, Adrian Leaman and Alan Beattie came together at the School of Environmental Studies at University College London (now part of the Bartlett School of Architecture). Bill Hillier had been appointed Director of the Unit for Architectural Studies (UAS) as successor to John Musgrove. They established a new MSc programme in Advanced Architectural Studies and embarked on a programme of research aimed at developing a theoretical basis for architecture. Previously Bill Hillier had written papers with others as secretary to the RIBA, notably 'Knowledge and Design' and 'How is Design Possible', which laid the theoretical foundation for a series of studies that sought to clarify how the built environment relates to society. Among the first cohort of students on the MSc in Advanced Architectural Studies was Julienne Hanson, who went on to co-author The Social Logic of Space (SLS) with Bill Hillier (CUP, 1984). This brought together in one place a comprehensive review of the programme of research up to that point, but also developed a full theoretical account of how the buildings and settlements we construct are not merely the product of social processes but also play a role in producing social forms. SLS also developed an analytic approach to the representation and quantification of spatial configuration at the building and the settlement scale, making possible both comparative studies and analysis of the relationship between spatial configuration and aspects of social function in the built environment. These methods, coupled to the social theories, have turned out to have a good deal of explanatory power. Space syntax has grown to become a tool used around the world in a variety of research areas and design applications in architecture, urban design, urban planning, transport and interior design. Many prominent design applications have been made by the architectural and urban planning practice Space Syntax Limited, which was founded at The Bartlett, University College London in 1989. These include the redesign of Trafalgar Square with Foster and Partners and the Pedestrian Movement Model for the City of London.
Over the past decade, Space syntax techniques have been used for research in archaeology, information technology, urban and human geography, and anthropology. Since 1997, the Space syntax community has held biennial conferences, and many journal papers have been published on the subject, chiefly in Environment and Planning B.
Criticism
Space syntax's mathematical reliability has come under scrutiny because of a seeming paradox that arises under certain geometric configurations with 'axial maps', one of the method's primary representations of spatial configuration. This paradox was proposed by Carlo Ratti at the Massachusetts Institute of Technology, but comprehensively refuted in a passionate academic exchange with Bill Hillier and Alan Penn. There have been moves to combine space syntax with more traditional transport engineering models, using intersections as nodes and constructing visibility graphs to link them, by researchers including Bin Jiang, Valerio Cutini and Michael Batty. Recently there has also been research development that combines space syntax with geographic accessibility analysis in GIS, such as the place syntax-models developed by the research group Spatial Analysis and Design at the Royal Institute of Technology in Stockholm, Sweden. A series of interdisciplinary works published in 2006 by Vito Latora, Sergio Porta and colleagues, proposing a network approach to street centrality analysis and design, have highlighted space syntax' contribution to decades of previous studies in the physics of spatial complex networks.
See also
Permeability (spatial and transport planning)
Spatial network
Spatial network analysis software
Visibility graph analysis
Fuzzy architectural spatial analysis
References
Further reading
Hillier B. and Hanson J. (1984), The Social Logic of Space, Cambridge: Cambridge University Press.
Hillier B. (1999), Space is the Machine: A Configurational Theory of Architecture, Cambridge: Cambridge University Press.
Pafka, E. et al. (2020), Limits of space syntax for urban design: Axiality, scale and sinuosity. Environment and Planning B: Planning and Design, 47(3), 508–522.
van Nes A. and Yamu C. (2021) Introduction to Space Syntax in Urban Studies, Springer.
External links
UCL Space Syntax homepage
Architectural theory
Environmental psychology
Urban planning | Space syntax | ["Engineering", "Environmental_science"] | 1,929 | ["Architectural theory", "Environmental psychology", "Urban planning", "Environmental social science", "Architecture"] |
174,705 | https://en.wikipedia.org/wiki/Algebraic%20number%20theory | Algebraic number theory is a branch of number theory that uses the techniques of abstract algebra to study the integers, rational numbers, and their generalizations. Number-theoretic questions are expressed in terms of properties of algebraic objects such as algebraic number fields and their rings of integers, finite fields, and function fields. These properties, such as whether a ring admits unique factorization, the behavior of ideals, and the Galois groups of fields, can resolve questions of primary importance in number theory, like the existence of solutions to Diophantine equations.
History
Diophantus
The beginnings of algebraic number theory can be traced to Diophantine equations, named after the 3rd-century Alexandrian mathematician, Diophantus, who studied them and developed methods for the solution of some kinds of Diophantine equations. A typical Diophantine problem is to find two integers x and y such that their sum, and the sum of their squares, equal two given numbers A and B, respectively:

x + y = A, x² + y² = B.
Diophantine equations have been studied for thousands of years. For example, the solutions to the quadratic Diophantine equation x² + y² = z² are given by the Pythagorean triples, originally solved by the Babylonians (c. 1800 BC). Solutions to linear Diophantine equations, such as 26x + 65y = 13, may be found using the Euclidean algorithm (c. 5th century BC).
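A minimal sketch of the extended Euclidean algorithm solving 26x + 65y = 13 (the function name is mine; only the standard library is used):

    def extended_gcd(a, b):
        """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    g, x, y = extended_gcd(26, 65)     # g == 13, so 26x + 65y = 13 is solvable
    assert 26 * x + 65 * y == g
    print(x, y)                        # prints -2 1: 26*(-2) + 65*1 = 13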
Diophantus's major work was the Arithmetica, of which only a portion has survived.
Fermat
Fermat's Last Theorem was first conjectured by Pierre de Fermat in 1637, famously in the margin of a copy of Arithmetica where he claimed he had a proof that was too large to fit in the margin. No successful proof was published until 1995 despite the efforts of countless mathematicians during the 358 intervening years. The unsolved problem stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century.
Gauss
One of the founding works of algebraic number theory, the Disquisitiones Arithmeticae (Latin: Arithmetical Investigations) is a textbook of number theory written in Latin by Carl Friedrich Gauss in 1798 when Gauss was 21 and first published in 1801 when he was 24. In this book Gauss brings together results in number theory obtained by mathematicians such as Fermat, Euler, Lagrange and Legendre and adds important new results of his own. Before the Disquisitiones was published, number theory consisted of a collection of isolated theorems and conjectures. Gauss brought the work of his predecessors together with his own original work into a systematic framework, filled in gaps, corrected unsound proofs, and extended the subject in numerous ways.
The Disquisitiones was the starting point for the work of other nineteenth century European mathematicians including Ernst Kummer, Peter Gustav Lejeune Dirichlet and Richard Dedekind. Many of the annotations given by Gauss are in effect announcements of further research of his own, some of which remained unpublished. They must have appeared particularly cryptic to his contemporaries; we can now read them as containing the germs of the theories of L-functions and complex multiplication, in particular.
Dirichlet
In a couple of papers in 1838 and 1839 Peter Gustav Lejeune Dirichlet proved the first class number formula, for quadratic forms (later refined by his student Leopold Kronecker). The formula, which Jacobi called a result "touching the utmost of human acumen", opened the way for similar results regarding more general number fields. Based on his research of the structure of the unit group of quadratic fields, he proved the Dirichlet unit theorem, a fundamental result in algebraic number theory.
He first used the pigeonhole principle, a basic counting argument, in the proof of a theorem in diophantine approximation, later named after him Dirichlet's approximation theorem. He published important contributions to Fermat's last theorem, for which he proved the cases n = 5 and n = 14, and to the biquadratic reciprocity law. The Dirichlet divisor problem, for which he found the first results, is still an unsolved problem in number theory despite later contributions by other researchers.
Dedekind
Richard Dedekind's study of Lejeune Dirichlet's work was what led him to his later study of algebraic number fields and ideals. In 1863, he published Lejeune Dirichlet's lectures on number theory as Vorlesungen über Zahlentheorie ("Lectures on Number Theory").
The 1879 and 1894 editions of the Vorlesungen included supplements introducing the notion of an ideal, fundamental to ring theory. (The word "Ring", introduced later by Hilbert, does not appear in Dedekind's work.) Dedekind defined an ideal as a subset of a set of numbers, composed of algebraic integers that satisfy polynomial equations with integer coefficients. The concept underwent further development in the hands of Hilbert and, especially, of Emmy Noether. Ideals generalize Ernst Eduard Kummer's ideal numbers, devised as part of Kummer's 1843 attempt to prove Fermat's Last Theorem.
Hilbert
David Hilbert unified the field of algebraic number theory with his 1897 treatise Zahlbericht (literally "report on numbers"). He also resolved a significant number-theory problem formulated by Waring in 1770. As with the finiteness theorem, he used an existence proof that shows there must be solutions for the problem rather than providing a mechanism to produce the answers. He then had little more to publish on the subject; but the emergence of Hilbert modular forms in the dissertation of a student means his name is further attached to a major area.
He made a series of conjectures on class field theory. The concepts were highly influential, and his own contribution lives on in the names of the Hilbert class field and of the Hilbert symbol of local class field theory. Results were mostly proved by 1930, after work by Teiji Takagi.
Artin
Emil Artin established the Artin reciprocity law in a series of papers (1924; 1927; 1930). This law is a general theorem in number theory that forms a central part of global class field theory. The term "reciprocity law" refers to a long line of more concrete number theoretic statements which it generalized, from the quadratic reciprocity law and the reciprocity laws of Eisenstein and Kummer to Hilbert's product formula for the norm symbol. Artin's result provided a partial solution to Hilbert's ninth problem.
Modern theory
Around 1955, Japanese mathematicians Goro Shimura and Yutaka Taniyama observed a possible link between two apparently completely distinct branches of mathematics, elliptic curves and modular forms. The resulting modularity theorem (at the time known as the Taniyama–Shimura conjecture) states that every elliptic curve is modular, meaning that it can be associated with a unique modular form.
It was initially dismissed as unlikely or highly speculative, but was taken more seriously when number theorist André Weil found evidence supporting it, yet no proof; as a result the "astounding" conjecture was often known as the Taniyama–Shimura-Weil conjecture. It became a part of the Langlands program, a list of important conjectures needing proof or disproof.
From 1993 to 1994, Andrew Wiles provided a proof of the modularity theorem for semistable elliptic curves, which, together with Ribet's theorem, provided a proof for Fermat's Last Theorem. Almost every mathematician at the time had previously considered both Fermat's Last Theorem and the Modularity Theorem either impossible or virtually impossible to prove, even given the most cutting-edge developments. Wiles first announced his proof in June 1993 in a version that was soon recognized as having a serious gap at a key point. The proof was corrected by Wiles, partly in collaboration with Richard Taylor, and the final, widely accepted version was released in September 1994, and formally published in 1995. The proof uses many techniques from algebraic geometry and number theory, and has many ramifications in these branches of mathematics. It also uses standard constructions of modern algebraic geometry, such as the category of schemes and Iwasawa theory, and other 20th-century techniques not available to Fermat.
Basic notions
Failure of unique factorization
An important property of the ring of integers Z is that it satisfies the fundamental theorem of arithmetic, that every (positive) integer has a factorization into a product of prime numbers, and this factorization is unique up to the ordering of the factors. This may no longer be true in the ring of integers O of an algebraic number field K.
A prime element is an element p of O such that if p divides a product ab, then it divides one of the factors a or b. This property is closely related to primality in the integers, because any positive integer satisfying this property is either 1 or a prime number. However, it is strictly weaker. For example, −2 is not a prime number because it is negative, but it is a prime element. If factorizations into prime elements are permitted, then, even in the integers, there are alternative factorizations such as

6 = 2 · 3 = (−2) · (−3).
In general, if u is a unit, meaning a number with a multiplicative inverse in O, and if p is a prime element, then up is also a prime element. Numbers such as p and up are said to be associate. In the integers, the primes p and −p are associate, but only one of these is positive. Requiring that prime numbers be positive selects a unique element from among a set of associated prime elements. When K is not the rational numbers, however, there is no analog of positivity. For example, in the Gaussian integers Z[i], the numbers 1 + 2i and −2 + i are associate because the latter is the product of the former by i, but there is no way to single out one as being more canonical than the other. This leads to equations such as

5 = (1 + 2i)(1 − 2i) = (2 + i)(2 − i),

which prove that in Z[i], it is not true that factorizations are unique up to the order of the factors. For this reason, one adopts the definition of unique factorization used in unique factorization domains (UFDs). In a UFD, the prime elements occurring in a factorization are only expected to be unique up to units and their ordering.
However, even with this weaker definition, many rings of integers in algebraic number fields do not admit unique factorization. There is an algebraic obstruction called the ideal class group. When the ideal class group is trivial, the ring is a UFD. When it is not, there is a distinction between a prime element and an irreducible element. An irreducible element x is an element such that if x = yz, then either y or z is a unit. These are the elements that cannot be factored any further. Every element in O admits a factorization into irreducible elements, but it may admit more than one. This is because, while all prime elements are irreducible, some irreducible elements may not be prime. For example, consider the ring Z[√−5]. In this ring, the numbers 3, 2 + √−5 and 2 − √−5 are irreducible. This means that the number 9 has two factorizations into irreducible elements,

9 = 3² = (2 + √−5)(2 − √−5).
This equation shows that 3 divides the product (2 + √−5)(2 − √−5) = 9. If 3 were a prime element, then it would divide 2 + √−5 or 2 − √−5, but it does not, because all elements divisible by 3 are of the form 3a + 3b√−5. Similarly, 2 + √−5 and 2 − √−5 divide the product 3², but neither of these elements divides 3 itself, so neither of them are prime. As there is no sense in which the elements 3, 2 + √−5 and 2 − √−5 can be made equivalent, unique factorization fails in Z[√−5]. Unlike the situation with units, where uniqueness could be repaired by weakening the definition, overcoming this failure requires a new perspective.
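A short numerical sketch of the key step (the norm N(a + b√−5) = a² + 5b² is standard, but the code and its search bound are only illustrative): Z[√−5] contains no element of norm 2 or 3, so the norm-9 elements 3, 2 + √−5 and 2 − √−5 can have no proper factors.

    # The norm is multiplicative, so a proper factor of a norm-9 element
    # would need norm 3; we check that norms 2 and 3 are never represented.
    def norm_represented(n, bound=20):
        return any(a * a + 5 * b * b == n
                   for a in range(-bound, bound + 1)
                   for b in range(-bound, bound + 1))

    for n in (2, 3, 9):
        print(n, norm_represented(n))   # 2 False, 3 False, 9 True

    import cmath
    root = cmath.sqrt(-5)
    print((2 + root) * (2 - root))      # (9+0j): the second factorization of 9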
Factorization into prime ideals
If I is an ideal in O, then there is always a factorization

I = p1^e1 ⋯ pt^et,

where each pi is a prime ideal, and where this expression is unique up to the order of the factors. In particular, this is true if I is the principal ideal generated by a single element. This is the strongest sense in which the ring of integers of a general number field admits unique factorization. In the language of ring theory, it says that rings of integers are Dedekind domains.
When O is a UFD, every prime ideal is generated by a prime element. Otherwise, there are prime ideals which are not generated by prime elements. In Z[√−5], for instance, the ideal (2, 1 + √−5) is a prime ideal which cannot be generated by a single element.
Historically, the idea of factoring ideals into prime ideals was preceded by Ernst Kummer's introduction of ideal numbers. These are numbers lying in an extension field E of K. This extension field is now known as the Hilbert class field. By the principal ideal theorem, every prime ideal of O generates a principal ideal of the ring of integers of E. A generator of this principal ideal is called an ideal number. Kummer used these as a substitute for the failure of unique factorization in cyclotomic fields. These eventually led Richard Dedekind to introduce a forerunner of ideals and to prove unique factorization of ideals.
An ideal which is prime in the ring of integers in one number field may fail to be prime when extended to a larger number field. Consider, for example, the prime numbers p. The corresponding ideals pZ are prime ideals of the ring Z. However, when such an ideal is extended to the Gaussian integers to obtain pZ[i], it may or may not be prime. For example, the factorization 2 = (1 + i)(1 − i) implies that

2Z[i] = (1 + i)(1 − i)Z[i] = ((1 + i)Z[i])²;

note that because 1 + i = (1 − i) · i, the ideals generated by 1 + i and 1 − i are the same. A complete answer to the question of which ideals remain prime in the Gaussian integers is provided by Fermat's theorem on sums of two squares. It implies that for an odd prime number p, pZ[i] is a prime ideal if p ≡ 3 (mod 4) and is not a prime ideal if p ≡ 1 (mod 4). This, together with the observation that the ideal (1 + i)Z[i] is prime, provides a complete description of the prime ideals in the Gaussian integers. Generalizing this simple result to more general rings of integers is a basic problem in algebraic number theory. Class field theory accomplishes this goal when K is an abelian extension of Q (that is, a Galois extension with abelian Galois group).
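A minimal brute-force sketch of the criterion (the helper name is mine): an odd prime p fails to stay prime in Z[i] exactly when p ≡ 1 (mod 4), equivalently when it is a sum of two squares.

    def two_squares(p):
        """Return (a, b) with p == a*a + b*b, or None if no such pair exists."""
        for a in range(1, int(p ** 0.5) + 1):
            b = int((p - a * a) ** 0.5)
            if b * b == p - a * a:
                return a, b
        return None

    for p in (3, 5, 7, 13, 17, 19):            # small odd primes
        print(p, p % 4, two_squares(p))
    # 5 = 1^2 + 2^2 gives (5) = (1 + 2i)(1 - 2i); 3 and 7 remain prime.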
Ideal class group
Unique factorization fails if and only if there are prime ideals that fail to be principal. The object which measures the failure of prime ideals to be principal is called the ideal class group. Defining the ideal class group requires enlarging the set of ideals in a ring of algebraic integers so that they admit a group structure. This is done by generalizing ideals to fractional ideals. A fractional ideal is an additive subgroup J of K which is closed under multiplication by elements of O, meaning that xJ ⊆ J if x ∈ O. All ideals of O are also fractional ideals. If I and J are fractional ideals, then the set IJ of all products of an element in I and an element in J is also a fractional ideal. This operation makes the set of non-zero fractional ideals into a group. The group identity is the ideal (1) = O, and the inverse of J is a (generalized) ideal quotient:

\[ J^{-1} = (O : J) = \{ x \in K : xJ \subseteq O \}. \]
The principal fractional ideals, meaning the ones of the form Ox where x ∈ K×, form a subgroup of the group of all non-zero fractional ideals. The quotient of the group of non-zero fractional ideals by this subgroup is the ideal class group. Two fractional ideals I and J represent the same element of the ideal class group if and only if there exists an element x ∈ K such that xI = J. Therefore, the ideal class group makes two fractional ideals equivalent if one is as close to being principal as the other is. The ideal class group is generally denoted Cl K, Cl O, or Pic O (with the last notation identifying it with the Picard group in algebraic geometry).
The number of elements in the class group is called the class number of K. The class number of Q(√−5) is 2. This means that there are only two ideal classes, the class of principal fractional ideals, and the class of a non-principal fractional ideal such as (2, 1 + √−5).
The ideal class group has another description in terms of divisors. These are formal objects which represent possible factorizations of numbers. The divisor group Div K is defined to be the free abelian group generated by the prime ideals of O. There is a group homomorphism div from K×, the non-zero elements of K up to multiplication, to Div K. Suppose that x ∈ K satisfies

\[ (x) = \mathfrak{p}_1^{e_1} \cdots \mathfrak{p}_t^{e_t}. \]

Then div x is defined to be the divisor

\[ \operatorname{div} x = \sum_{i=1}^{t} e_i \, [\mathfrak{p}_i]. \]
The kernel of div is the group of units in O, while the cokernel is the ideal class group. In the language of homological algebra, this says that there is an exact sequence of abelian groups (written multiplicatively),

\[ 1 \to O^\times \to K^\times \to \operatorname{Div} K \to \operatorname{Cl} K \to 1. \]
Real and complex embeddings
Some number fields, such as Q(√2), can be specified as subfields of the real numbers. Others, such as Q(√−1), cannot. Abstractly, such a specification corresponds to a field homomorphism K → R or K → C. These are called real embeddings and complex embeddings, respectively.
A real quadratic field Q(√a), with a ∈ Q, a > 0, and a not a perfect square, is so-called because it admits two real embeddings but no complex embeddings. These are the field homomorphisms which send √a to √a and to −√a, respectively. Dually, an imaginary quadratic field Q(√−a) admits no real embeddings but admits a conjugate pair of complex embeddings. One of these embeddings sends √−a to i√a, while the other sends it to its complex conjugate, −i√a.
Conventionally, the number of real embeddings of K is denoted r₁, while the number of conjugate pairs of complex embeddings is denoted r₂. The signature of K is the pair (r₁, r₂). It is a theorem that r₁ + 2r₂ = d, where d is the degree of K.
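For illustration (examples added here), small fields realize different signatures:

\[ \mathbf{Q}(\sqrt{2}) : (r_1, r_2) = (2, 0), \qquad \mathbf{Q}(i) : (0, 1), \qquad \mathbf{Q}(\sqrt[3]{2}) : (1, 1), \]

each satisfying r₁ + 2r₂ = d with degrees 2, 2, and 3 respectively; the cubic field has one real embedding (sending ∛2 to the real cube root of 2) and one conjugate pair of complex embeddings.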
Considering all embeddings at once determines a function

\[ M : K \to \mathbf{R}^{r_1} \oplus \mathbf{C}^{r_2}, \]

or equivalently M : K → R^{r₁} ⊕ R^{2r₂}. This is called the Minkowski embedding.
The subspace of the codomain fixed by complex conjugation is a real vector space of dimension d = r₁ + 2r₂ called Minkowski space. Because the Minkowski embedding is defined by field homomorphisms, multiplication of elements of K by an element x ∈ K corresponds to multiplication by a diagonal matrix in the Minkowski embedding. The dot product on Minkowski space corresponds to the trace form ⟨x, y⟩ = Tr(xy).
The image of O under the Minkowski embedding is a d-dimensional lattice. If x₁, …, x_d is a basis for this lattice, then det(Tr(xᵢxⱼ)) is the discriminant of O. The discriminant is denoted Δ or D. The covolume of the image of O is √|Δ|.
Places
Real and complex embeddings can be put on the same footing as prime ideals by adopting a perspective based on valuations. Consider, for example, the integers. In addition to the usual absolute value function |·| : Q → R, there are p-adic absolute value functions |·|p : Q → R, defined for each prime number p, which measure divisibility by p. Ostrowski's theorem states that these are all possible absolute value functions on Q (up to equivalence). Therefore, absolute values are a common language to describe both the real embedding of Q and the prime numbers.
A place of an algebraic number field is an equivalence class of absolute value functions on K. There are two types of places. There is a 𝔭-adic absolute value for each prime ideal 𝔭 of O, and, like the p-adic absolute values, it measures divisibility. These are called finite places. The other type of place is specified using a real or complex embedding of K and the standard absolute value function on R or C. These are infinite places. Because absolute values are unable to distinguish between a complex embedding and its conjugate, a complex embedding and its conjugate determine the same place. Therefore, there are r₁ real places and r₂ complex places. Because places encompass the primes, places are sometimes referred to as primes. When this is done, finite places are called finite primes and infinite places are called infinite primes. If v is a valuation corresponding to an absolute value, then one frequently writes v ∣ ∞ to mean that v is an infinite place and v ∤ ∞ to mean that it is a finite place.
Considering all the places of the field together produces the adele ring of the number field. The adele ring allows one to simultaneously track all the data available using absolute values. This produces significant advantages in situations where the behavior at one place can affect the behavior at other places, as in the Artin reciprocity law.
Places at infinity geometrically
There is a geometric analogy for places at infinity which holds on the function fields of curves. For example, let k = F_q and let X be a smooth, projective, algebraic curve. The function field F = k(X) has many absolute values, or places, and each corresponds to a point on the curve. If X is the projective completion of an affine curve X̂ then the points in X ∖ X̂ correspond to the places at infinity. Then, the completion of F at one of these points gives an analogue of the 𝔭-adics.
For example, if X = P¹ then its function field is isomorphic to k(t), where t is an indeterminate and k(t) is the field of fractions of polynomials in t. Then, a place v_p at a point p of the curve measures the order of vanishing or the order of a pole of a fraction of polynomials at the point p: for a fraction g/h of polynomials, v_p(g/h) is the order of vanishing of g at p minus the order of vanishing of h at p. The function field of the completion at the place v_p is then k((t − p)), which is the field of power series in the variable t − p, so an element is of the form

\[ \sum_{k \ge -N} a_k (t - p)^k \]

for some N ∈ N. For the place at infinity, this corresponds to the function field k((1/t)), which are power series of the form

\[ \sum_{k \ge -N} b_k \left( \tfrac{1}{t} \right)^k. \]
Units
The integers have only two units, 1 and −1. Other rings of integers may admit more units. The Gaussian integers have four units, the previous two as well as ±i. The Eisenstein integers Z[exp(2πi/3)] have six units. The integers in real quadratic number fields have infinitely many units. For example, in Z[√3], every power of 2 + √3 is a unit, and all these powers are distinct.
In general, the group of units of O, denoted O×, is a finitely generated abelian group. The fundamental theorem of finitely generated abelian groups therefore implies that it is a direct sum of a torsion part and a free part. Reinterpreting this in the context of a number field, the torsion part consists of the roots of unity that lie in O. This group is cyclic. The free part is described by Dirichlet's unit theorem. This theorem says that the rank of the free part is r₁ + r₂ − 1. Thus, for example, the only fields for which the rank of the free part is zero are Q and the imaginary quadratic fields. A more precise statement giving the structure of O× ⊗Z Q as a Galois module for the Galois group of K/Q is also possible.
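As a quick check (added), the unit rank r₁ + r₂ − 1 for the signatures listed earlier:

\[ \mathbf{Q} : 1 + 0 - 1 = 0, \qquad \mathbf{Q}(i) : 0 + 1 - 1 = 0, \qquad \mathbf{Q}(\sqrt{2}) : 2 + 0 - 1 = 1, \qquad \mathbf{Q}(\sqrt[3]{2}) : 1 + 1 - 1 = 1, \]

matching the facts that Q and the imaginary quadratic fields have only roots of unity as units, while real quadratic fields have a fundamental unit of infinite order.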
The free part of the unit group can be studied using the infinite places of K. Consider the function

\[ L : K^\times \to \mathbf{R}^{r_1 + r_2}, \qquad L(x) = (\log |x|_v)_v, \]

where v varies over the infinite places of K and |·|_v is the absolute value associated with v. The function L is a homomorphism from K× to a real vector space. It can be shown that the image of O× is a lattice that spans the hyperplane defined by x₁ + ⋯ + x_{r₁+r₂} = 0. The covolume of this lattice is the regulator of the number field. One of the simplifications made possible by working with the adele ring is that there is a single object, the idele class group, that describes both the quotient by this lattice and the ideal class group.
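For instance (an added example), in Q(√2) the fundamental unit is 1 + √2, and since (1 + √2)(√2 − 1) = 1, its image under L is

\[ L(1 + \sqrt{2}) = \big( \log(1 + \sqrt{2}),\; \log|1 - \sqrt{2}| \big) = \big( \log(1 + \sqrt{2}),\; -\log(1 + \sqrt{2}) \big), \]

which lies on the hyperplane x₁ + x₂ = 0 as the theorem requires; the regulator is log(1 + √2) ≈ 0.881.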
Zeta function
The Dedekind zeta function of a number field, analogous to the Riemann zeta function, is an analytic object which describes the behavior of prime ideals in O. When K is an abelian extension of Q, Dedekind zeta functions are products of Dirichlet L-functions, with there being one factor for each Dirichlet character. The trivial character corresponds to the Riemann zeta function. When K is a Galois extension, the Dedekind zeta function is the Artin L-function of the regular representation of the Galois group of K/Q, and it has a factorization in terms of irreducible Artin representations of the Galois group.
The zeta function is related to the other invariants described above by the class number formula.
Local fields
Completing a number field K at a place w gives a complete field. If the valuation is Archimedean, one obtains R or C; if it is non-Archimedean and lies over a prime p of the rationals, one obtains a finite extension K_w/Q_p: a complete, discretely valued field with finite residue field. This process simplifies the arithmetic of the field and allows the local study of problems. For example, the Kronecker–Weber theorem can be deduced easily from the analogous local statement. The philosophy behind the study of local fields is largely motivated by geometric methods. In algebraic geometry, it is common to study varieties locally at a point by localizing to a maximal ideal. Global information can then be recovered by gluing together local data. This spirit is adopted in algebraic number theory. Given a prime in the ring of algebraic integers in a number field, it is desirable to study the field locally at that prime. Therefore, one localizes the ring of algebraic integers to that prime and then completes the fraction field much in the spirit of geometry.
Major results
Finiteness of the class group
One of the classical results in algebraic number theory is that the ideal class group of an algebraic number field K is finite. This is a consequence of Minkowski's theorem, since there are only finitely many integral ideals with norm less than a fixed positive integer. The order of the class group is called the class number, and is often denoted by the letter h.
Dirichlet's unit theorem
Dirichlet's unit theorem provides a description of the structure of the multiplicative group of units O× of the ring of integers O. Specifically, it states that O× is isomorphic to G × Zr, where G is the finite cyclic group consisting of all the roots of unity in O, and r = r1 + r2 − 1 (where r1 (respectively, r2) denotes the number of real embeddings (respectively, pairs of conjugate non-real embeddings) of K). In other words, O× is a finitely generated abelian group of rank r1 + r2 − 1 whose torsion consists of the roots of unity in O.
Reciprocity laws
In terms of the Legendre symbol, the law of quadratic reciprocity for positive odd primes p and q states

\[ \left( \frac{p}{q} \right) \left( \frac{q}{p} \right) = (-1)^{\frac{p-1}{2} \cdot \frac{q-1}{2}}. \]
A reciprocity law is a generalization of the law of quadratic reciprocity.
There are several different ways to express reciprocity laws. The early reciprocity laws found in the 19th century were usually expressed in terms of a power residue symbol (p/q) generalizing the quadratic reciprocity symbol, that describes when a prime number is an nth power residue modulo another prime, and gave a relation between (p/q) and (q/p). Hilbert reformulated the reciprocity laws as saying that a product over p of Hilbert symbols (a,b/p), taking values in roots of unity, is equal to 1. Artin's reformulated reciprocity law states that the Artin symbol from ideals (or ideles) to elements of a Galois group is trivial on a certain subgroup. Several more recent generalizations express reciprocity laws using cohomology of groups or representations of adelic groups or algebraic K-groups, and their relationship with the original quadratic reciprocity law can be hard to see.
Class number formula
The class number formula relates many important invariants of a number field to a special value of its Dedekind zeta function.
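For reference, a commonly stated form of the formula (supplied here, since the original line only names it) is

\[ \lim_{s \to 1} (s - 1)\, \zeta_K(s) = \frac{2^{r_1} (2\pi)^{r_2} \, h_K \, \mathrm{Reg}_K}{w_K \sqrt{|D_K|}}, \]

where h_K is the class number, Reg_K the regulator, w_K the number of roots of unity contained in K, and D_K the discriminant, tying together the invariants introduced above.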
Related areas
Algebraic number theory interacts with many other mathematical disciplines. It uses tools from homological algebra. Via the analogy of function fields vs. number fields, it relies on techniques and ideas from algebraic geometry. Moreover, the study of higher-dimensional schemes over Z instead of number rings is referred to as arithmetic geometry. Algebraic number theory is also used in the study of arithmetic hyperbolic 3-manifolds.
See also
Class field theory
Kummer theory
Locally compact field
Tamagawa number
Notes
Further reading
Introductory texts
Intermediate texts
Graduate level texts
External links
Number theory | Algebraic number theory | [
"Mathematics"
] | 5,789 | [
"Discrete mathematics",
"Algebraic number theory",
"Number theory"
] |
174,706 | https://en.wikipedia.org/wiki/Laplace%20operator | In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols ∇·∇, ∇² (where ∇ is the nabla operator), or Δ. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian of a function f at a point p measures by how much the average value of f over small spheres or balls centered at p deviates from f(p).
The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics: the Laplacian of the gravitational potential due to a given mass density distribution is a constant multiple of that density distribution. Solutions of Laplace's equation are called harmonic functions and represent the possible gravitational potentials in regions of vacuum.
The Laplacian occurs in many differential equations describing physical phenomena. Poisson's equation describes electric and gravitational potentials; the diffusion equation describes heat and fluid flow; the wave equation describes wave propagation; and the Schrödinger equation describes the wave function in quantum mechanics. In image processing and computer vision, the Laplacian operator has been used for various tasks, such as blob and edge detection. The Laplacian is the simplest elliptic operator and is at the core of Hodge theory as well as the results of de Rham cohomology.
Definition
The Laplace operator is a second-order differential operator in the n-dimensional Euclidean space, defined as the divergence (∇·) of the gradient (∇f). Thus if f is a twice-differentiable real-valued function, then the Laplacian of f is the real-valued function defined by:

\[ \Delta f = \nabla^2 f = \nabla \cdot \nabla f, \]

where the latter notations derive from formally writing:

\[ \nabla = \left( \frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_n} \right). \]

Explicitly, the Laplacian of f is thus the sum of all the unmixed second partial derivatives in the Cartesian coordinates xᵢ:

\[ \Delta f = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2}. \]
As a second-order differential operator, the Laplace operator maps C^k functions to C^{k−2} functions for k ≥ 2. It is a linear operator Δ : C^k(R^n) → C^{k−2}(R^n), or more generally, an operator Δ : C^k(Ω) → C^{k−2}(Ω) for any open set Ω ⊆ R^n.
Alternatively, the Laplace operator can be defined as:

\[ \Delta f(x) = \lim_{R \to 0} \frac{2n}{R^2} \big( \bar f_S(x, R) - f(x) \big) = \lim_{R \to 0} \frac{2n}{A_{n-1} R^{n+1}} \oint_{S_R} \big( f(y) - f(x) \big) \, dS(y), \]

where n is the dimension of the space, \( \bar f_S(x, R) \) is the average value of f on the surface of an n-sphere of radius R, \( \oint_{S_R} \cdots \, dS \) is the surface integral over an n-sphere of radius R, and A_{n−1} is the hypervolume of the boundary of a unit n-sphere.
Motivation
Diffusion
In the physical theory of diffusion, the Laplace operator arises naturally in the mathematical description of equilibrium. Specifically, if u is the density at equilibrium of some quantity such as a chemical concentration, then the net flux of u through the boundary ∂V (also called S) of any smooth region V is zero, provided there is no source or sink within V:

\[ \int_S \nabla u \cdot \mathbf{n} \, dS = 0, \]

where n is the outward unit normal to the boundary of V. By the divergence theorem,

\[ \int_V \operatorname{div}(\nabla u) \, dV = \int_S \nabla u \cdot \mathbf{n} \, dS = 0. \]

Since this holds for all smooth regions V, one can show that it implies:

\[ \operatorname{div}(\nabla u) = \Delta u = 0. \]
The left-hand side of this equation is the Laplace operator, and the entire equation is known as Laplace's equation. Solutions of the Laplace equation, i.e. functions whose Laplacian is identically zero, thus represent possible equilibrium densities under diffusion.
The Laplace operator itself has a physical interpretation for non-equilibrium diffusion as the extent to which a point represents a source or sink of chemical concentration, in a sense made precise by the diffusion equation. This interpretation of the Laplacian is also explained by the following fact about averages.
Averages
Given a twice continuously differentiable function f : R^n → R and a point p ∈ R^n, the average value of f over the ball with radius h centered at p is:

\[ \bar f_B(p, h) = f(p) + \frac{\Delta f(p)}{2(n+2)} h^2 + o(h^2) \quad \text{as } h \to 0. \]

Similarly, the average value of f over the sphere (the boundary of a ball) with radius h centered at p is:

\[ \bar f_S(p, h) = f(p) + \frac{\Delta f(p)}{2n} h^2 + o(h^2) \quad \text{as } h \to 0. \]
Density associated with a potential
If V denotes the electrostatic potential associated to a charge distribution q, then the charge distribution itself is given by the negative of the Laplacian of V:

\[ q = -\varepsilon_0 \Delta V, \]

where ε₀ is the electric constant.
This is a consequence of Gauss's law. Indeed, if Ω is any smooth region with boundary ∂Ω, then by Gauss's law the flux of the electrostatic field E across the boundary is proportional to the charge enclosed:

\[ \int_{\partial \Omega} \mathbf{E} \cdot \mathbf{n} \, dS = \int_\Omega \operatorname{div} \mathbf{E} \, dx = \frac{1}{\varepsilon_0} \int_\Omega q \, dx, \]

where the first equality is due to the divergence theorem. Since the electrostatic field is the (negative) gradient of the potential, this gives:

\[ -\int_\Omega \operatorname{div}(\nabla V) \, dx = \frac{1}{\varepsilon_0} \int_\Omega q \, dx. \]

Since this holds for all regions Ω, we must have

\[ \operatorname{div}(\nabla V) = \Delta V = -\frac{q}{\varepsilon_0}. \]
The same approach implies that the negative of the Laplacian of the gravitational potential is the mass distribution. Often the charge (or mass) distribution are given, and the associated potential is unknown. Finding the potential function subject to suitable boundary conditions is equivalent to solving Poisson's equation.
Energy minimization
Another motivation for the Laplacian appearing in physics is that solutions to Δf = 0 in a region U are functions that make the Dirichlet energy functional stationary:

\[ E(f) = \frac{1}{2} \int_U \lVert \nabla f \rVert^2 \, dx. \]

To see this, suppose f : U → R is a function, and u : U → R is a function that vanishes on the boundary of U. Then:

\[ \left. \frac{d}{d\varepsilon} \right|_{\varepsilon = 0} E(f + \varepsilon u) = \int_U \nabla f \cdot \nabla u \, dx = -\int_U u \, \Delta f \, dx, \]

where the last equality follows using Green's first identity. This calculation shows that if Δf = 0, then E is stationary around f. Conversely, if E is stationary around f, then Δf = 0 by the fundamental lemma of calculus of variations.
Coordinate expressions
Two dimensions
The Laplace operator in two dimensions is given by:

In Cartesian coordinates,

\[ \Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}, \]

where x and y are the standard Cartesian coordinates of the xy-plane.

In polar coordinates,

\[ \Delta f = \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial f}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 f}{\partial \theta^2} = \frac{\partial^2 f}{\partial r^2} + \frac{1}{r} \frac{\partial f}{\partial r} + \frac{1}{r^2} \frac{\partial^2 f}{\partial \theta^2}, \]

where r represents the radial distance and θ the angle.
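As a quick consistency check (an added example), take f = x² + y² = r²; the Cartesian and polar forms agree:

\[ \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = 2 + 2 = 4, \qquad \frac{1}{r} \frac{\partial}{\partial r} \left( r \cdot 2r \right) = \frac{4r}{r} = 4. \]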
Three dimensions
In three dimensions, it is common to work with the Laplacian in a variety of different coordinate systems.
In Cartesian coordinates,

\[ \Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}. \]

In cylindrical coordinates,

\[ \Delta f = \frac{1}{\rho} \frac{\partial}{\partial \rho} \left( \rho \frac{\partial f}{\partial \rho} \right) + \frac{1}{\rho^2} \frac{\partial^2 f}{\partial \varphi^2} + \frac{\partial^2 f}{\partial z^2}, \]

where ρ represents the radial distance, φ the azimuth angle and z the height.

In spherical coordinates:

\[ \Delta f = \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial f}{\partial r} \right) + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta} \left( \sin\theta \frac{\partial f}{\partial \theta} \right) + \frac{1}{r^2 \sin^2\theta} \frac{\partial^2 f}{\partial \varphi^2}, \]

or

\[ \Delta f = \frac{1}{r} \frac{\partial^2}{\partial r^2} (r f) + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta} \left( \sin\theta \frac{\partial f}{\partial \theta} \right) + \frac{1}{r^2 \sin^2\theta} \frac{\partial^2 f}{\partial \varphi^2}; \]

by expanding the first and second terms, these expressions read

\[ \Delta f = \frac{\partial^2 f}{\partial r^2} + \frac{2}{r} \frac{\partial f}{\partial r} + \frac{1}{r^2} \frac{\partial^2 f}{\partial \theta^2} + \frac{\cos\theta}{r^2 \sin\theta} \frac{\partial f}{\partial \theta} + \frac{1}{r^2 \sin^2\theta} \frac{\partial^2 f}{\partial \varphi^2}, \]

where φ represents the azimuthal angle and θ the zenith angle or co-latitude. In particular, the above is equivalent to

\[ \Delta f = \frac{\partial^2 f}{\partial r^2} + \frac{2}{r} \frac{\partial f}{\partial r} + \frac{1}{r^2} \Delta_{S^2} f, \]

where Δ_{S²} is the Laplace–Beltrami operator on the unit sphere.
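A classical worked example (added here): the Newtonian potential f = 1/r is harmonic away from the origin, since the radial part of the spherical form gives

\[ \Delta \left( \frac{1}{r} \right) = \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \cdot \left( -\frac{1}{r^2} \right) \right) = \frac{1}{r^2} \frac{\partial}{\partial r} (-1) = 0 \qquad (r \neq 0). \]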
In general curvilinear coordinates (ξ¹, ξ², ξ³):

\[ \Delta = g^{\mu\nu} \left( \frac{\partial^2}{\partial \xi^\mu \, \partial \xi^\nu} - \Gamma^{\lambda}_{\mu\nu} \frac{\partial}{\partial \xi^\lambda} \right), \]

where summation over the repeated indices is implied, g^{μν} is the inverse metric tensor and Γ^λ_{μν} are the Christoffel symbols for the selected coordinates.
dimensions
In arbitrary curvilinear coordinates in N dimensions (ξ¹, …, ξ^N), we can write the Laplacian in terms of the inverse metric tensor, g^{ij}:

\[ \Delta f = \frac{1}{\sqrt{|\det g|}} \frac{\partial}{\partial \xi^i} \left( \sqrt{|\det g|} \, g^{ij} \frac{\partial f}{\partial \xi^j} \right), \]

from the Voss–Weyl formula for the divergence.
In spherical coordinates in N dimensions, with the parametrization x = rθ ∈ R^N with r representing a positive real radius and θ an element of the unit sphere S^{N−1},

\[ \Delta f = \frac{\partial^2 f}{\partial r^2} + \frac{N-1}{r} \frac{\partial f}{\partial r} + \frac{1}{r^2} \Delta_{S^{N-1}} f, \]

where Δ_{S^{N−1}} is the Laplace–Beltrami operator on the (N−1)-sphere, known as the spherical Laplacian. The two radial derivative terms can be equivalently rewritten as:

\[ \frac{1}{r^{N-1}} \frac{\partial}{\partial r} \left( r^{N-1} \frac{\partial f}{\partial r} \right). \]
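The equivalence of the two radial forms is a one-line product-rule computation, spelled out here for convenience:

\[ \frac{1}{r^{N-1}} \frac{\partial}{\partial r} \left( r^{N-1} \frac{\partial f}{\partial r} \right) = \frac{1}{r^{N-1}} \left( (N-1) r^{N-2} \frac{\partial f}{\partial r} + r^{N-1} \frac{\partial^2 f}{\partial r^2} \right) = \frac{\partial^2 f}{\partial r^2} + \frac{N-1}{r} \frac{\partial f}{\partial r}. \]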
As a consequence, the spherical Laplacian of a function defined on can be computed as the ordinary Laplacian of the function extended to so that it is constant along rays, i.e., homogeneous of degree zero.
Euclidean invariance
The Laplacian is invariant under all Euclidean transformations: rotations and translations. In two dimensions, for example, this means that:

\[ \Delta \big( f(x \cos\theta - y \sin\theta + a,\; x \sin\theta + y \cos\theta + b) \big) = (\Delta f)(x \cos\theta - y \sin\theta + a,\; x \sin\theta + y \cos\theta + b) \]

for all θ, a, and b. In arbitrary dimensions,

\[ \Delta (f \circ \rho) = (\Delta f) \circ \rho \]

whenever ρ is a rotation, and likewise:

\[ \Delta (f \circ \tau) = (\Delta f) \circ \tau \]

whenever τ is a translation. (More generally, this remains true when ρ is an orthogonal transformation such as a reflection.)
In fact, the algebra of all scalar linear differential operators, with constant coefficients, that commute with all Euclidean transformations, is the polynomial algebra generated by the Laplace operator.
Spectral theory
The spectrum of the Laplace operator consists of all eigenvalues λ for which there is a corresponding eigenfunction f with:

\[ -\Delta f = \lambda f. \]

This is known as the Helmholtz equation.
If Ω is a bounded domain in R^n, then the eigenfunctions of the Laplacian are an orthonormal basis for the Hilbert space L²(Ω). This result essentially follows from the spectral theorem on compact self-adjoint operators, applied to the inverse of the Laplacian (which is compact, by the Poincaré inequality and the Rellich–Kondrachov theorem). It can also be shown that the eigenfunctions are infinitely differentiable functions. More generally, these results hold for the Laplace–Beltrami operator on any compact Riemannian manifold with boundary, or indeed for the Dirichlet eigenvalue problem of any elliptic operator with smooth coefficients on a bounded domain. When Ω is the n-sphere, the eigenfunctions of the Laplacian are the spherical harmonics.
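A minimal example (added): on the interval (0, L) with Dirichlet boundary conditions, the eigenfunctions and eigenvalues of −d²/dx² are

\[ f_n(x) = \sin\!\left( \frac{n \pi x}{L} \right), \qquad \lambda_n = \left( \frac{n \pi}{L} \right)^2, \qquad n = 1, 2, 3, \ldots, \]

and after normalization these form an orthonormal basis of L²(0, L), in line with the general statement above.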
Vector Laplacian
The vector Laplace operator, also denoted by , is a differential operator defined over a vector field. The vector Laplacian is similar to the scalar Laplacian; whereas the scalar Laplacian applies to a scalar field and returns a scalar quantity, the vector Laplacian applies to a vector field, returning a vector quantity. When computed in orthonormal Cartesian coordinates, the returned vector field is equal to the vector field of the scalar Laplacian applied to each vector component.
The vector Laplacian of a vector field A is defined as

\[ \nabla^2 \mathbf{A} = \nabla (\nabla \cdot \mathbf{A}) - \nabla \times (\nabla \times \mathbf{A}). \]
This definition can be seen as the Helmholtz decomposition of the vector Laplacian.
In Cartesian coordinates, this reduces to the much simpler form

\[ \nabla^2 \mathbf{A} = (\nabla^2 A_x, \nabla^2 A_y, \nabla^2 A_z), \]

where A_x, A_y, and A_z are the components of the vector field A, and ∇² just on the left of each vector field component is the (scalar) Laplace operator. This can be seen to be a special case of Lagrange's formula; see Vector triple product.
For expressions of the vector Laplacian in other coordinate systems see Del in cylindrical and spherical coordinates.
Generalization
The Laplacian of any tensor field T ("tensor" includes scalar and vector) is defined as the divergence of the gradient of the tensor:

\[ \nabla^2 T = (\nabla \cdot \nabla) T. \]

For the special case where T is a scalar (a tensor of degree zero), the Laplacian takes on the familiar form.
If T is a vector (a tensor of first degree), the gradient is a covariant derivative which results in a tensor of second degree, and the divergence of this is again a vector. The formula for the vector Laplacian above may be used to avoid tensor math and may be shown to be equivalent to the divergence of the Jacobian matrix shown below for the gradient of a vector:

\[ \nabla \mathbf{T} = (\nabla T_x, \nabla T_y, \nabla T_z) = \begin{bmatrix} \partial T_x / \partial x & \partial T_y / \partial x & \partial T_z / \partial x \\ \partial T_x / \partial y & \partial T_y / \partial y & \partial T_z / \partial y \\ \partial T_x / \partial z & \partial T_y / \partial z & \partial T_z / \partial z \end{bmatrix}. \]
And, in the same manner, a dot product, which evaluates to a vector, of a vector by the gradient of another vector (a tensor of 2nd degree) can be seen as a product of matrices:
This identity is a coordinate dependent result, and is not general.
Use in physics
An example of the usage of the vector Laplacian is the Navier–Stokes equations for a Newtonian incompressible flow:

\[ \rho \left( \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla) \mathbf{v} \right) = \rho \mathbf{f} - \nabla p + \mu \nabla^2 \mathbf{v}, \]

where the term with the vector Laplacian of the velocity field, μ∇²v, represents the viscous stresses in the fluid.
Another example is the wave equation for the electric field that can be derived from Maxwell's equations in the absence of charges and currents:

\[ \nabla^2 \mathbf{E} - \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2} = 0. \]

This equation can also be written as:

\[ \Box \mathbf{E} = 0, \]

where

\[ \Box \equiv \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2 \]

is the D'Alembertian, used in the Klein–Gordon equation.
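As an added sanity check, a plane wave E = E₀ cos(k · x − ωt) solves this equation precisely when ω = c|k|:

\[ \nabla^2 \mathbf{E} = -|\mathbf{k}|^2 \mathbf{E}, \qquad \frac{1}{c^2} \frac{\partial^2 \mathbf{E}}{\partial t^2} = -\frac{\omega^2}{c^2} \mathbf{E}, \]

so □E = 0 reduces to the dispersion relation ω² = c²|k|².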
Some properties
First of all, we say that a smooth function u is superharmonic whenever Δu ≤ 0.
Let u be a smooth function, and let K be a connected compact set. If u is superharmonic, then, for every x ∈ K, we have

for some constant c > 0 depending on K and u.
Generalizations
A version of the Laplacian can be defined wherever the Dirichlet energy functional makes sense, which is the theory of Dirichlet forms. For spaces with additional structure, one can give more explicit descriptions of the Laplacian, as follows.
Laplace–Beltrami operator
The Laplacian also can be generalized to an elliptic operator called the Laplace–Beltrami operator defined on a Riemannian manifold. The Laplace–Beltrami operator, when applied to a function, is the trace (tr) of the function's Hessian:

\[ \Delta f = \operatorname{tr}\big( H(f) \big), \]

where the trace is taken with respect to the inverse of the metric tensor.
where the trace is taken with respect to the inverse of the metric tensor. The Laplace–Beltrami operator also can be generalized to an operator (also called the Laplace–Beltrami operator) which operates on tensor fields, by a similar formula.
Another generalization of the Laplace operator that is available on pseudo-Riemannian manifolds uses the exterior derivative, in terms of which the "geometer's Laplacian" is expressed as

\[ \Delta f = \delta d f. \]

Here δ is the codifferential, which can also be expressed in terms of the Hodge star and the exterior derivative. This operator differs in sign from the "analyst's Laplacian" defined above. More generally, the "Hodge" Laplacian is defined on differential forms α by

\[ \Delta \alpha = \delta d \alpha + d \delta \alpha. \]
This is known as the Laplace–de Rham operator, which is related to the Laplace–Beltrami operator by the Weitzenböck identity.
D'Alembertian
The Laplacian can be generalized in certain ways to non-Euclidean spaces, where it may be elliptic, hyperbolic, or ultrahyperbolic.
In Minkowski space the Laplace–Beltrami operator becomes the D'Alembert operator or D'Alembertian:

\[ \Box = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial y^2} - \frac{\partial^2}{\partial z^2}. \]
It is the generalization of the Laplace operator in the sense that it is the differential operator which is invariant under the isometry group of the underlying space and it reduces to the Laplace operator if restricted to time-independent functions. The overall sign of the metric here is chosen such that the spatial parts of the operator admit a negative sign, which is the usual convention in high-energy particle physics. The D'Alembert operator is also known as the wave operator because it is the differential operator appearing in the wave equations, and it is also part of the Klein–Gordon equation, which reduces to the wave equation in the massless case.
The additional factor of c in the metric is needed in physics if space and time are measured in different units; a similar factor would be required if, for example, the x direction were measured in meters while the y direction were measured in centimeters. Indeed, theoretical physicists usually work in units such that c = 1 in order to simplify the equation.
The d'Alembert operator generalizes to a hyperbolic operator on pseudo-Riemannian manifolds.
See also
Laplace–Beltrami operator, generalization to submanifolds in Euclidean space and Riemannian and pseudo-Riemannian manifold.
The Laplacian in differential geometry.
The discrete Laplace operator is a finite-difference analog of the continuous Laplacian, defined on graphs and grids.
The Laplacian is a common operator in image processing and computer vision (see the Laplacian of Gaussian, blob detector, and scale space).
The list of formulas in Riemannian geometry contains expressions for the Laplacian in terms of Christoffel symbols.
Weyl's lemma (Laplace equation).
Earnshaw's theorem which shows that stable static gravitational, electrostatic or magnetic suspension is impossible.
Del in cylindrical and spherical coordinates.
Other situations in which a Laplacian is defined are: analysis on fractals, time scale calculus and discrete exterior calculus.
Notes
References
The Feynman Lectures on Physics Vol. II Ch. 12: Electrostatic Analogs
.
.
Further reading
The Laplacian - Richard Fitzpatrick 2006
External links
Laplacian in polar coordinates derivation
Laplace equations on the fractal cubes and Casimir effect
Differential operators
Elliptic partial differential equations
Fourier analysis
Operator
Harmonic functions
Linear operators in calculus
Multivariable calculus | Laplace operator | [
"Mathematics"
] | 3,160 | [
"Multivariable calculus",
"Mathematical analysis",
"Differential operators",
"Calculus"
] |
174,721 | https://en.wikipedia.org/wiki/Village%20green | A village green is a common open area within a village or other settlement. Historically, a village green was common grassland with a pond for watering cattle and other stock, often at the edge of a rural settlement, used for gathering cattle to bring them later on to a common land for grazing. Later, planned greens were built into the centres of villages.
The village green also provided, and may still provide, an open-air meeting place for the local people, which may be used for public celebrations such as May Day festivities. The term is used more broadly to encompass woodland, moorland, sports grounds, buildings, roads and urban parks.
History
Most village greens in England originated in the Middle Ages. Individual greens may have been created for various reasons, including protecting livestock from wild animals or human raiders during the night, or providing a space for market trading.
In most cases where a village green is planned, it is placed in the centre of a settlement. Village greens can also be formed when a settlement expands to the edge of an existing area of common land, or when an area of waste land between two settlements becomes developed.
Some historical village greens have been lost as a result of the agricultural revolution and urban development. Greens are now most likely to be found in the older villages of mainland Europe, the United Kingdom, and older areas of the United States.
Some greens that used to be commons, or otherwise at the centres of villages, have been swallowed up by a city growing around them. Sometimes they become a city park or a square and manage to maintain a sense of place. London has several of these; one example is Newington Green, with Newington Green Unitarian Church anchoring the northern end.
In mid-20th-century England, town expansion led to the formation of local conservation societies, often centring on village green preservation, as celebrated and parodied in The Kinks' album The Kinks Are the Village Green Preservation Society. The Open Spaces Society is a present-day UK national campaigning body that continues this movement.
Examples
United States
In the United States, the most famous example of a town green is probably the New Haven Green in New Haven, Connecticut. New Haven was founded by settlers from England and was the first planned city in the United States. Originally used for grazing livestock, the Green dates from the 1630s and is well preserved today despite lying at the heart of the city centre.
The largest green in the U.S. is a mile in length and can be found in Lebanon, Connecticut. This is the only village green in the United States still used for agriculture. One of the most unusual examples is the Dartmouth College Green in Hanover, New Hampshire, which was owned and cleared by the college in 1770. The college, not the town, still owns it and surrounded it with buildings as a sort of collegiate quadrangle in the 1930s, although its origin as a town green remains apparent.
An example of a traditional American town green exists in downtown Morristown, NJ. The Morristown Green dates from 1715 and has hosted events ranging from executions to clothing drives.
There are two places in the United States called Village Green: Village Green-Green Ridge, Pennsylvania, and Village Green, New York. Some New England towns, along with some areas settled by New Englanders such as the townships in the Connecticut Western Reserve, refer to their town square as a village green. The village green of Bedford, New York, is preserved as part of Bedford Village Historic District.
Europe
A notable example of a village green is that in the village of Finchingfield in Essex, England, which is said to be "the most photographed village in England". The green dominates the village, and slopes down to a duck pond, and is occasionally flooded after heavy rain. The small village of Car Colston in Nottinghamshire, England, has two village greens, totaling 29 acres (12 ha), and the village of Burton Leonard in North Yorkshire has three.
The Open Spaces Society states that in 2005 there were about 3,650 registered greens in England and about 220 in Wales.
The northern part of the province of Drenthe in the Netherlands is also known for its village greens. Zuidlaren is the village with the largest number of village greens in the Netherlands.
The Błonia Park, originally established in the Middle Ages, is an example of a large village green in Kraków, Poland.
Indonesia
In Indonesia, especially in Java, a similar place is called Alun-Alun. It is a central part of Javanese village architecture and culture.
Legal definitions
England and Wales
Apart from the general use of the term, village green has a specific legal meaning in England and Wales, and also includes the less common term town green. Town and village greens were defined in the Commons Registration Act 1965, as amended by the Countryside and Rights of Way Act 2000, as land:
which has been allotted by or under any act for the exercise or recreation of the inhabitants of any locality
or on which the inhabitants of any locality have a customary right to indulge in lawful sports and pastimes
or if it is land on which for not fewer than twenty years a significant number of the inhabitants of any locality, or of any neighbourhood within a locality, have indulged in lawful sports and pastimes as of right.
Registered greens in England and Wales are now governed by the Commons Act 2006, but the fundamental test of whether land is a town and village green remains the same. Thus land can become a village green if it has been used for twenty years without force, secrecy or request (nec vi, nec clam, nec precario). Village green legislation is often used to try to frustrate development. Recent case law (Oxfordshire County Council vs Oxford City Council and Robinson) makes it clear that registration as a green would render any development which prevented continuing use of the green as criminal activity under the Inclosure Act 1857 and the Commons Act 1876 (39 & 40 Vict. c. 56). This leads to some most curious areas being claimed as village greens, sometimes with success. Recent examples include a bandstand, two lakes and a beach.
On 11 December 2019, a Supreme Court decision put the future of some village greens at risk in England and Wales. The case involved five fields (13 hectares) in south Lancaster, the Moorside Fields, owned by Lancashire County Council. The lands had been available for public use for over 50 years. According to the Commons Act 2006, land used for informal recreation for at least 20 years can be registered as green and is then protected from development. (Granted, the Growth and Infrastructure Act 2013 specified that land designated for planning applications could not be registered as a village green, but that did not apply in the Moorside Fields case.)
The Moorside Fields Community Group attempted to register the lands in 2016 under the Commons Act 2006. The local authority challenged the registration, wanting to retain control of the lands for future expansion of the nearby Moorside Primary School's playing fields. The council's challenge failed in the High Court and then in the Court of Appeal; the registration of the land as a village green could proceed. Lancashire County Council subsequently appealed to the UK Supreme Court.
In the appeal decision, cited as R (on the application of Lancashire County Council) (Appellant) v Secretary of State for the Environment, Food and Rural Affairs (Respondent) the court overturned the previous judgments. At the same time, the Supreme Court also ruled against the registration of lands in a separate case in Surrey involving the 2.9 hectare Leach Grove Wood at Leatherhead, owned by the National Health Service. After publication of the decision in the Moorside Fields case, Lancashire County Council told the news media that the court had "protect[ed] this land for future generations".
In effect, the Supreme Court decision left lands owned by public authorities by their statutory powers open to development for any purpose that they deem to be appropriate. This could have far-reaching ramifications in England and Wales, according to the Open Spaces Society, a national conservation group that was founded in 1865. A representative made this comment to The Guardian: "This is a deeply worrying decision as it puts at risk countless publicly owned green spaces which local people have long enjoyed, but which, unknown to them, are held for purposes which are incompatible with recreational use".
Gallery
See also
Common land
Park
Town square
Urban green space
References
External links
The Open Spaces Society—gives UK information on how to claim a village green.
Town Greens of Connecticut—historical information on the town greens that are found in almost every Connecticut town
Green
Urban studies and planning terminology
Landscape history
Landscape
Parks
Common land
Grasslands | Village green | [
"Biology"
] | 1,756 | [
"Grasslands",
"Ecosystems"
] |
174,724 | https://en.wikipedia.org/wiki/Goro%20Shimura | Gorō Shimura (23 February 1930 – 3 May 2019) was a Japanese mathematician and Michael Henry Strater Professor Emeritus of Mathematics at Princeton University who worked in number theory, automorphic forms, and arithmetic geometry. He was known for developing the theory of complex multiplication of abelian varieties and Shimura varieties, as well as posing the Taniyama–Shimura conjecture which ultimately led to the proof of Fermat's Last Theorem.
Biography
Gorō Shimura was born in Hamamatsu, Japan, on 23 February 1930. Shimura graduated with a B.A. in mathematics and a D.Sc. in mathematics from the University of Tokyo in 1952 and 1958, respectively.
After graduating, Shimura became a lecturer at the University of Tokyo, then worked abroad — including ten months in Paris and a seven-month stint at Princeton's Institute for Advanced Study — before returning to Tokyo, where he married Chikako Ishiguro. He then moved from Tokyo to join the faculty of Osaka University, but growing unhappy with his funding situation, he decided to seek employment in the United States. Through André Weil he obtained a position at Princeton University. Shimura joined the Princeton faculty in 1964 and retired in 1999, during which time he advised over 28 doctoral students and received the Guggenheim Fellowship in 1970, the Cole Prize for number theory in 1977, the Asahi Prize in 1991, and the Steele Prize for lifetime achievement in 1996.
Shimura described his approach to mathematics as "phenomenological": his interest was in finding new types of interesting behavior in the theory of automorphic forms. He also argued for a "romantic" approach, something he found lacking in the younger generation of mathematicians. Shimura used a two-part process for research, using one desk in his home dedicated to working on new research in the mornings and a second desk for perfecting papers in the afternoon.
Shimura had two children, Tomoko and Haru, with his wife Chikako. Shimura died on 3 May 2019 in Princeton, New Jersey at the age of 89.
Research
Shimura was a colleague and a friend of Yutaka Taniyama, with whom he wrote the first book on the complex multiplication of abelian varieties and formulated the Taniyama–Shimura conjecture. Shimura then wrote a long series of major papers, extending the phenomena found in the theory of complex multiplication of elliptic curves and the theory of modular forms to higher dimensions (e.g. Shimura varieties). This work provided examples for which the equivalence between motivic and automorphic L-functions postulated in the Langlands program could be tested: automorphic forms realized in the cohomology of a Shimura variety have a construction that attaches Galois representations to them.
In 1958, Shimura generalized the initial work of Martin Eichler on the Eichler–Shimura congruence relation between the local L-function of a modular curve and the eigenvalues of Hecke operators. In 1959, Shimura extended the work of Eichler on the Eichler–Shimura isomorphism between Eichler cohomology groups and spaces of cusp forms which would be used in Pierre Deligne's proof of the Weil conjectures.
In 1971, Shimura's work on explicit class field theory in the spirit of Kronecker's Jugendtraum resulted in his proof of Shimura's reciprocity law. In 1973, Shimura established the Shimura correspondence between modular forms of half integral weight k+1/2, and modular forms of even weight 2k.
Shimura's formulation of the Taniyama–Shimura conjecture (later known as the modularity theorem) in the 1950s played a key role in the proof of Fermat's Last Theorem by Andrew Wiles in 1995. In 1990, Kenneth Ribet proved Ribet's theorem which demonstrated that Fermat's Last Theorem followed from the semistable case of this conjecture. Shimura dryly commented that his first reaction on hearing of Andrew Wiles's proof of the semistable case was 'I told you so'.
Other interests
His hobbies were shogi problems of extreme length and collecting Imari porcelain. The Story of Imari: The Symbols and Mysteries of Antique Japanese Porcelain is a non-fiction work about the Imari porcelain that he collected over 30 years that was published by Ten Speed Press in 2008.
Works
Mathematical books
Later expanded and published as
- It is published from Iwanami Shoten in Japan.
An expanded version of .
Non-fiction
Collected papers
References
External links
Goro Shimura, a ‘giant’ of number theory, dies at 89 / Princeton University
The New York Times, Goro Shimura, 89, Mathematician with Broad Impact, Is Dead Princeton University, Professor Emeritus Goro Shimura 1930–2019
1930 births
2019 deaths
People from Hamamatsu
University of Tokyo alumni
20th-century Japanese mathematicians
21st-century Japanese mathematicians
20th-century American mathematicians
21st-century American mathematicians
Number theorists
Academic staff of Osaka University
Princeton University faculty
Institute for Advanced Study visiting scholars
Japanese emigrants to the United States
Academic staff of the University of Tokyo | Goro Shimura | [
"Mathematics"
] | 1,042 | [
"Number theorists",
"Number theory"
] |
174,754 | https://en.wikipedia.org/wiki/Teiji%20Takagi | Teiji Takagi (高木 貞治 Takagi Teiji, April 21, 1875 – February 28, 1960) was a Japanese mathematician, best known for proving the Takagi existence theorem in class field theory. The Blancmange curve, the graph of a nowhere-differentiable but uniformly continuous function, is also called the Takagi curve after his work on it.
Biography
He was born in the rural area of the Gifu Prefecture, Japan. He began learning mathematics in middle school, reading texts in English since none were available in Japanese. After attending a high school for gifted students, he went on to the Imperial University (later Tokyo Imperial University), at that time the only university in Japan before the Imperial University System was established on June 18, 1897. There he learned mathematics from such European classic texts as Salmon's Algebra and Weber's Lehrbuch der Algebra. Aided by Hilbert, he then studied at Göttingen. Aside from his work in algebraic number theory he wrote a great number of Japanese textbooks on mathematics and geometry.
During World War I, he was isolated from European mathematicians and developed his existence theorem in class field theory, building on the work of Heinrich Weber. As an Invited Speaker, he presented a synopsis of this research in a talk Sur quelques théoremes généraux de la théorie des nombres algébriques at the International Congress of Mathematicians in Strasbourg in 1920. There he found little recognition of the value of his research, since algebraic number theory was then studied mainly in Germany and German mathematicians were excluded from the Congress. Takagi published his theory in the same year in the journal of the University of Tokyo. However, the significance of Takagi's work was first recognized by Emil Artin in 1922, and was again pointed out by Carl Ludwig Siegel, and at the same time by Helmut Hasse, who lectured in Kiel in 1923 on class field theory and presented Takagi's work in a lecture at the meeting of the DMV in 1925 in Danzig and in his Klassenkörperbericht (class field report) in the 1926 annual report of the DMV. Takagi was then internationally recognized as one of the world's leading number theorists. In 1932 he was vice-president of the International Congress of Mathematicians in Zürich and in 1936 was a member of the selection committee for the first Fields Medal.
He was also instrumental during World War II in the development of Japanese encryption systems; see Purple.
The Autonne-Takagi factorization of complex symmetric matrices is named in his honour.
Family
Sigekatu Kuroda, son-in-law, a mathematician.
S.-Y. Kuroda, grandson (son of Sigekatu Kuroda), mathematician and Chomskyan linguist.
Bibliography
References
External links
Takagi Lectures by the Mathematical Society of Japan
Teiji Takagi: collected papers (2nd edition), edited by S. Iyanaga, K. Iwasawa, K. Kodaita and K. Yosida. Pp 376. DM188. 1990. (Springer) / CAMBRIDGE UNIVERSITY PRESS / The Mathematical Association 1991
People from Gifu Prefecture
People from the Empire of Japan
1875 births
1960 deaths
19th-century Japanese mathematicians
20th-century Japanese mathematicians
Number theorists
Academic staff of the University of Tokyo
University of Göttingen alumni
University of Tokyo alumni
Recipients of the Order of Culture | Teiji Takagi | [
"Mathematics"
] | 690 | [
"Number theorists",
"Number theory"
] |
174,761 | https://en.wikipedia.org/wiki/Timeline%20of%20computer%20viruses%20and%20worms | This timeline of computer viruses and worms presents a chronological timeline of noteworthy computer viruses, computer worms, Trojan horses, similar malware, related research and events.
1960s
John von Neumann's article on the "Theory of self-reproducing automata" is published in 1966. The article is based on lectures given by von Neumann at the University of Illinois about the "Theory and Organization of Complicated Automata" in 1949.
1970s
1970
The first story written about a computer virus, The Scarred Man by Gregory Benford, was published in the May 1970 issue of Venture Science Fiction.
1971
The Creeper system, an experimental self-replicating program, is written by Bob Thomas at BBN Technologies to test John von Neumann's theory. Creeper infected DEC PDP-10 computers running the TENEX operating system. Creeper gained access via the ARPANET and copied itself to the remote system where the message "I'm the creeper, catch me if you can!" was displayed. The Reaper program was later created to delete Creeper.
At the University of Illinois at Urbana-Champaign, a graduate student named Alan Davis (working for Prof. Donald Gillies) created a process on a PDP-11 that (a) checked to see if an identical copy of itself was currently running as an active process, and if not, created a copy of itself and started it running; (b) checked to see if any disk space (which all users shared) was available, and if so, created a file the size of that space; and (c) looped back to step (a). As a result, the process stole all available disk space. When users tried to save files, the operating system advised them that the disk was full and that they needed to delete some existing files. Of course, if they did delete a file, this process would immediately snatch up the available space. When users called in a system administrator (A. Ian Stocks) to fix the problem, he examined the active processes, discovered the offending process, and deleted it. Of course, before he left the room, the still existing process would create another copy of itself, and the problem would not go away. The only way to make the computer work again was to reboot.
1972
The science fiction novel, When HARLIE Was One, by David Gerrold, contains one of the first fictional representations of a computer virus, as well as one of the first uses of the word "virus" to denote a program that infects a computer.
1973
In fiction, the 1973 Michael Crichton movie Westworld made an early mention of the concept of a computer virus, being a central plot theme that causes androids to run amok. Alan Oppenheimer's character summarizes the problem by stating that "...there's a clear pattern here which suggests an analogy to an infectious disease process, spreading from one...area to the next." To which the replies are stated: "Perhaps there are superficial similarities to disease" and, "I must confess I find it difficult to believe in a disease of machinery." (Crichton's earlier work, the 1969 novel The Andromeda Strain and 1971 film were about an extraterrestrial biological virus-like disease that threatened the human race.)
1974
The Rabbit (or Wabbit) virus, more a fork bomb than a virus, is written. The Rabbit virus makes multiple copies of itself on a single computer (and was named "rabbit" for the speed at which it did so) until it clogs the system, reducing system performance, before finally reaching a threshold and crashing the computer.
1975
April: ANIMAL is written by John Walker for the UNIVAC 1108. ANIMAL asked several questions of the user in an attempt to guess the type of animal the user was thinking of, while the related program PERVADE would create a copy of itself and ANIMAL in every directory to which the current user had access. It spread across the multi-user UNIVACs when users with overlapping permissions discovered the game, and to other computers when tapes were shared. The program was carefully written to avoid damaging existing file or directory structures, and to avoid copying itself if permissions did not exist or if harm would result. Its spread was halted by an OS upgrade that changed the format of the file status tables PERVADE used. Though non-malicious, "Pervading Animal" represents the first Trojan "in the wild".
The novel The Shockwave Rider by John Brunner is published, coining the word "worm" to describe a program that propagates itself through a computer network.
1977
The Adolescence of P-1 novel, describes a worm program that propagates through modem-based networks, eventually developing its own strategy-developing AI, which deals with cross-hardware and cross-os issues, eventually infecting hardware manufactures and defense organizations.
1980s
1982
A program called Elk Cloner, written for Apple II systems, was created by high school student Richard Skrenta, originally as a prank. The Apple II was particularly vulnerable due to the storage of its operating system on a floppy disk. Elk Cloner's design combined with public ignorance about what malware was and how to protect against it led to Elk Cloner being responsible for the first large-scale computer virus outbreak in history.
1983
November: The term "virus" is re-coined by Frederick B. Cohen in describing self-replicating computer programs. In 1984 Cohen uses the phrase "computer virus" (suggested by his teacher Leonard Adleman) to describe the operation of such programs in terms of "infection". He defines a "virus" as "a program that can 'infect' other programs by modifying them to include a possibly evolved copy of itself." Cohen demonstrates a virus-like program on a VAX11/750 system at Lehigh University. The program could install itself in, or infect, other system objects.
1984
August: Ken Thompson publishes his seminal paper, "Reflections on Trusting Trust", in which he describes how he modified a C compiler so that when used to compile a specific version of the Unix operating system, it inserts a backdoor into the login command, and when used to compile a new copy of itself, it inserts the backdoor insertion code, even if neither the backdoor nor the backdoor insertion code is present in the source code of this new copy.
1986
January: The Brain boot sector virus is released. Brain is considered the first IBM PC compatible virus, and the program responsible for the first IBM PC compatible virus epidemic. The virus is also known as Lahore, Pakistani, Pakistani Brain, and Pakistani flu as it was created in Lahore, Pakistan, by 19-year-old Pakistani programmer Basit Farooq Alvi and his brother, Amjad Farooq Alvi.
December: Ralf Burger presented the Virdem model of programs at a meeting of the underground Chaos Computer Club in Germany. The Virdem model represented the first programs that could replicate themselves via addition of their code to executable DOS files in COM format.
1987
Appearance of the Vienna virus, which was subsequently neutralized – the first time this had happened on the IBM platform.
Appearance of Lehigh virus (discovered at its namesake university), boot sector viruses such as Yale from the US, Stoned from New Zealand, Ping Pong from Italy, and appearance of the first self-encrypting file virus, Cascade. Lehigh was stopped on campus before it spread to the "wild" (to computers beyond the university), and as a result, has never been found elsewhere. A subsequent infection of Cascade in the offices of IBM Belgium led to IBM responding with its own antivirus product development. Prior to this, antivirus solutions developed at IBM were intended for staff use only.
October: The Jerusalem virus, part of the (at that time unknown) Suriv family, is detected in the city of Jerusalem. The virus destroys all executable files on infected machines upon every occurrence of Friday the 13th (except Friday 13 November 1987, making its first trigger date May 13, 1988). Jerusalem caused a worldwide epidemic in 1988.
November: The SCA virus, a boot sector virus for Amiga computers, appears. It immediately creates a pandemic virus-writer storm. A short time later, SCA releases another, considerably more destructive virus, the Byte Bandit.
December: Christmas Tree EXEC was the first widely disruptive replicating network program, which paralyzed several international computer networks in December 1987. It was written in Rexx on the VM/CMS operating system and originated in West Germany. It re-emerged in 1990.
1988
March 1: The Ping-Pong virus (also called Boot, Bouncing Ball, Bouncing Dot, Italian, Italian-A or VeraCruz), an MS-DOS boot sector virus, is discovered at the University of Turin in Italy.
June: The CyberAIDS and Festering Hate Apple ProDOS viruses spreads from underground pirate BBS systems and starts infecting mainstream networks. Festering Hate was the last iteration of the CyberAIDS series extending back to 1985 and 1986. Unlike the few Apple viruses that had come before which were essentially annoying, but did no damage, the Festering Hate series of viruses was extremely destructive, spreading to all system files it could find on the host computer (hard drive, floppy, and system memory) and then destroying everything when it could no longer find any uninfected files.
November 2: The Morris worm, created by Robert Tappan Morris, infects DEC VAX and Sun machines running BSD UNIX that are connected to the Internet, and becomes the first worm to spread extensively "in the wild", and one of the first well-known programs exploiting buffer overrun vulnerabilities.
December: The Father Christmas worm attacks DEC VAX machines running VMS that are connected to the DECnet Internet (an international scientific research network using DECnet protocols), affecting NASA and other research centers. Its purpose was to deliver a Christmas greeting to all affected users.
1989
October: Ghostball, the first multipartite virus, is discovered by Friðrik Skúlason. It infects both executable .COM files and boot sectors on MS-DOS systems.
December: Several thousand floppy disks containing the AIDS Trojan, the first known ransomware, are mailed to subscribers of PC Business World magazine and a WHO AIDS conference mailing list. This DOS Trojan lies dormant for 90 boot cycles, then encrypts all filenames on the system, displaying a notice asking for $189 to be sent to a post office box in Panama in order to receive a decryption program.
1990s
1990
Mark Washburn, working on an analysis of the Vienna and Cascade viruses with Ralf Burger, develops the first family of polymorphic viruses, the Chameleon family. Chameleon series debuted with the release of 1260.
June: The Form computer virus is isolated in Switzerland. It would remain in the wild for almost 20 years and reappear afterward; during the 1990s it tended to be the most common virus in the wild with 20 to more than 50 percent of reported infections.
1991
Mattel releases a toyline called "Computer Warriors," bringing computer viruses into mainstream media. The villain, Megahert, is a sentient computer virus.
1992
March: The Michelangelo virus was expected to create a digital apocalypse on March 6, with millions of computers having their information wiped, according to mass media hysteria surrounding the virus. Later assessments of the damage showed the aftermath to be minimal. John McAfee had been quoted by the media as saying that five million computers would be affected. He later said that pressed by the interviewer to come up with a number, he had estimated a range from five thousand to five million, but the media naturally went with just the higher number.
October: Milton-Bradley releases Omega Virus, a board game containing one of the first examples of a sentient computer virus in mainstream media.
1993
"Leandro" or "Leandro & Kelly" and "Freddy Krueger" spread quickly due to popularity of BBS and shareware distribution.
1994
April: OneHalf is a DOS-based polymorphic computer virus.
September: ReBoot first airs, containing another memorable fictional, sentient computer virus, Megabyte.
1995
The first Macro virus, called "Concept", is created. It attacked Microsoft Word documents.
1996
"Ply" – DOS 16-bit based complicated polymorphic virus appeared with a built-in permutation engine.
Boza, the first virus designed specifically for Windows 95 files arrives.
Laroux, the first Excel macro virus appears.
Staog, the first Linux virus attacks Linux machines
1997
Esperanto, the first cross-platform virus, appears.
1998
June 2: The first version of the CIH virus appears. It is the first known virus able to erase flash ROM BIOS content.
1999
January 20: The Happy99 worm first appeared. It invisibly attaches itself to emails, displays fireworks to hide the changes being made, and wishes the user a happy New Year. It modifies system files related to Outlook Express and Internet Explorer (IE) on Windows 95 and Windows 98.
February: Sub7 is released, targeting Windows 9x and the Windows NT family of operating systems.
March 26: The Melissa worm was released, targeting Microsoft Word and Outlook-based systems, and creating considerable network traffic.
June 6: The ExploreZip worm, which destroys Microsoft Office documents, was first detected.
September: The CTX virus is isolated.
December 30: The Kak worm is a JavaScript computer worm that spread itself by exploiting a bug in Outlook Express.
2000s
2000
May 5: The ILOVEYOU worm (also known as the Love Letter, VBS, or Love Bug worm), a computer worm written in VBScript and using social engineering techniques, infected millions of Windows computers worldwide within a few hours of its release.
June 28: The Pikachu virus is believed to be the first computer virus geared at children. It contains the character "Pikachu" from the Pokémon series. The operating systems affected by this worm are Windows 95, Windows 98, and Windows ME.
2001
February 11: The Anna Kournikova virus hits e-mail servers hard by sending e-mail to contacts in the Microsoft Outlook addressbook. Its creator, Jan de Wit, was sentenced to 150 hours of community service.
March 13: Magistr, also called Disembowler, is discovered. It is a complex email worm for Windows systems with multiple payloads that trigger months apart from each other. It targets members of the legal profession by searching the files on a user's computer for various keywords relating to court proceedings, activating if such are found.
May 8: The Sadmind worm spreads by exploiting holes in both Sun Solaris and Microsoft IIS.
July: The Sircam worm is released, spreading through Microsoft systems via e-mail and unprotected network shares.
July 13: The Code Red worm attacking the Index Server ISAPI Extension in Microsoft Internet Information Services is released.
August 4: A complete re-write of the Code Red worm, Code Red II begins aggressively spreading onto Microsoft systems, primarily in China.
September 18: The Nimda worm is discovered and spreads through a variety of means including vulnerabilities in Microsoft Windows and backdoors left by Code Red II and Sadmind worm.
October 26: The Klez worm is first identified. It exploits a vulnerability in Microsoft Internet Explorer and Microsoft Outlook and Outlook Express.
2002
February 11: The Simile virus is a metamorphic computer virus written in assembly.
Beast is a Windows-based backdoor Trojan horse, more commonly known as a RAT (Remote Administration Tool). It is capable of infecting almost all versions of Windows. Written in Delphi and released first by its author Tataye in 2002, its most current version was released on October 3, 2004.
March 7: Mylife is a computer worm that spread itself by sending malicious emails to all the contacts in Microsoft Outlook.
2003
January 24: The SQL Slammer worm, aka Sapphire worm, Helkern and other names, attacks vulnerabilities in Microsoft SQL Server and MSDE becomes the fastest spreading worm of all time (measured by doubling time at the peak rate of growth), causing massive Internet access disruptions worldwide just fifteen minutes after infecting its first victim.
April 2: Graybird is a trojan horse also known as Backdoor.Graybird.
June 13: ProRat is a Turkish-made Microsoft Windows based backdoor trojan horse, more commonly known as a RAT (Remote Administration Tool).
August 12: The Blaster worm, aka the Lovesan worm, rapidly spreads by exploiting a vulnerability in system services present on Windows computers.
August 18: The Welchia (Nachi) worm is discovered. The worm tries to remove the Blaster worm and patch Windows.
August 19: The Sobig worm (technically the Sobig.F worm) spreads rapidly through Microsoft systems via mail and network shares.
September 18: Swen is a computer worm written in C++.
October 24: The Sober worm is first seen on Microsoft systems and maintains its presence until 2005 with many new variants. The simultaneous attacks on network weak points by the Blaster and Sobig worms cause massive damage.
November 10: Agobot is a computer worm that can spread itself by exploiting vulnerabilities on Microsoft Windows. Some of the vulnerabilities are MS03-026 and MS05-039.
November 20: Bolgimo is a computer worm that spread itself by exploiting a buffer overflow vulnerability at Microsoft Windows DCOM RPC Interface (CVE-2003-0352).
2004
January 18: Bagle is a mass-mailing worm affecting all versions of Microsoft Windows. There were two variants of the Bagle worm, Bagle.A and Bagle.B. Bagle.B was discovered on February 17, 2004.
January 26: The MyDoom worm emerges, and currently holds the record for the fastest-spreading mass mailer worm. The worm was most notable for performing a distributed denial-of-service (DDoS) attack on www.sco.com, which belonged to The SCO Group.
February 16: The Netsky worm is discovered. The worm spreads by email and by copying itself to folders on the local hard drive as well as on mapped network drives if available. Many variants of the Netsky worm appeared.
March 19: The Witty worm is a record-breaking worm in many regards. It exploited holes in several Internet Security Systems (ISS) products. It spread rapidly using a pre-populated list of ground-zero hosts.
May 1: The Sasser worm emerges by exploiting a vulnerability in the Microsoft Windows LSASS service and causes problems in networks, while removing MyDoom and Bagle variants, even interrupting business.
June 15: Caribe or Cabir is a computer worm that is designed to infect mobile phones that run Symbian OS. It is the first computer worm that can infect mobile phones. It spread itself through Bluetooth. More information can be found on F-Secure and Symantec.
August 16: Nuclear RAT (short for Nuclear Remote Administration Tool) is a backdoor trojan that infects Windows NT family systems (Windows 2000, Windows XP, Windows 2003).
August 20: Vundo, or the Vundo Trojan (also known as Virtumonde or Virtumondo and sometimes referred to as MS Juan) is a trojan known to cause popups and advertising for rogue antispyware programs, and sporadically other misbehavior including performance degradation and denial of service with some websites including Google and Facebook.
October 12: Bifrost, also known as Bifrose, is a backdoor trojan which can infect Windows 95 through Vista. Bifrost uses the typical server, server builder, and client backdoor program configuration to allow a remote attack.
December: Santy, the first known "webworm", is launched. It exploited a vulnerability in phpBB and used Google to find new targets. It infected around 40,000 sites before Google filtered the search query used by the worm, preventing it from spreading.
2005
August 2005: Zotob is a computer worm which exploits security vulnerabilities in Microsoft operating systems like Windows 2000, including the MS05-039 plug-and-play vulnerability (CVE-2005-1983). This worm has been known to spread on Microsoft-ds or TCP port 445.
October 2005: The copy protection rootkit deliberately and surreptitiously included on music CDs sold by Sony BMG is exposed. The rootkit creates vulnerabilities on affected computers, making them susceptible to infection by worms and viruses.
Late 2005: The Zlob Trojan, is a Trojan horse program that masquerades as a required video codec in the form of the Microsoft Windows ActiveX component. It was first detected in late 2005.
2006
January 20: The Nyxem worm was discovered. It spread by mass-mailing. Its payload, which activates on the third of every month, starting on February 3, attempts to disable security-related and file-sharing software, and destroy files of certain types, such as Microsoft Office files.
February 16: Discovery of the first-ever malware for Mac OS X, a low-threat trojan-horse known as OSX/Leap-A or OSX/Oompa-A, is announced.
Late March: Brontok variant N is found. Brontok was a mass-email worm originating in Indonesia.
June: Starbucks is a virus that infects StarOffice and OpenOffice.
Late September: Stration or Warezov worm first discovered.
Development of Stuxnet is presumed to have been started between 2005 and 2006.
2007
January 17: Storm Worm identified as a fast-spreading email spamming threat to Microsoft systems. It begins gathering infected computers into the Storm botnet. By around June 30, it had infected 1.7 million computers, and it had compromised between 1 and 10 million computers by September. Thought to have originated in Russia, it disguises itself as a news email containing bogus news stories, asking the user to download an attachment which it claims is a film.
July: Zeus is a trojan that targets Microsoft Windows to steal banking information by keystroke logging.
2008
February 17: Mocmex is a trojan, which was found in a digital photo frame in February 2008. It was the first serious computer virus on a digital photo frame. The virus was traced back to a group in China.
March 3: Torpig, also known as Sinowal and Mebroot, is a Trojan horse that affects Windows, turning off anti-virus applications. It allows others to access the computer, modifies data, steals confidential information (such as user passwords and other sensitive data) and installs more malware on the victim's computer.
May 6: Rustock.C, a hitherto-rumored spambot-type malware with advanced rootkit capabilities, was announced to have been detected on Microsoft systems and analyzed, having been in the wild and undetected since October 2007 at the very least.
July 6: Bohmini.A is a configurable remote access tool or trojan that exploits security flaws in Adobe Flash 9.0.115 with Internet Explorer 7.0 and Firefox 2.0 under Windows XP SP2.
July 31: The Koobface computer worm targets users of Facebook and Myspace. New variants constantly appear.
November 21: Computer worm Conficker infected anywhere from 9 to 15 million Microsoft server systems running everything from Windows 2000 to the Windows 7 Beta. The French Navy, UK Ministry of Defence (including Royal Navy warships and submarines), Sheffield Hospital network, German Bundeswehr, and Norwegian Police were all affected among many others. Microsoft set a bounty of US$250,000 for information leading to the capture of the worm's author(s). Five main variants of the worm are known and have been dubbed Conficker A, B, C, D and E, increasingly adding self-defense mechanisms. They were discovered 21 November 2008, 29 December 2008, 20 February 2009, 4 March 2009, and 7 April 2009, respectively. On December 16, 2008, Microsoft releases KB958644 patching the server service vulnerability (CVE-2008-4250) responsible for the spread of Conficker.
2009
July 4: The July 2009 cyber attacks occur, with the emergence of W32.Dozer attacking the United States and South Korea.
July 15: Symantec discovered Daprosy Worm, a trojan worm intended to steal online-game passwords in internet cafes. It could intercept all keystrokes and send them to its author, making it a potentially very dangerous worm for infecting B2B (business-to-business) systems.
August 24: Source code for MegaPanzer is released by its author under GPLv3 and appears to have been detected in the wild.
November 27: Kenzero, a virus that spreads online through peer-to-peer (P2P) networks and takes the user's browsing history, is discovered.
2010s
2010
January: The Waledac botnet sent spam emails. In February 2010, an international group of security researchers and Microsoft took Waledac down.
January: The Psyb0t worm is discovered. It is thought to be unique in that it can infect routers and high-speed modems.
February 18: Microsoft announced that a BSoD problem on some Windows machines which was triggered by a batch of Patch Tuesday updates was caused by the Alureon Trojan.
June 17: Stuxnet, a Windows Trojan, was detected. It is the first worm to attack SCADA systems. There are suggestions that it was designed to target Iranian nuclear facilities. It uses a valid certificate from Realtek.
September 9: The virus, called "here you have" or "VBMania", is a simple Trojan horse that arrives in the inbox with the odd-but-suggestive subject line "here you have". The body reads "This is The Document I told you about, you can find it Here" or "This is The Free Download Sex Movies, you can find it Here".
2011
Merged SpyEye and Zeus code is seen. New variants attack mobile phone banking information.
Anti-Spyware 2011, a Trojan horse that attacks Windows 9x, 2000, XP, Vista, and Windows 7, posing as an anti-spyware program. It disables security-related processes of anti-virus programs, while also blocking access to the Internet, which prevents updates.
Summer 2011: The Morto worm attempts to propagate itself to additional computers via the Microsoft Windows Remote Desktop Protocol (RDP). Morto spreads by forcing infected systems to scan for Windows servers allowing RDP login. Once Morto finds an RDP-accessible system, it attempts to log into a domain or local system account named 'Administrator' using several common passwords. A detailed overview of how the worm works – along with the password dictionary Morto uses – was done by Imperva.
July 13: the ZeroAccess rootkit (also known as Sirefef or max++) was discovered.
September 1: Duqu is a worm thought to be related to the Stuxnet worm. The Laboratory of Cryptography and System Security (CrySyS Lab) of the Budapest University of Technology and Economics in Hungary discovered the threat, analysed the malware, and wrote a 60-page report naming the threat Duqu. Duqu gets its name from the prefix "~DQ" it gives to the names of files it creates.
2012
May: Flame – also known as Flamer, sKyWIper, and Skywiper – a modular computer malware that attacks computers running Microsoft Windows. Used for targeted cyber espionage in Middle Eastern countries. Its discovery was announced on 28 May 2012 by MAHER Center of Iranian National Computer Emergency Response Team (CERT), Kaspersky Lab and CrySyS Lab of the Budapest University of Technology and Economics. CrySyS stated in their report that "sKyWIper is certainly the most sophisticated malware we encountered during our practice; arguably, it is the most complex malware ever found".
August 16: Shamoon is a computer virus designed to target computers running Microsoft Windows in the energy sector. Symantec, Kaspersky Lab, and Seculert announced its discovery on August 16, 2012.
September 20: NGRBot is a worm that uses the IRC network for file transfer, sending and receiving commands between zombie network machines and the attacker's IRC server, and monitoring and controlling network connectivity and intercept. It employs a user-mode rootkit technique to hide and steal its victim's information. This family of bot is also designed to infect HTML pages with inline frames (iframes), causing redirections, blocking victims from getting updates from security/antimalware products, and killing those services. The bot is designed to connect via a predefined IRC channel and communicate with a remote botnet.
2013
September: The CryptoLocker Trojan horse is discovered. CryptoLocker encrypts the files on a user's hard drive, then prompts them to pay a ransom to the developer to receive the decryption key. In the following months, several copycat ransomware Trojans were also discovered.
December: The Gameover ZeuS Trojan is discovered. This type of virus steals one's login details on popular Web sites that involve monetary transactions. It works by detecting a login page, then proceeds to inject malicious code into the page, keystroke logging the computer user's details.
December: Linux.Darlloz targets the Internet of things and infects routers, security cameras, set-top boxes by exploiting a PHP vulnerability.
2014
November: The Regin Trojan horse is discovered. Regin is a dropper, primarily spread via spoofed Web pages. Once installed, it quietly downloads additional malware, making it difficult for signature-based anti-virus programs to detect. It is believed to have been created by the United States and United Kingdom as a tool for espionage and mass surveillance.
2015
The BASHLITE malware is leaked leading to a massive spike in DDoS attacks.
Linux.Wifatch is revealed to the general public. It is found to attempt to secure devices from other more malicious malware.
2016
January: A trojan named "MEMZ" is created. The creator, Leurak, explained that the trojan was intended merely as a joke. The trojan alerts the user to the fact that it is a trojan and warns them that if they proceed, the computer may no longer be usable. It contains complex payloads that corrupt the system, displaying artifacts on the screen as it runs. Once run, the application cannot be closed without causing further damage to the computer, which will stop functioning properly regardless. When the computer is restarted, in place of the bootsplash is a message that reads "Your computer has been trashed by the MEMZ Trojan. Now enjoy the Nyan cat...", which follows with an animation of the Nyan Cat.
February: Ransomware Locky, with its over 60 derivatives, spread throughout Europe and infected several million computers. At the height of the spread, over five thousand computers per hour were infected in Germany alone. Although ransomware was not a new thing at the time, insufficient cyber security as well as a lack of standards in IT was responsible for the high number of infections. Even up-to-date antivirus and internet security software was unable to protect systems from early versions of Locky.
February: Tiny Banker Trojan (Tinba) makes headlines. Since its discovery, it has been found to have infected more than two dozen major banking institutions in the United States, including TD Bank, Chase, HSBC, Wells Fargo, PNC and Bank of America. Tiny Banker Trojan uses HTTP injection to force the user's computer to believe that it is on the bank's website. This spoof page will look and function just as the real one. The user then enters their information to log on, at which point Tinba can launch the bank webpage's "incorrect login information" return, and redirect the user to the real website. This is to trick the user into thinking they had entered the wrong information and proceed as normal, although now Tinba has captured the credentials and sent them to its host.
August: Journalists and researchers report the discovery of spyware, called Pegasus, developed and distributed by a private company which can and has been used to infect iOS and Android smartphones often – based on 0-day exploits – without the need for any user-interaction or significant clues to the user and then be used to exfiltrate data, track user locations, capture film through its camera, and activate the microphone at any time. The investigation suggests it was used on many targets worldwide and revealed its use for e.g. governments' espionage on journalists, opposition politicians, activists, business people and others.
September: Mirai creates headlines by launching some of the most powerful and disruptive DDoS attacks seen to date by infecting the Internet of Things. Mirai ends up being used in the DDoS attack on 20 September 2016 on the Krebs on Security site which reached 620 Gbit/s. Ars Technica also reported a 1 Tbit/s attack on French web host OVH. On 21 October 2016 multiple major DDoS attacks in DNS services of DNS service provider Dyn occurred using Mirai malware installed on a large number of IoT devices, resulting in the inaccessibility of several high-profile websites such as GitHub, Twitter, Reddit, Netflix, Airbnb and many others. The attribution of the attack to the Mirai botnet was originally reported by BackConnect Inc., a security firm.
2017
May: The WannaCry ransomware attack spreads globally. Exploits revealed in the NSA hacking toolkit leak of late 2016 were used to enable the propagation of the malware. Shortly after the news of the infections broke online, a UK cybersecurity researcher in collaboration with others found and activated a "kill switch" hidden within the ransomware, effectively halting the initial wave of its global propagation. The next day, researchers announced that they had found new variants of the malware without the kill switch.
June: The Petya attack spreads globally affecting Windows systems. Researchers at Symantec reveal that this ransomware uses the EternalBlue exploit, similar to the one used in the WannaCry ransomware attack.
September: The Xafecopy Trojan attacks 47 countries, affecting only Android operating systems. Kaspersky Lab identified it as a malware from the Ubsod family, stealing money through click based WAP billing systems.
September: A new variety of Remote Access Trojan (RAT), Kedi RAT, is distributed in a Spear Phishing Campaign. The attack targeted Citrix users. The Trojan was able to evade usual system scanners. Kedi Trojan had all the characteristics of a common Remote Access Trojan and it could communicate to its Command and Control center via Gmail using common HTML, HTTP protocols.
2018
February: Thanatos, a ransomware, becomes the first ransomware program to accept ransom payment in Bitcoin Cash.
2019
November: Titanium is an advanced backdoor malware, developed by the PLATINUM APT.
2020s
2024
Researchers Nassi, Cohen, and Bitton developed a computer worm called Morris II, targeting generative AI email assistants to steal data and send spam, thereby breaching the security protections of systems like ChatGPT and Gemini. Conducted in a test environment, this research highlights the security risks of multimodal large language models (LLMs) that now generate text, images, and videos. Generative AI systems, which operate on prompts, can be exploited through weaponized prompts. For instance, hidden text on a webpage could instruct an LLM to perform malicious activities, such as phishing for bank details. While generative AI worms like Morris II have not yet been observed in the wild, their potential threat is a concern for the tech industry.
March 29: XZ Utils backdoor is discovered.
April 1: Linux's WALLSCAPE bug is discovered.
June 29: Brain Cipher, a variant of LockBit 3.0 ransomware, is identified as the malware behind the attacks on Indonesia's national data center.
See also
Helpful worm
History of computer viruses
List of security hacking incidents
Timeline of computing 2020–present
References
External links
A short history of hacks, worms, and cyberterror by Mari Keefe, Computerworld, April 2009
5th Utility Ltd list of the 10 worst computer viruses of all time
viruses
Malware
Trojan horses | Timeline of computer viruses and worms | [
"Technology"
] | 7,629 | [
"Malware",
"Computer security exploits"
] |
174,762 | https://en.wikipedia.org/wiki/Reflection%20nebula | In astronomy, reflection nebulae are clouds of interstellar dust which might reflect the light of a nearby star or stars. The energy from the nearby stars is insufficient to ionize the gas of the nebula to create an emission nebula, but is enough to give sufficient scattering to make the dust visible. Thus, the frequency spectrum shown by reflection nebulae is similar to that of the illuminating stars. Among the microscopic particles responsible for the scattering are carbon compounds (e.g. diamond dust) and compounds of other elements such as iron and nickel. The latter two are often aligned with the galactic magnetic field and cause the scattered light to be slightly polarized.
Discovery
Analyzing the spectrum of the nebula associated with the star Merope in the Pleiades, Vesto Slipher concluded in 1912 that the source of its light is most likely the star itself, and that the nebula reflects light from the star (and that of the star Alcyone). Calculations by Ejnar Hertzsprung in 1913 lent credence to that hypothesis. Edwin Hubble further distinguished between the emission and reflection nebulae in 1922.
Reflection nebulae are usually blue because the scattering is more efficient for blue light than red (this is the same scattering process that gives us blue skies and red sunsets).
Reflection nebulae and emission nebulae are often seen together and are sometimes both referred to as diffuse nebulae.
Some 500 reflection nebulae are known. A blue reflection nebula can also be seen in the same area of the sky as the Trifid Nebula. The supergiant star Antares, which is very red (spectral class M1), is surrounded by a large, yellow reflection nebula.
Reflection nebulae may also be the site of star formation.
Luminosity law
In 1922, Edwin Hubble published the result of his investigations on bright nebulae. One part of this work is the Hubble luminosity law for reflection nebulae, which relates the angular size (R) of the nebula to the apparent magnitude (m) of the associated star:
$$5 \log R = -m + k$$
where $k$ is a constant that depends on the sensitivity of the measurement.
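A minimal numeric sketch of the law, taking the logarithm as base 10 (the calibration constant and magnitude below are made-up illustrative values, not measurements):

```python
def nebula_angular_size(m: float, k: float) -> float:
    """Solve Hubble's luminosity law 5*log10(R) = -m + k for the
    angular size R of a reflection nebula."""
    return 10 ** ((k - m) / 5)

# Hypothetical values: a star of apparent magnitude 6 and k = 11.
print(nebula_angular_size(m=6.0, k=11.0))  # 10**((11-6)/5) = 10.0
```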
See also
Variable nebula
List of reflected light sources
References
Bibliography
James B. Kaler (1997). Cosmic Clouds -- Birth, Death, and Recycling in the Galaxy, Scientific American Library, Freeman, New York, 1998.
Nebulae | Reflection nebula | [
"Astronomy"
] | 477 | [
"Nebulae",
"Astronomical objects"
] |
174,773 | https://en.wikipedia.org/wiki/Sharpless%20asymmetric%20dihydroxylation | Sharpless asymmetric dihydroxylation (also called the Sharpless bishydroxylation) is the chemical reaction of an alkene with osmium tetroxide in the presence of a chiral quinine ligand to form a vicinal diol. The reaction has been applied to alkenes of virtually every substitution pattern, and high enantioselectivities are often realized, with the chiral outcome controlled by the choice of dihydroquinidine (DHQD) vs dihydroquinine (DHQ) as the ligand. Asymmetric dihydroxylation reactions are also highly site selective, providing products derived from reaction of the most electron-rich double bond in the substrate.
It is common practice to perform this reaction using a catalytic amount of osmium tetroxide, which after reaction is regenerated with reoxidants such as potassium ferricyanide or N-methylmorpholine N-oxide. This dramatically reduces the amount of the highly toxic and very expensive osmium tetroxide needed. These four reagents are commercially available premixed ("AD-mix"). The mixture containing (DHQ)2-PHAL is called AD-mix-α, and the mixture containing (DHQD)2-PHAL is called AD-mix-β.
Such chiral diols are important in organic synthesis. The introduction of chirality into nonchiral reactants through usage of chiral catalysts is an important concept in organic synthesis. This reaction was developed principally by K. Barry Sharpless building on the already known racemic Upjohn dihydroxylation, for which he was awarded a share of the 2001 Nobel Prize in Chemistry.
Background
Alkene dihydroxylation by osmium tetroxide is an old and extremely useful method for the functionalization of alkenes. However, since osmium(VIII) reagents like osmium tetroxide (OsO4) are expensive and extremely toxic, it has become desirable to develop catalytic variants of this reaction. Some stoichiometric terminal oxidants that have been employed in these catalytic reactions include potassium chlorate, hydrogen peroxide (Milas hydroxylation), N-Methylmorpholine N-oxide (NMO, Upjohn dihydroxylation), tert-butyl hydroperoxide (tBHP), and potassium ferricyanide (K3Fe(CN)6). K. Barry Sharpless was the first to develop a general, reliable enantioselective alkene dihydroxylation, referred to as the Sharpless asymmetric dihydroxylation (SAD). Low levels of OsO4 are combined with a stoichiometric ferricyanide oxidant in the presence of chiral nitrogenous ligands to create an asymmetric environment around the oxidant.
Reaction mechanism
The reaction mechanism of the Sharpless dihydroxylation begins with the formation of the osmium tetroxide – ligand complex (2). A [3+2]-cycloaddition with the alkene (3) gives the cyclic intermediate 4. Basic hydrolysis liberates the diol (5) and the reduced osmate (6). Methanesulfonamide (CH3SO2NH2) has been identified as a catalyst to accelerate this step of the catalytic cycle and is frequently used as an additive to allow non-terminal alkene substrates to react efficiently at 0 °C. Finally, the stoichiometric oxidant regenerates the osmium tetroxide – ligand complex (2).
The mechanism of the Sharpless asymmetric dihydroxylation has been extensively studied and a potential secondary catalytic cycle has been identified (see below). If the osmylate ester intermediate is oxidized before it dissociates, then an osmium(VIII)-diol complex is formed which may then dihydroxylate another alkene. Dihydroxylations resulting from this secondary pathway generally suffer lower enantioselectivities than those resulting from the primary pathway. A schematic showing this secondary catalytic pathway is shown below. This secondary pathway may be suppressed by using a higher molar concentration of ligand.
[2+2] vs [3+2] debate
In his original report Sharpless suggested the reaction proceeded via a [2+2] cycloaddition of OsO4 onto the alkene to give an osmaoxetane intermediate (see below). This intermediate would then undergo a 1,1- migratory insertion to form an osmylate ester which after hydrolysis would give the corresponding diol. In 1989 E. J. Corey published a slightly different variant of this reaction and suggested that the reaction most likely proceeded via a [3+2] cycloaddition of OsO4 with the alkene to directly generate the osmylate ester. Corey's suggestion was based on a previous computational study done by Jorgensen and Hoffmann which determined the [3+2] reaction pathway to be the lower energy pathway. In addition Corey reasoned that steric repulsions in the octahedral intermediate would disfavor the [2+2] pathway.
The next ten years saw numerous publications by both Corey and Sharpless, each supporting their own version of the mechanism. While these studies were not able to distinguish between the two proposed cyclization pathways, they were successful in shedding light on the mechanism in other ways. For example, Sharpless provided evidence for the reaction proceeding via a step-wise mechanism. Additionally both Sharpless and Corey showed that the active catalyst possesses a U-shaped chiral binding pocket. Corey also showed that the catalyst obeys Michaelis-Menten kinetics and acts like an enzyme pocket with a pre-equilibrium. In the February 1997 issue of the Journal of the American Chemical Society Sharpless published the results of a study (a Hammett analysis) which he claimed supported a [2+2] cyclization over a [3+2]. In the October issue of the same year, however, Sharpless also published the results of another study conducted in collaboration with Ken Houk and Singleton which provided conclusive evidence for the [3+2] mechanism. Thus Sharpless was forced to concede the decade-long debate.
Catalyst structure
Crystallographic evidence has shown that the active catalyst possesses a pentacoordinate osmium species held in a U-shaped binding pocket. The nitrogenous ligand holds OsO4 in a chiral environment making approach of one side of the olefin sterically hindered while the other is not.
Catalytic systems
Numerous catalytic systems and modifications have been developed for the SAD. Given below is a brief overview of the various components of the catalytic system:
Catalytic Oxidant: This is always OsO4, however certain additives can coordinate to the osmium(VIII) and modify its electronic properties. OsO4 is often generated in situ from K2OsO2(OH)4 (an Os(VI) species) due to safety concerns.
Chiral Auxiliary: This is usually some kind of cinchona alkaloid.
Stoichiometric Oxidant:
Peroxides were among the first stoichiometric oxidants to be used in this catalytic cycle; see the Milas hydroxylation. Drawbacks of peroxides include chemoselectivity issues.
Trialkylammonium N-oxides, such as NMO—as in the Upjohn Reaction—and trimethylamine N-oxide.
Potassium ferricyanide (K3Fe(CN)6) is the most commonly used stoichiometric oxidant for the reaction, and is the oxidant that comes in the commercially available AD-mix preparations.
Additive:
Citric acid: Osmium tetroxide is an electrophilic oxidant and as such reacts slowly with electron-deficient olefins. It has been found that the rate of oxidation of electron-deficient olefins can be accelerated by maintaining the pH of the reaction slightly acidic. On the other hand, a high pH can increase the rate of oxidation of internal olefins, and also increase the enantiomeric excess (e.e.) for the oxidation of terminal olefins.
Regioselectivity
In general Sharpless asymmetric dihydroxylation favors oxidation of the more electron-rich alkene (scheme 1).
In this example SAD gives the diol of the alkene closest to the (electron-withdrawing) para-methoxybenzoyl group, albeit in low yield. This is likely due to the ability of the aryl ring to interact favorably with the active site of the catalyst via π-stacking. In this manner the aryl substituent can act as a directing group.
Stereoselectivity
The diastereoselectivity of SAD is set primarily by the choice of ligand (i.e. AD-mix-α versus AD-mix-β), however factors such as pre-existing chirality in the substrate or neighboring functional groups may also play a role. In the example shown below, the para-methoxybenzoyl substituent serves primarily as a source of steric bulk to allow the catalyst to differentiate the two faces of the alkene.
It is often difficult to obtain high diastereoselectivity on cis-disubstituted alkenes when both ends of the olefin have similar steric environments.
Further reading
See also
Asymmetric catalytic oxidation
Milas hydroxylation
Upjohn dihydroxylation
Sharpless aminohydroxylation
Lemieux–Johnson oxidation - olefin to diol, followed by oxidative cleavage to form two aldehydes
References
Organic redox reactions
Name reactions | Sharpless asymmetric dihydroxylation | [
"Chemistry"
] | 2,057 | [
"Name reactions",
"Organic redox reactions",
"Organic reactions"
] |
174,781 | https://en.wikipedia.org/wiki/Emission%20nebula | An emission nebula is a nebula formed of ionized gases that emit light of various wavelengths. The most common source of ionization is high-energy ultraviolet photons emitted from a nearby hot star. Among the several different types of emission nebulae are H II regions, in which star formation is taking place and young, massive stars are the source of the ionizing photons; and planetary nebulae, in which a dying star has thrown off its outer layers, with the exposed hot core then ionizing them.
General information
Usually, a young star will ionize part of the same cloud from which it was born, although only massive, hot stars can release sufficient energy to ionize a significant part of a cloud. In many emission nebulae, an entire cluster of young stars is contributing energy.
Stars that are hotter than 25,000 K generally emit enough ionizing ultraviolet radiation (wavelength shorter than 91.2 nm) to cause the emission nebulae around them to be brighter than the reflection nebulae. The radiation emitted by cooler stars is generally not energetic enough to ionize hydrogen, which results in the reflection nebulae around these stars giving off less light than the emission nebulae.
The nebula's color depends on its chemical composition and degree of ionization. Due to the prevalence of hydrogen in interstellar gas, and its relatively low energy of ionization, many emission nebulae appear red due to strong emissions of the Balmer series. If more energy is available, other elements will be ionized, and green and blue nebulae become possible. By examining the spectra of nebulae, astronomers infer their chemical content. Most emission nebulae are about 90% hydrogen, with the remaining helium, oxygen, nitrogen, and other elements.
Some of the most prominent emission nebulae visible from the northern celestial hemisphere are the North America Nebula (NGC 7000) and the Veil Nebula NGC 6960/6992 in Cygnus, while the Lagoon Nebula M8 / NGC 6523 in Sagittarius and the Orion Nebula M42 are visible in the southern celestial hemisphere. Further in the southern hemisphere is the bright Carina Nebula NGC 3372.
Emission nebulae often have dark areas in them which result from clouds of dust which block the light.
Many nebulae are made up of both reflection and emission components such as the Trifid Nebula.
Image gallery
See also
Herbig–Haro object
N41 (nebula)
Further reading
References
Nebulae | Emission nebula | [
"Astronomy"
] | 495 | [
"Nebulae",
"Astronomical objects"
] |
174,782 | https://en.wikipedia.org/wiki/Gravitational%20field | In physics, a gravitational field or gravitational acceleration field is a vector field used to explain the influences that a body extends into the space around itself. A gravitational field is used to explain gravitational phenomena, such as the gravitational force field exerted on another massive body. It has dimension of acceleration (L/T2) and it is measured in units of newtons per kilogram (N/kg) or, equivalently, in meters per second squared (m/s2).
In its original concept, gravity was a force between point masses. Following Isaac Newton, Pierre-Simon Laplace attempted to model gravity as some kind of radiation field or fluid, and since the 19th century, explanations for gravity in classical mechanics have usually been taught in terms of a field model, rather than a point attraction. It results from the spatial gradient of the gravitational potential field.
In general relativity, rather than two particles attracting each other, the particles distort spacetime via their mass, and this distortion is what is perceived and measured as a "force". In such a model one states that matter moves in certain ways in response to the curvature of spacetime, and that there is either no gravitational force, or that gravity is a fictitious force.
Gravity is distinguished from other forces by its obedience to the equivalence principle.
Classical mechanics
In classical mechanics, a gravitational field is a physical quantity. A gravitational field can be defined using Newton's law of universal gravitation. Determined in this way, the gravitational field around a single particle of mass $M$ is a vector field consisting at every point of a vector pointing directly towards the particle. The magnitude of the field at every point is calculated by applying the universal law, and represents the force per unit mass on any object at that point in space. Because the force field is conservative, there is a scalar potential energy per unit mass, $\Phi$, at each point in space associated with the force fields; this is called gravitational potential. The gravitational field equation is
$$\mathbf{g} = \frac{\mathbf{F}}{m} = \frac{\mathrm{d}^2\mathbf{R}}{\mathrm{d}t^2} = -\frac{GM}{|\mathbf{R}|^2}\hat{\mathbf{R}} = -\nabla\Phi$$
where $\mathbf{F}$ is the gravitational force, $m$ is the mass of the test particle, $\mathbf{R}$ is the radial vector of the test particle relative to the mass (or for Newton's second law of motion, which is a time dependent function, a set of positions of test particles each occupying a particular point in space for the start of testing), $t$ is time, $G$ is the gravitational constant, and $\nabla$ is the del operator.
This includes Newton's law of universal gravitation, and the relation between gravitational potential and field acceleration. $\frac{\mathrm{d}^2\mathbf{R}}{\mathrm{d}t^2}$ and $\frac{\mathbf{F}}{m}$ are both equal to the gravitational acceleration $\mathbf{g}$ (equivalent to the inertial acceleration, so same mathematical form, but also defined as gravitational force per unit mass). The negative signs are inserted since the force acts antiparallel to the displacement. The equivalent field equation in terms of mass density $\rho$ of the attracting mass is:
$$\nabla\cdot\mathbf{g} = -\nabla^2\Phi = -4\pi G\rho$$
which contains Gauss's law for gravity, and Poisson's equation for gravity. Newton's law implies Gauss's law, but not vice versa; see Relation between Gauss's and Newton's laws.
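As a quick consistency sketch using the point-mass field defined above, the flux of $\mathbf{g}$ through a sphere of radius $r$ centered on the mass reproduces the enclosed-mass form of Gauss's law:
$$\oint_S \mathbf{g}\cdot\mathrm{d}\mathbf{A} = \left(-\frac{GM}{r^2}\right)\left(4\pi r^2\right) = -4\pi G M$$
which is the surface-integral counterpart of $\nabla\cdot\mathbf{g} = -4\pi G\rho$; the converse inference does not hold, which is why Gauss's law alone is weaker than Newton's law.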
These classical equations are differential equations of motion for a test particle in the presence of a gravitational field, i.e. setting up and solving these equations allows the motion of a test mass to be determined and described.
The field around multiple particles is simply the vector sum of the fields around each individual particle. A test particle in such a field will experience a force that equals the vector sum of the forces that it would experience in these individual fields. This is
$$\mathbf{g}_j = \sum_{i \neq j} \mathbf{g}_i = -G \sum_{i \neq j} m_i \frac{\mathbf{R} - \mathbf{R}_i}{|\mathbf{R} - \mathbf{R}_i|^3}$$
i.e. the gravitational field on mass $m_j$ is the sum of all gravitational fields due to all other masses $m_i$, except the mass $m_j$ itself. $\mathbf{R}_i$ is the position vector of the gravitating particle $i$, and $\mathbf{R}$ is that of the test particle.
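A minimal numerical sketch of this superposition (the masses and positions are purely illustrative; the single-mass case is checked against Earth's surface gravity):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def net_field(test_point, masses, positions):
    """Net Newtonian field g at test_point: the vector sum of
    -G*m_i*(R - R_i)/|R - R_i|^3 over every point mass."""
    g = np.zeros(3)
    for m, pos in zip(masses, positions):
        r = test_point - pos  # vector from the mass to the test point
        g += -G * m * r / np.linalg.norm(r) ** 3
    return g

# Illustrative check: one Earth-like mass, field at one Earth radius.
masses = [5.972e24]                        # kg
positions = [np.array([0.0, 0.0, 0.0])]
surface = np.array([6.371e6, 0.0, 0.0])    # m
print(net_field(surface, masses, positions))  # ~[-9.82, 0, 0] m/s^2
```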
General relativity
In general relativity, the Christoffel symbols play the role of the gravitational force field and the metric tensor plays the role of the gravitational potential.
In general relativity, the gravitational field is determined by solving the Einstein field equations
$$G_{\mu\nu} = \kappa T_{\mu\nu}$$
where $T_{\mu\nu}$ is the stress–energy tensor, $G_{\mu\nu}$ is the Einstein tensor, and $\kappa$ is the Einstein gravitational constant. The latter is defined as $\kappa = 8\pi G/c^4$, where $G$ is the Newtonian constant of gravitation and $c$ is the speed of light.
These equations are dependent on the distribution of matter, stress and momentum in a region of space, unlike Newtonian gravity, which depends only on the distribution of matter. The fields themselves in general relativity represent the curvature of spacetime. General relativity states that being in a region of curved space is equivalent to accelerating up the gradient of the field. By Newton's second law, this will cause an object to experience a fictitious force if it is held still with respect to the field. This is why a person will feel himself pulled down by the force of gravity while standing still on the Earth's surface. In general the gravitational fields predicted by general relativity differ in their effects only slightly from those predicted by classical mechanics, but there are a number of easily verifiable differences, one of the most well known being the deflection of light in such fields.
Embedding diagram
Embedding diagrams are three-dimensional graphs commonly used to educationally illustrate gravitational potential by drawing gravitational potential fields as a gravitational topography, depicting the potentials as so-called gravitational wells or spheres of influence.
See also
Classical mechanics
Entropic gravity
Gravitation
Gravitational energy
Gravitational potential
Gravitational wave
Gravity map
Newton's law of universal gravitation
Newton's laws of motion
Potential energy
Specific force
Speed of gravity
Tests of general relativity
References
Theories of gravity
Geodesy
General relativity | Gravitational field | [
"Physics",
"Mathematics"
] | 1,114 | [
"Applied mathematics",
"Theoretical physics",
"Theory of relativity",
"General relativity",
"Theories of gravity",
"Geodesy"
] |
174,815 | https://en.wikipedia.org/wiki/Backslash | The backslash is a mark used mainly in computing and mathematics. It is the mirror image of the common slash . It is a relatively recent mark, first documented in the 1930s. It is sometimes called a hack, whack, escape (from C/UNIX), reverse slash, slosh, downwhack, backslant, backwhack, bash, reverse slant, reverse solidus, and reversed virgule.
History
So far, efforts to identify either the origin of this character or its purpose before the 1960s have not been successful. The earliest known reference found to date is a 1937 maintenance manual from the Teletype Corporation with a photograph showing the keyboard of its Kleinschmidt keyboard perforator WPE-3 using the Wheatstone system. The symbol was called the "diagonal key", and given a non-standard Morse code.
In June 1960, IBM published an "Extended character set standard" that includes the symbol at 0x19. In September 1961, Bob Bemer (IBM) proposed to the X3.2 standards committee that the characters [, ], and \ be made part of the proposed standard, describing the backslash as a "reverse division operator" and citing its prior use by Teletype in telecommunications. In particular, he said, the \ was needed so that the ALGOL Boolean operators ∧ (logical conjunction) and ∨ (logical disjunction) could be composed using /\ and \/ respectively. The Committee adopted these changes into the draft American Standard (subsequently called ASCII) at its November 1961 meeting.
These operators were used for min and max in early versions of the C programming language supplied with Unix V6 and V7.
Usage
Programming languages
In many programming languages such as C, Perl, PHP, Python and Unix scripting languages, and in many file formats such as JSON, the backslash is used as an escape character, to indicate that the character following it should be treated specially (if it would otherwise be treated literally), or literally (if it would otherwise be treated specially). For instance, inside a C string literal the sequence \n produces a newline byte instead of an 'n', and the sequence \" produces an actual double quote rather than the special meaning of the double quote ending the string. An actual backslash is produced by a double backslash \\.
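A short Python illustration (Python string literals follow the same C-style escape convention described here):

```python
print("line one\nline two")      # \n is an actual newline, not the letter n
print("she said \"hi\"")         # \" is a literal double quote
print("a single backslash: \\")  # \\ collapses to one backslash character
print(len("\\"))                 # 1 -- two source characters, one in the string
```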
Regular expression languages use it the same way, changing subsequent literal characters into metacharacters and vice versa. For instance \||b searches for either '|' or 'b': the first bar is escaped and searched for, the second is not escaped and acts as an "or".
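The example above can be checked with Python's re module (the sample text is arbitrary):

```python
import re

# \| is an escaped, literal '|'; the bare | is alternation ("or").
matches = re.findall(r"\||b", "a|b|c")
print(matches)  # ['|', 'b', '|'] -- matches both literal bars and 'b'
```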
Outside quoted strings, the only common use of backslash is to ignore ("escape") a newline immediately after it. In this context it may be called a "continued line" as the current line continues into the next one. Some software replaces the backslash+newline with a space.
To support computers that lacked the backslash character, the C trigraph ??/ was added, which is equivalent to a backslash. Since this can escape the next character, which may itself be a ?, the primary modern use may be for code obfuscation. Support for trigraphs in C++ was removed in C++17, and support for them in C is planned to be removed in C23.
In Visual Basic (and some other BASIC dialects) the backslash is used as an operator symbol to indicate integer division. This rounds toward zero.
The ALGOL 68 programming language uses the "\" as its Decimal Exponent Symbol. ALGOL 68 has the choice of 4 Decimal Exponent Symbols: e, E, \, or 10. Examples: 6.0221415e23, 6.0221415E23, or 6.0221415\23.
In APL, \ is called Expand when used to insert fill elements into arrays, and Scan when used to produce prefix reduction (cumulative fold).
In PHP version 5.3 and higher, the backslash is used to indicate a namespace.
In Haskell, the backslash is used both to introduce special characters and to introduce lambda functions (since it is a reasonable approximation in ASCII of the Greek letter λ, lambda).
Filenames
MS-DOS 2.0, released 1983, copied the idea of a hierarchical file system from Unix and thus used the (forward) slash as the directory separator. Possibly on the insistence of IBM, Microsoft added the backslash to allow paths to be typed at the command line interpreter prompt, while retaining compatibility with MS-DOS 1.0 (in which / was the command-line option indicator. Typing "dir/w" gave the "wide" option to the "dir" command, so some other method was needed if one actually wanted to run a program called w inside a directory called dir). Except for COMMAND.COM, all other parts of the operating system accept both characters in a path, but the Microsoft convention remains to use a backslash, and APIs that return paths use backslashes. In some versions, the option character can be changed from / to - via SWITCHAR, which allows COMMAND.COM to preserve / in the command name.
The Microsoft Windows family of operating systems inherited the MS-DOS behavior and so still support either character – but individual Windows programs and sub-systems may, wrongly, only accept the backslash as a path delimiter, or may misinterpret a forward slash if it is used as such. Some programs will only accept forward slashes if the path is placed in double-quotes. The failure of Microsoft's security features to recognize unexpected-direction slashes in local and Internet paths, while other parts of the operating system still act upon them, has led to some serious lapses in security. Resources that should not be available have been accessed with paths using particular mixes of forward slashes and backslashes.
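Python's pathlib makes the dual-separator behavior easy to see (a sketch; as noted above, individual programs may still treat the two characters differently):

```python
from pathlib import PureWindowsPath

a = PureWindowsPath("C:/Users/example/notes.txt")   # forward slashes
b = PureWindowsPath(r"C:\Users\example\notes.txt")  # backslashes
print(a == b)  # True: both separators denote the same Windows path
print(a)       # C:\Users\example\notes.txt (normalized to backslashes)
```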
Text markup
The backslash is used in the TeX typesetting system and in RTF files to begin markup tags.
In USFM, the backslash is used to mark format features for editing Bible translations.
In caret notation, ^\ represents the control character 0x1C, file separator. This is entirely a coincidence and has nothing to do with its use in file paths.
Mathematics
A backslash-like symbol (∖) is used for the set difference, as in A ∖ B.
The backslash is also sometimes used to denote the right coset space.
Especially when describing computer algorithms, it is common to define backslash so that a\b is equivalent to ⌊a/b⌋. This is integer division that rounds down, not towards zero.
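The flooring convention defined here and the round-toward-zero convention of Visual Basic's \ operator (see above) agree for positive operands but differ for negative ones, as a quick Python comparison shows:

```python
import math

a, b = -7, 2
print(a // b)             # -4: floor division, the a\b of this section
print(math.trunc(a / b))  # -3: truncation toward zero, like VB's a \ b
```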
In MATLAB and GNU Octave the backslash is used for left matrix divide, while the (forward) slash is for right matrix divide.
Confusion with ¥ and other characters
In the Japanese encodings ISO 646-JP (a 7-bit code based on ASCII), JIS X 0201 (an 8-bit code), and Shift JIS (a multi-byte encoding which is 8-bit for ASCII), the code point 0x5C that would be used for backslash in ASCII is instead rendered as a yen sign ¥. Due to extensive use of the 005C code point to represent the yen sign, even today some fonts such as MS Mincho render the backslash character as a ¥, so the characters at Unicode code points 00A5 (¥) and 005C (\) both render as ¥ when these fonts are selected. Computer programs still treat 005C as a backslash in these environments but display it as a yen sign, causing confusion, especially in MS-DOS filenames.
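A small Python check suggests the substitution happens at the font level, not in the encoding itself (Python's shift_jis codec maps byte 0x5C to the backslash code point):

```python
ch = b"\x5c".decode("shift_jis")
print(ch == "\\", hex(ord(ch)))  # True 0x5c: decoded as a backslash...
print(hex(ord("\u00a5")))        # 0xa5: ...while the yen sign itself is U+00A5
```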
Several other ISO 646 versions also replace backslash with other characters, including ₩ (Korean), Ö (German, Swedish), Ø (Danish, Norwegian), ç (French) and Ñ (Spanish), leading to similar problems, though with less lasting impact compared to the yen sign.
In 1991, RFC 1345 suggested // as a unique two-character mnemonic that might be used in internet standards as "a practical way of identifying [this] character, without reference to a coded character set and its code in [that] coded character set". Consequently, this style may be seen in early Internet Engineering Task Force documents.
Notes
References
External links
Punctuation
Typographical symbols
"Mathematics"
] | 1,660 | [
"Symbols",
"Typographical symbols"
] |
174,818 | https://en.wikipedia.org/wiki/Wood%20gas | Wood gas is a fuel gas that can be used for furnaces, stoves, and vehicles. During the production process, biomass or related carbon-containing materials are gasified within the oxygen-limited environment of a wood gas generator to produce a combustible mixture. In some gasifiers this process is preceded by pyrolysis, where the biomass or coal is first converted to char, releasing methane and tar rich in polycyclic aromatic hydrocarbons.
In stark contrast with synthesis gas, which is an almost pure mixture of carbon monoxide and hydrogen, wood gas also contains a variety of organic compounds ("distillates") that require scrubbing for use in other applications. Depending on the kind of biomass, a variety of contaminants are produced that will condense out as the gas cools. When producer gas is used to power cars and boats or distributed to remote locations it is necessary to scrub the gas to remove the materials that can condense and clog carburetors and gas lines. Anthracite and coke are preferred for automotive use, because they produce the smallest amount of contamination, allowing smaller, lighter scrubbers to be used.
History
The first wood gasifier was apparently built by Gustav Bischof in 1839. The first vehicle powered by wood gas was built by T.H. Parker in 1901.
Around 1900, many cities delivered fuel gases (centrally produced, typically from coal) to residences. Natural gas came into use only in the 1930s.
Wood gas vehicles were used during World War II as a consequence of the rationing of fossil fuels. In Germany alone, around 500,000 "producer gas" vehicles were in use at the end of the war. Trucks, buses, tractors, motorcycles, ships, and trains were equipped with a wood gasification unit. In 1942, when wood gas had not yet reached the height of its popularity, there were about 73,000 wood gas vehicles in Sweden, 65,000 in France, 10,000 in Denmark, and almost 8,000 in Switzerland. In 1944, Finland had 43,000 "woodmobiles", of which 30,000 were buses and trucks, 7,000 private vehicles, 4,000 tractors and 600 boats.
Wood gasifiers are still manufactured in China and Russia for automobiles and as power generators for industrial applications. Trucks retrofitted with wood gasifiers are used in North Korea in rural areas, particularly on the roads of the east coast.
Production
A wood gasifier takes wood chips, sawdust, charcoal, coal, rubber or similar materials as fuel and burns these incompletely in a fire box, producing wood gas, solid ash and soot, the latter of which have to be removed periodically from the gasifier. The wood gas can then be filtered for tars and soot/ash particles, cooled and directed to an engine or fuel cell.
Most of these engines have strict purity requirements of the wood gas, so the gas often has to pass through extensive gas cleaning in order to remove or convert, i.e., "crack", tars and particles. The removal of tar is often accomplished by using a water scrubber. Running wood gas in an unmodified gasoline-burning internal combustion engine may lead to problematic accumulation of unburned compounds.
The quality of the gas from different "gasifiers" varies a great deal. Staged gasifiers, where pyrolysis and gasification occur separately instead of in the same reaction zone as was the case in the World War II gasifiers, can be engineered to produce essentially tar-free gas (less than 1 mg/m3), while single-reactor fluidized bed gasifiers may exceed 50,000 mg/m³ tar. The fluidized bed reactors have the advantage of being much more compact, with more capacity per unit volume and price. Depending on the intended use of the gas, tar can be beneficial, as well by increasing the heating value of the gas.
The heat of combustion of "producer gas" – a term used in the United States, meaning wood gas produced for use in a combustion engine – is rather low compared to other fuels. Taylor (1985) reports that producer gas has a lower heat of combustion of 5.7 MJ/kg versus 55.9 MJ/kg for natural gas and 44.1 MJ/kg for gasoline. The heat of combustion of wood is typically 15–18 MJ/kg. Presumably, these values can vary somewhat from sample to sample. The same source reports the following chemical composition by volume, which most likely is also variable:
(Table of chemical composition by volume: nitrogen, carbon monoxide, hydrogen, carbon dioxide, methane, oxygen; the percentage values were not preserved.)
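To put the heating values just quoted in perspective, here is a minimal back-of-the-envelope comparison; the values are Taylor's figures from above (MJ/kg), with the wood entry taken as the midpoint of the quoted 15–18 MJ/kg range:

```python
# Rough energy-density comparison using the lower heats of combustion
# quoted above (MJ/kg); "wood" is the midpoint of the quoted 15-18 range.
lhv = {"producer gas": 5.7, "natural gas": 55.9, "gasoline": 44.1, "wood": 16.5}

ratio = lhv["gasoline"] / lhv["producer gas"]
print(f"~{ratio:.1f} kg of producer gas carries the energy of 1 kg of gasoline")
# -> ~7.7, which helps explain the bulky gasifier units on wartime vehicles.
```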
The composition of the gas is strongly dependent on the gasification process, the gasification medium (air, oxygen or steam), and the fuel moisture. Steam-gasification processes typically yield high hydrogen contents, downdraft fixed bed gasifiers yield high nitrogen concentrations and low tar loads, while updraft fixed bed gasifiers yield high tar loads.
During the production of charcoal for black powder, the volatile wood gas is vented. Extremely-high-surface-area carbon results, suitable for use as a fuel in black powder.
See also
Biogas
Biochar – charcoal from biomass
Combined wood gas and biochar production
Gasification
Gasification – outdoor wood boilers
Producer gas
Rocket stove
Synthesis gas
Water gas
References
External links
Synthetic fuels
Fuel gas
Biofuels
Automotive engine technologies
Pyrolysis
Wood products
Industrial gases
Synthetic fuel technologies | Wood gas | [
"Chemistry"
] | 1,166 | [
"Pyrolysis",
"Petroleum technology",
"Oil shale technology",
"Organic reactions",
"Industrial gases",
"Synthetic fuel technologies",
"Chemical process engineering"
] |
174,822 | https://en.wikipedia.org/wiki/Producer%20gas | Producer gas is fuel gas that is manufactured by blowing air and steam simultaneously through a coke or coal fire. It mainly consists of carbon monoxide (CO) and hydrogen (H2), as well as substantial amounts of nitrogen (N2). The calorific value of producer gas is low (mainly because of its high nitrogen content), and the technology is obsolete. Improvements over producer gas, also obsolete, include water gas, where the solid fuel is treated intermittently with air and steam, and, far more efficiently, synthesis gas, where the solid fuel is replaced with methane.
In the US, producer gas may also be referred to by other names based on the fuel used for production, such as wood gas. Producer gas may also be referred to as suction gas. The term suction refers to the way the air was drawn into the gas generator by an internal combustion engine. Wood gas is produced in a gasifier.
Production
Producer gas is generally made from coke, or other carbonaceous material such as anthracite. Air is passed over the red-hot carbonaceous fuel and carbon monoxide is produced. The reaction is exothermic and proceeds as follows:
Formation of producer gas from air and carbon:
C + O2 → CO2, +97,600 calories/mol
CO2 + C → 2CO, −38,800 calories/mol
2C + O2 → 2CO, +58,800 calories/mol (per mol of O2, i.e. per mol of the reaction as written)
Reactions between steam and carbon:
H2O + C → H2 + CO, −28,800 calories/mol
2H2O + C → 2H2 + CO2, −18,800 calories/mol
Reaction between steam and carbon monoxide:
H2O + CO → CO2 + H2, +10,000 calories/mol
CO2 + H2 → CO + H2O, −10,000 calories/mol
All heats are quoted per mole of the reaction as written.
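Since the air reactions are exothermic and the steam reactions endothermic, the heats above fix roughly how much steam a given air blast can support. A minimal sketch of that balance, using only the figures quoted (real gasifiers also lose heat to the walls and to the sensible heat of the gas, so the practical ratio is lower):

```python
# Ideal heat balance from the reaction heats quoted above (calories per
# mole of reaction as written); real units lose heat, so this is an upper bound.
q_combustion = 97_600   # C + O2 -> CO2, exothermic
q_steam = 28_800        # H2O + C -> H2 + CO, endothermic

print(f"1 mol of C burned to CO2 sustains ~{q_combustion / q_steam:.1f} mol "
      "of the steam-carbon reaction")
# -> ~3.4: this is why air and steam are blown through the fire together --
#    the air reactions keep the bed hot enough for the steam reactions to run.
```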
The average composition of ordinary producer gas according to Latta was: CO2: 5.8%; O2: 1.3%; CO: 19.8%; H2: 15.1%; CH4: 1.3%; N2: 56.7%; 136 B.T.U. gross per cu. ft. The "ideal" producer gas was considered to be 34.7% carbon monoxide (carbonic oxide) and 65.3% nitrogen. After "scrubbing" to remove tar, the gas may be used to power gas turbines (which are well-suited to fuels of low calorific value), spark-ignited engines (where 100% petrol fuel replacement is possible) or diesel internal combustion engines (where 15% to 40% of the original diesel fuel requirement is still used to ignite the gas). During World War II in Britain, plants were built in the form of trailers for towing behind commercial vehicles, especially buses, to supply gas as a replacement for petrol (gasoline) fuel. A range of about 80 miles for every charge of anthracite was achieved.
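As a rough consistency check on Latta's figures, the gross heating value can be estimated from the combustible fractions alone. The per-component heating values below are approximate textbook numbers assumed for this sketch, not taken from this article:

```python
# Approximate gross (higher) heating values in MJ per cubic metre at
# standard conditions -- assumed textbook values, good to a few percent.
hhv = {"CO": 12.6, "H2": 12.7, "CH4": 39.8}
frac = {"CO": 0.198, "H2": 0.151, "CH4": 0.013}   # from Latta's composition above

mj_per_m3 = sum(frac[g] * hhv[g] for g in hhv)
btu_per_cuft = mj_per_m3 * 26.84   # 1 MJ/m^3 is about 26.84 BTU per cubic foot
print(f"{mj_per_m3:.2f} MJ/m^3 ~ {btu_per_cuft:.0f} BTU gross per cu. ft")
# -> roughly 132 BTU/cu. ft, in reasonable agreement with the quoted 136.
```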
In old movies and stories, when there is a description of suicide by "turning on the gas" and leaving an oven door open without lighting the flame, the reference was to coal gas or town gas. As this gas contained a significant amount of carbon monoxide it was quite toxic. Most town gas was also odorized, if it did not have its own odor. Modern 'natural gas' used in homes is far less toxic, and has a mercaptan added to it for odor for identifying leaks.
Various names are used for producer gas, air gas and water gas generally depending on the fuel source, process or end use including:
Air gas: also called "power gas", "generator gas", or "Siemens' producer gas". Produced from various fuels by partial combustion with air. Air gas consists principally of carbon monoxide with nitrogen from the air used and a small amount of hydrogen. This term is not commonly used, and tends to be used synonymously with wood gas.
Producer gas: Air gas modified by simultaneous injection of water or steam to maintain a constant temperature and to obtain a gas of higher heat content by enriching the air gas with H2. Current usage often includes air gas.
Semi-water gas: Producer gas.
Blue water-gas: Air, water or producer gas produced from clean fuels such as coke, charcoal and anthracite which contain insufficient hydrocarbon impurities for use as illuminating gas. Blue gas burns with a blue flame and does not produce light except when used with a Welsbach gas mantle.
Lowe's Water Gas: Water gas with a secondary pyrolysis reactor to introduce hydrocarbon gases for illuminating purposes.
Carburetted gas: Any gas produced by a process similar to Lowe's in which hydrocarbons are added for illumination purposes.
Wood gas: produced from wood by partial combustion. Sometimes used in a gasifier to power cars with ordinary internal combustion engines.
Other similar fuel gasses
Coal gas or illuminating gas: Produced from coal by distillation.
Water gas: Produced by injection of steam into fuel preheated by combustion with air. The reaction is endothermic so the fuel must be continually re-heated to keep the reaction going. This was usually done by alternating the steam with an air stream. This name is sometimes used incorrectly when describing carburetted blue water gas simply as blue water gas.
Coke oven gas: Coke ovens give off a gas exactly similar to illuminating gas, part of which is used to heat the coal. There may be a large excess, however, which is used for industrial purposes after it has been purified.
Syngas, or synthesis gas: (from synthetic gas or synthesis gas) can be applied to any of the above gasses, but generally refers to modern industrial processes, such as natural gas reforming, hydrogen production, and processes for synthetic production of methane and other hydrocarbons.
City (Town) gas: any of the above-manufactured gases including producer gas containing sufficient hydrocarbons to produce a bright flame for illumination purposes, originally produced from coal, for sale to consumers and municipalities.
Uses and advantages of producer gas
It is used in furnaces. When furnaces are big, no scrubbing is required; when the furnace is small, scrubbing is necessary to avoid choking the small burners. In gas engines, it is used after scrubbing.
There is no loss due to smoke and convection current.
The quantity of air required for the combustion of producer gas is not much above the theoretical quantity, whereas when burning solid fuel far more than the theoretical quantity is required. With solid fuels, the larger quantity of exhaust takes away considerable heat with it.
Producer gas is more easily transmitted than solid fuel.
Gas-fired furnaces can be maintained at a constant temperature.
With gas, an oxidising and reducing flame can be obtained.
The heat lost in converting solid fuel into producer gas can be kept low enough for the conversion to be economical.
Smoke nuisance can be avoided.
Producer gas can be produced even from the poorest quality of fuel.
See also
Fuel gas
Gasification
Gasifier
History of manufactured gas
Pyrolysis
Water gas
Wood gas
References
Mellor, J.W., Intermediate Inorganic Chemistry, Longmans, Green and Co., 1941, page 211
Adlam, G.H.J. and Price, L.S., A Higher School Certificate Inorganic Chemistry, John Murray, 1944, page 309
External links
Paxman Suction Gas Producers
Fuel gas
Industrial gases | Producer gas | [
"Chemistry"
] | 1,579 | [
"Chemical process engineering",
"Industrial gases"
] |
174,823 | https://en.wikipedia.org/wiki/Dark%20nebula | A dark nebula or absorption nebula is a type of interstellar cloud, particularly molecular clouds, that is so dense that it obscures the visible wavelengths of light from objects behind it, such as background stars and emission or reflection nebulae. The extinction of the light is caused by interstellar dust grains in the coldest, densest parts of molecular clouds. Clusters and large complexes of dark nebulae are associated with Giant Molecular Clouds. Isolated small dark nebulae are called Bok globules. As with other interstellar dust or material, the objects it obscures can be observed only at longer wavelengths, using radio astronomy or infrared astronomy.
Dark clouds appear so because of sub-micrometre-sized dust particles, coated with frozen carbon monoxide and nitrogen, which effectively block the passage of light at visible wavelengths. Also present are molecular hydrogen, atomic helium, C18O (CO with oxygen as the 18O isotope), CS, NH3 (ammonia), H2CO (formaldehyde), c-C3H2 (cyclopropenylidene) and a molecular ion N2H+ (diazenylium), all of which are relatively transparent. These clouds are the spawning grounds of stars and planets, and understanding their development is essential to understanding star formation.
The form of such dark clouds is very irregular: they have no clearly defined outer boundaries and sometimes take on convoluted serpentine shapes. The closest and largest dark nebulae are visible to the naked eye, since they are the least obscured by stars in between Earth and the nebula, and because they have the largest angular size, appearing as dark patches against the brighter background of the Milky Way like the Coalsack Nebula and the Great Rift. These naked-eye objects are sometimes known as dark cloud constellations and take on a variety of names.
In the inner molecular regions of dark nebulae, important events take place, such as the formation of stars and masers.
Complexes and constellations
Along with molecular clouds, dark nebulae make up molecular cloud complexes.
In the night sky, dark nebulae form apparent dark cloud constellations.
See also
List of dark nebulae
Bok globule
Dark Cloud (disambiguation)
References
Nebulae
Cosmic dust | Dark nebula | [
"Astronomy"
] | 458 | [
"Nebulae",
"Astronomical objects",
"Outer space",
"Cosmic dust"
] |
174,844 | https://en.wikipedia.org/wiki/Actinolite | Actinolite is an amphibole silicate mineral with the chemical formula Ca2(Mg,Fe)5Si8O22(OH)2.
Etymology
The name actinolite is derived from the Greek word aktis (), meaning "beam" or "ray", because of the mineral's fibrous nature.
Mineralogy
Actinolite is an intermediate member in a solid-solution series between magnesium-rich tremolite, Ca2Mg5Si8O22(OH)2, and iron-rich ferro-actinolite, Ca2Fe5Si8O22(OH)2. Mg and Fe ions can be freely exchanged in the crystal structure. Like tremolite, asbestiform actinolite is regulated as asbestos.
Occurrence
Actinolite is commonly found in metamorphic rocks, such as contact aureoles surrounding cooled intrusive igneous rocks. It also occurs as a product of metamorphism of magnesium-rich limestones.
The old mineral name uralite is at times applied to an alteration product of primary pyroxene by a mixture composed largely of actinolite. The metamorphosed gabbro or diabase rock bodies, referred to as epidiorite, contain a considerable amount of this uralitic alteration.
Fibrous actinolite is one of the six recognised types of asbestos, the fibres being so small that they can enter the lungs and damage the alveoli. Actinolite asbestos was once mined along Jones Creek at Gundagai, Australia.
Gemology
Some forms of actinolite are used as gemstones. One is nephrite, one of the two types of jade (the other being jadeite, a variety of pyroxene).
Another gem variety is the chatoyant form known as cat's-eye actinolite. This stone is translucent to opaque, and green to yellowish green color. This variety has had the misnomer jade cat's-eye. Transparent actinolite is rare and is faceted for gem collectors. Major sources for these forms of actinolite are Taiwan and Canada. Other sources are Madagascar, Tanzania, and the United States.
See also
Classification of minerals
List of minerals
References
Hurlbut, Cornelius S.; Klein, Cornelis, 1985, Manual of Mineralogy, 20th ed., John Wiley and Sons, New York
Inosilicates
Calcium minerals
Magnesium minerals
Iron(II) minerals
Asbestos
Amphibole group
Monoclinic minerals
Minerals in space group 12
Gemstones | Actinolite | [
"Physics",
"Environmental_science"
] | 487 | [
"Toxicology",
"Materials",
"Asbestos",
"Gemstones",
"Matter"
] |
174,850 | https://en.wikipedia.org/wiki/Andalusite | Andalusite is an aluminium nesosilicate mineral with the chemical formula Al2SiO5. This mineral was called andalousite by Delamétherie, who thought it came from Andalusia, Spain. It soon became clear that it was a locality error, and that the specimens studied were actually from El Cardoso de la Sierra, in the Spanish province of Guadalajara, not Andalusia.
Andalusite is trimorphic with kyanite and sillimanite, being the lower pressure mid temperature polymorph. At higher temperatures and pressures, andalusite may convert to sillimanite. Thus, as with its other polymorphs, andalusite is an aluminosilicate index mineral, providing clues to depth and pressures involved in producing the host rock.
Varieties
The variety chiastolite commonly contains dark inclusions of carbon or clay which form a cruciform pattern when shown in cross-section. This stone was known at least from the sixteenth century, being taken to many European countries, as a souvenir, by pilgrims returning from Santiago de Compostela.
Viridine is a green variety of andalusite in which manganese(III) substitutes for aluminium; the same substitution is also responsible for the colour. Kanonaite is a closely related greenish-black mineral in which manganese(III) replaces much of the aluminium.
A clear variety found in Brazil and Sri Lanka can be cut into a gemstone. Faceted andalusite stones give a play of red, green, and yellow colors that resembles a muted form of iridescence, although the colors are actually the result of unusually strong pleochroism.
Occurrence
Andalusite is a common metamorphic mineral which forms under low pressure and low to high temperatures. The minerals kyanite and sillimanite are polymorphs of andalusite, each occurring under different temperature-pressure regimes and are therefore rarely found together in the same rock. Because of this the three minerals are a useful tool to help identify the pressure-temperature paths of the host rock in which they are found. It is particularly associated with pelitic metamorphic rocks such as mica schist.
The world's highest concentration of andalusite is found in the Glomel mine in Côtes-d'Armor (France) which accounts for 25% of the global production of this mineral. South Africa possesses the largest portion of the world's known andalusite deposits.
Uses
Andalusite is used as a refractory in furnaces, kilns and other industrial processes.
See also
List of minerals
References
Aluminium minerals
Nesosilicates
Orthorhombic minerals
Minerals in space group 58
Industrial minerals
Gemstones | Andalusite | [
"Physics"
] | 569 | [
"Materials",
"Gemstones",
"Matter"
] |
174,861 | https://en.wikipedia.org/wiki/Axinite | Axinite is a brown to violet-brown, or reddish-brown, bladed group of minerals composed of calcium aluminium borosilicate. Axinite is pyroelectric and piezoelectric.
The axinite group includes:
Axinite-(Fe) or ferroaxinite, Ca2Fe2+Al2BOSi4O15(OH) iron rich, clove-brown, brown, plum-blue, pearl-gray
Axinite-(Mg) or magnesioaxinite, Ca2MgAl2BOSi4O15(OH) magnesium rich, pale blue to pale violet; light brown to light pink
Axinite-(Mn) or manganaxinite, Ca2Mn2+Al2BOSi4O15(OH) manganese rich, honey-yellow, clove-brown, brown to blue
Tinzenite (CaFe2+Mn2+)3Al2BOSi4O15(OH) iron – manganese intermediate, yellow, brownish yellow-green
Axinite is sometimes used as a gemstone.
Gallery
References
Calcium minerals
Iron(II) minerals
Manganese(II) minerals
Aluminium minerals
Sorosilicates
Triclinic minerals
Luminescent minerals
Gemstones
Minerals in space group 2
Hydroxide minerals | Axinite | [
"Physics",
"Chemistry"
] | 274 | [
"Luminescence",
"Luminescent minerals",
"Materials",
"Gemstones",
"Matter"
] |
174,867 | https://en.wikipedia.org/wiki/Dudley%20R.%20Herschbach | Dudley Robert Herschbach (born June 18, 1932) is an American chemist at Harvard University. He won the 1986 Nobel Prize in Chemistry jointly with Yuan T. Lee and John C. Polanyi "for their contributions concerning the dynamics of chemical elementary processes". Herschbach and Lee specifically worked with molecular beams, performing crossed molecular beam experiments that enabled a detailed molecular-level understanding of many elementary reaction processes. Herschbach is a member of the Board of Sponsors of the Bulletin of the Atomic Scientists.
Early life and education
Herschbach was born in San Jose, California on June 18, 1932. The eldest of six children, he grew up in a rural area. He graduated from Campbell High School, where he played football. Offered both athletic and academic scholarships to Stanford University, Herschbach chose the academic. His freshman advisor, Harold S. Johnston, hired him as a summer research assistant, and taught him chemical kinetics in his senior year. His master's research involved calculating Arrhenius A-factors for gas-phase reactions. Herschbach received a B.S. in mathematics in 1954 and an M.S. in chemistry in 1955 from Stanford University.
Herschbach then attended Harvard University, where he earned an A.M. in physics in 1956 and a Ph.D. in chemical physics in 1958 under the direction of Edgar Bright Wilson. At Harvard, Herschbach examined tunnel splitting in molecules, using microwave spectroscopy. He was awarded a three-year Junior Fellowship in the Society of Fellows at Harvard, lasting from 1957 to 1959.
Research
In 1959, Herschbach joined the University of California at Berkeley, where he was appointed an assistant professor of chemistry and became an associate professor in 1961. At Berkeley, he and graduate students George Kwei and James Norris constructed a crossed-beam instrument large enough for reactive scattering experiments involving alkali atoms and various molecular partners. His interest in studying elementary chemical processes in molecular-beam reactive collisions challenged an often-accepted belief that "collisions do not occur in crossed molecular beams". The results of his studies of K + CH3I were the first to provide a detailed view of an elementary collision, demonstrating a direct rebound process in which the KI product recoiled from an incoming K atom beam. Subsequent studies of K + Br2 resulted in the discovery that the hot-wire surface ionization detector they were using was potentially contaminated by previous use, and had to be pre-treated to obtain reliable results. Changes to the instrumentation yielded reliable results, including the observation that the K + Br2 reaction involved a stripping reaction, in which the KBr product scattered forward from the incident K atom beam. As the research continued, it became possible to correlate the electronic structure of reactants and products with the reaction dynamics.
In 1963, Herschbach returned to Harvard University as a professor of chemistry. There he continued his work on molecular-beam reactive dynamics, working with graduate students Sanford Safron and Walter Miller on the reactions of alkali atoms with alkali halides. In 1967, Yuan T. Lee joined the lab as a postdoctoral student, and Herschbach, Lee, and graduate students Doug MacDonald and Pierre LeBreton began to construct a "supermachine" for studying collisions such as Cl + Br2 and hydrogen and halogen reactions.
His most acclaimed work, for which he won the Nobel Prize in Chemistry in 1986 with Yuan T. Lee and John C. Polanyi, was his collaboration with Yuan T. Lee on crossed molecular beam experiments. Crossing collimated beams of gas-phase reactants allows partitioning of energy among translational, rotational, and vibrational modes of the product molecules—a vital aspect of understanding reaction dynamics. For their contributions to reaction dynamics, Herschbach and Lee are considered to have helped create a new field of research in chemistry. Herschbach is a pioneer in molecular stereodynamics, measuring and theoretically interpreting the role of angular momentum and its vector properties in chemical reaction dynamics.
In the course of his life's work in research, Herschbach has published over 400 scientific papers. Herschbach has applied his broad expertise in both the theory and practice of chemistry and physics to diverse problems in chemical physics, including theoretical work on dimensional scaling. One of his studies demonstrated that methane is, in fact, spontaneously formed at high-pressure and high-temperature environments such as those deep in the Earth's mantle; this finding is an exciting indication of abiogenic hydrocarbon formation, meaning that the actual amount of hydrocarbons available on Earth might be much larger than conventionally assumed under the assumption that all hydrocarbons are fossil fuels. His recent work also includes a collaboration with Steven Brams studying approval voting.
Science and education
Herschbach's teaching ranges from graduate seminars on chemical kinetics to an introductory undergraduate course in general chemistry that he taught for many years at Harvard, and described as his "most challenging assignment".
Herschbach has been a strong proponent of science education and science among the general public, and frequently gives lectures to students of all ages, imbuing them with his infectious enthusiasm for science and his playful spirit of discovery. Herschbach has also lent his voice to the animated television show The Simpsons for the episode "Treehouse of Horror XIV", where he is seen presenting the Nobel Prize in Physics to Professor Frink.
In October 2010, Herschbach participated in the USA Science and Engineering Festival's Lunch with a Laureate program, where middle and high school students get to engage in an informal conversation with a Nobel Prize-winning scientist over a brown-bag lunch. He is also a member of the Festival's advisory board. Herschbach has participated in the Distinguished Lecture Series of the Research Science Institute (RSI), a summer research program for high school students held at MIT.
Although still an active research professor at Harvard, he joined the Texas A&M University faculty September 1, 2005, as a professor of physics, teaching one semester per year in the chemical physics program. As of 2010, he holds the title of professor emeritus at Harvard and remains well known for his involvement as a lecturer and mentor in the Harvard research community. He and his wife Georgene Herschbach also served for several years as the co-Masters of Currier House, where they were highly involved in undergraduate life in addition to their full-time duties.
Public service
He is a board member of the Center for Arms Control and Non-Proliferation and was the chairman of the board for Society for Science & the Public from 1992 to 2010. Herschbach is a member of the Board of Sponsors of the Bulletin of the Atomic Scientists. In 2003 he was one of 22 Nobel Laureates who signed the Humanist Manifesto.
He is also an Eagle Scout and recipient of the Distinguished Eagle Scout Award (DESA).
Family
Herschbach's wife, Georgene Herschbach, served as the Associate Dean of Harvard College for Undergraduate Academic Programs. Prior to retirement in 2009, she chaired Harvard College's influential Committee on Undergraduate Education.
Awards and honors
Herschbach is a Fellow of the American Academy of Arts and Sciences, the National Academy of Sciences, the American Philosophical Society and the Royal Chemical Society of Great Britain. In addition to the Nobel Prize in Chemistry, he has received a wide variety of national and international awards. These include the National Medal of Science, the ACS Award in Pure Chemistry, the Linus Pauling Medal, the Irving Langmuir Award, the Golden Plate Award of the American Academy of Achievement, and the American Institute of Chemists Gold Medal. He endowed the Herschbach Medal, which is given by the biennial Conference on Molecular Collision Dynamics, to recognize "outstanding theoretical and experimental contributions to the field."
Publications
Herschbach, D. R. & V. W. Laurie. "Anharmonic Potential Constants and Their Dependence Upon Bond Length", University of California, Lawrence Radiation Laboratory, Berkeley, United States Department of Energy (through predecessor agency the Atomic Energy Commission) (January 1961).
Herschbach, D. R. "Reactive Collisions in Crossed Molecular Beams", University of California, Lawrence Radiation Laboratory, Berkeley, United States Department of Energy (through predecessor agency the Atomic Energy Commission) (February 1962).
Laurie, V. W. & D. R. Herschbach. "The Determination of Molecular Structure from Rotational Spectra", Stanford University, University of California, Lawrence Radiation Laboratory, Berkeley, United States Department of Energy (through predecessor agency the Atomic Energy Commission) (July 1962).
Zare, R. N. & D. R. Herschbach. "Proposed Molecular Beam Determination of Energy Partition in the Photodissociation of Polyatomic Molecules", University of California, Lawrence Radiation Laboratory, Berkeley, United States Department of Energy (through predecessor agency the Atomic Energy Commission) (January 29, 1964).
References
External links
Video of a talk by Herschbach on Linus Pauling
1932 births
Living people
Scientists from San Jose, California
Members of the United States National Academy of Sciences
National Medal of Science laureates
Nobel laureates in Chemistry
American Nobel laureates
American physical chemists
Fellows of the American Physical Society
American people of German descent
Stanford University alumni
Harvard University alumni
Harvard University faculty
University of California, Berkeley College of Letters and Science faculty
Society for Science & the Public
Articles containing video clips
Chemical physicists
Campbell High School (California) alumni | Dudley R. Herschbach | [
"Chemistry"
] | 1,914 | [
"Chemical physicists"
] |
174,868 | https://en.wikipedia.org/wiki/Stridulation | Stridulation is the act of producing sound by rubbing together certain body parts. This behavior is mostly associated with insects, but other animals are known to do this as well, such as a number of species of fish, snakes and spiders. The mechanism is typically that of one structure with a well-defined lip, ridge, or nodules (the "scraper" or plectrum) being moved across a finely-ridged surface (the "file" or stridulitrum—sometimes called the pars stridens) or vice versa, and vibrating as it does so, like the dragging of a phonograph needle across a vinyl record. Sometimes it is the structure bearing the file which resonates to produce the sound, but in other cases it is the structure bearing the scraper, with both variants possible in related groups. Common onomatopoeic words for the sounds produced by stridulation include chirp and chirrup.
Arthropod stridulation
Insects and other arthropods stridulate by rubbing together two parts of the body. These are referred to generically as the stridulatory organs.
The mechanism is best known in crickets, mole crickets, and grasshoppers, but other insects which stridulate include Curculionidae (weevils and bark beetles), Cerambycidae (longhorned beetles), Mutillidae ("velvet ants"), Reduviidae (assassin bugs), Buprestidae (metallic wood-boring beetles), Hydrophilidae (water scavenger beetles), Cicindelinae (tiger beetles), Scarabaeidae (scarab beetles), Glaresidae ("enigmatic scarabs"), larval Lucanidae (stag beetles), Passalidae (Bessbugs), Geotrupidae (earth-boring dung beetles), Alydidae (broad-headed bugs), Largidae (bordered plant bugs), Miridae (leaf bugs), Corixidae (water boatmen, notably Micronecta scholtzi), various ants (including the Black imported fire ant, Solenopsis richteri), some stick insects such as Pterinoxylus spinulosus, and some species of Agromyzidae (leaf-mining flies). While cicadas are well-known for sound production via abdominal tymbal organs, it has been demonstrated that some species can produce sounds via stridulation, as well.
Stridulation is also known in a few tarantulas (Arachnida), certain centipedes, such as Scutigera coleoptrata, and some pill millipedes (Diplopoda, Oniscomorpha).
It is also widespread among decapod crustaceans, e.g., rock lobsters. Most spiders are silent, but some tarantula species are known to stridulate. When disturbed, Theraphosa blondi, the Goliath tarantula, can produce a rather loud hissing noise by rubbing together the bristles on its legs. This is said to be audible to a distance of up to 15 feet (4.5 m). One of the wolf spiders, Schizocosa stridulans, produces low-frequency sounds by flexing its abdomen (tremulation, rather than stridulation) or high-frequency stridulation by using the cymbia on the ends of its pedipalps. In most species of spiders, stridulation commonly occurs by males during sexual encounters. In the species Holocnemus pluchei, females also possess stridulatory organs, and both sexes engage in stridulation. In the species Steatoda nobilis, the males produce stridulation sounds during mating.
The anatomical parts used to produce sound are quite varied: the most common system is that seen in grasshoppers and many other insects, where a hind leg scraper is rubbed against the adjacent forewing (in beetles and true bugs the forewings are hardened); in crickets and katydids a file on one wing is rubbed by a scraper on the other wing; in longhorned beetles, the back edge of the pronotum scrapes against a file on the mesonotum; in various other beetles, the sound is produced by moving the head—up/down or side-to-side—while in others the abdominal tergites are rubbed against the elytra; in assassin bugs, the tip of the mouthparts scrapes along a ridged groove in the prosternum; in velvet ants the back edge of one abdominal tergite scrapes a file on the dorsal surface of the following tergite.
Stridulation in several of these examples is for attracting a mate, or as a form of territorial behaviour, but can also be a warning signal (acoustic aposematism, as in velvet ants and tarantulas). This kind of communication was first described by Slovenian biologist Ivan Regen (1868–1947).
Vertebrate stridulation
Some species of venomous snakes stridulate as part of a threat display. They arrange their body into a series of parallel C-shaped (counterlooped) coils that they rub together to produce a sizzling sound, rather like water on a hot plate. The best-known examples are members of the genus Echis (saw-scaled vipers), although those of the genus Cerastes (North African desert vipers) and at least one bush viper species, Atheris desaixi, do this as well. A bird species, the club-winged manakin, has a dedicated stridulation apparatus, while a species of mammal, the lowland streaked tenrec, (Hemicentetes semispinosus) produces a high-pitched noise by rubbing together specialised quills on its back.
References
External links
The British Library Sound Archive contains over 150,000 recordings of animal sounds and natural atmospheres from around the world.
Insect behavior
Sound
Animal sounds
Articles containing video clips | Stridulation | [
"Biology"
] | 1,252 | [
"Ethology",
"Behavior",
"Animal sounds"
] |
174,883 | https://en.wikipedia.org/wiki/Catastrophism | In geology, catastrophism is the theory that the Earth has largely been shaped by sudden, short-lived, violent events, possibly worldwide in scope.
This contrasts with uniformitarianism (sometimes called gradualism), according to which slow incremental changes, such as erosion, brought about all the Earth's geological features. The proponents of uniformitarianism held that the present was "the key to the past", and that all geological processes (such as erosion) throughout the past resembled those that can be observed today. Since the 19th-century disputes between catastrophists and uniformitarians, a more inclusive and integrated view of geologic events has developed, in which the scientific consensus accepts that some catastrophic events occurred in the geologic past, but regards these as explicable as extreme examples of natural processes which can occur.
Proponents of catastrophism proposed that each geological epoch ended with violent and sudden natural catastrophes such as major floods and the rapid formation of major mountain chains. Plants and animals living in the parts of the world where such events occurred became extinct, to be replaced abruptly by the new forms whose fossils defined the geological strata. Some catastrophists attempted to relate at least one such change to the Biblical account of Noah's flood.
The French scientist Georges Cuvier (1769–1832) popularised the concept of catastrophism in the early 19th century; he proposed that new life-forms had moved in from other areas after local floods, and avoided religious or metaphysical speculation in his scientific writings.
History
Geology and biblical beliefs
In the early development of geology, efforts were made in a predominantly Christian western society to reconcile biblical narratives of Creation and the universal flood with new concepts about the processes which had formed the Earth. The discovery of other ancient flood myths was taken as explaining why the flood story was "stated in scientific methods with surprising frequency among the Greeks", an example being Plutarch's account of the Ogygian flood.
Cuvier and the natural theologians
The leading scientific proponent of catastrophism in the early nineteenth century was the French anatomist and paleontologist Georges Cuvier. His motivation was to explain the patterns of extinction and faunal succession that he and others were observing in the fossil record. While he did speculate that the catastrophe responsible for the most recent extinctions in Eurasia might have been the result of the inundation of low-lying areas by the sea, he did not make any reference to Noah's flood. Nor did he ever make any reference to divine creation as the mechanism by which repopulation occurred following the extinction event. In fact Cuvier, influenced by the ideas of the Enlightenment and the intellectual climate of the French Revolution, avoided religious or metaphysical speculation in his scientific writings. Cuvier also believed that the stratigraphic record indicated that there had been several of these revolutions, which he viewed as recurring natural events, amid long intervals of stability during the history of life on Earth. This led him to believe the Earth was several million years old.
By contrast in Britain, where natural theology was influential during the early nineteenth century, a group of geologists including William Buckland and Robert Jameson interpreted Cuvier's work differently. Cuvier had written an introduction to a collection of his papers on fossil quadrupeds, discussing his ideas on catastrophic extinction. Jameson translated Cuvier's introduction into English, publishing it under the title Theory of the Earth. He added extensive editorial notes to the translation, explicitly linking the latest of Cuvier's revolutions with the biblical flood. The resulting essay was extremely influential in the English-speaking world. Buckland spent much of his early career trying to demonstrate the reality of the biblical flood using geological evidence. He frequently cited Cuvier's work, even though Cuvier had proposed an inundation of limited geographic extent and extended duration, whereas Buckland, to be consistent with the biblical account, was advocating a universal flood of short duration. Eventually, Buckland abandoned flood geology in favor of the glaciation theory advocated by Louis Agassiz, following a visit to the Alps where Agassiz demonstrated the effects of glaciation at first hand. As a result of the influence of Jameson, Buckland, and other advocates of natural theology, the nineteenth century debate over catastrophism took on much stronger religious overtones in Britain than elsewhere in Europe.
The rise of uniformitarianism in geology
Uniformitarian explanations for the formation of sedimentary rock and an understanding of the immense stretch of geological time, or, as the concept came to be known, deep time, were found in the writings of James Hutton, sometimes known as the father of geology, in the late 18th century. The geologist Charles Lyell built upon Hutton's ideas during the first half of the 19th century and amassed observations in support of the uniformitarian idea that the Earth's features had been shaped by the same geological processes that could be observed in the present, acting gradually over an immense period of time. Lyell presented his ideas in the influential three-volume work, Principles of Geology, published in the 1830s, which challenged theories about geological cataclysms proposed by proponents of catastrophism like Cuvier and Buckland. One of the key differences between catastrophism and uniformitarianism is that uniformitarianism observes the existence of vast timelines, whereas catastrophism does not. Today most geologists combine catastrophist and uniformitarianist standpoints, taking the view that Earth's history is a slow, gradual story punctuated by occasional natural catastrophic events that have affected Earth and its inhabitants.
From around 1850 to 1980, most geologists endorsed uniformitarianism ("The present is the key to the past") and gradualism (geologic change occurs slowly over long periods of time) and rejected the idea that cataclysmic events such as earthquakes, volcanic eruptions, or floods of vastly greater power than those observed at the present time, played any significant role in the formation of the Earth's surface. Instead they believed that the earth had been shaped by the long term action of forces such as volcanism, earthquakes, erosion, and sedimentation, that could still be observed in action today. In part, the geologists' rejection was fostered by their impression that the catastrophists of the early nineteenth century believed that God was directly involved in determining the history of Earth. Some of the theories about Catastrophism in the nineteenth and early twentieth centuries were connected with religion and catastrophic origins were sometimes considered miraculous rather than natural events.
The rise in uniformitarianism made the introduction of a new catastrophe theory very difficult. In 1923, J Harlen Bretz published a paper on the channeled scablands formed by glacial Lake Missoula in Washington State, USA. Bretz encountered resistance to his theories from the geology establishment of the day, kicking off an acrimonious 40-year debate. Finally, in 1979, Bretz received the Penrose Medal, the Geological Society of America's highest award.
Immanuel Velikovsky's views
In the 1950s, Immanuel Velikovsky propounded catastrophism in several popular books. He speculated that the planet Venus is a former "comet" which was ejected from Jupiter and subsequently 3,500 years ago made two catastrophic close passes by Earth, 52 years apart, and later interacted with Mars, which then had a series of near collisions with Earth which ended in 687 BCE, before settling into its current orbit. Velikovsky used this to explain the biblical plagues of Egypt, the biblical reference to the "Sun standing still" for a day (Joshua 10:12 & 13, explained by changes in Earth's rotation), and the sinking of Atlantis. Scientists vigorously rejected Velikovsky's conjectures.
Current application
Neocatastrophism is the explanation of sudden extinctions in the palaeontological record by high magnitude, low frequency events (such as asteroid impacts, super-volcanic eruptions, supernova gamma ray bursts, etc.), as opposed to the more prevalent geomorphological thought which emphasises low magnitude, high frequency events.
Luis Alvarez impact event hypothesis
In 1980, Walter and Luis Alvarez published a paper suggesting that an asteroid, estimated at roughly ten kilometres across, struck Earth 66 million years ago at the end of the Cretaceous period. The impact wiped out about 70% of all species, including the non-avian dinosaurs, leaving behind the Cretaceous–Paleogene boundary (K–T boundary). In 1990, a candidate crater marking the impact was identified at Chicxulub in the Yucatán Peninsula of Mexico.
Since then, the debate about the extinction of the dinosaurs and other mass extinction events has centered on whether the extinction mechanism was the asteroid impact, widespread volcanism (which occurred about the same time), or some other mechanism or combination. Most of the mechanisms suggested are catastrophic in nature.
The observation of the Shoemaker-Levy 9 cometary collision with Jupiter illustrated that catastrophic events occur as natural events.
Moon-formation
Modern theories also suggest that Earth's anomalously large moon was formed catastrophically. In a paper published in Icarus in 1975, William K. Hartmann and Donald R. Davis proposed that a catastrophic near-miss by a large planetesimal early in Earth's formation approximately 4.5 billion years ago blew out rocky debris, remelted Earth and formed the Moon, thus explaining the Moon's lesser density and lack of an iron core. The impact theory does have some faults; some computer simulations show the formation of a ring or multiple moons post impact, and the elemental compositions of the Earth and Moon are not quite the same.
See also
Alternatives to evolution by natural selection
Clarence King
Flood basalt
Glacial lake outburst flood
History of geology
History of paleontology
Megatsunami
Pensée (Immanuel Velikovsky Reconsidered)
Punctuated equilibrium
Supervolcano
Uniformitarianism
Volcanic winter
Zanclean flood
References
Sources
Further reading
Lewin, R.; Complexity, Dent, London, 1993, p. 75
Palmer, T.; Catastrophism, Neocatastrophism and Evolution. Society for Interdisciplinary Studies in association with Nottingham Trent University, 1994, (SIS) (Nottingham Trent University)
External links
Impact Tectonics
Catastrophism and Mass Extinctions
The Fall and Rise of Catastrophism
Catastrophism! Man, Myth and Mayhem in Ancient History and the Sciences
Answers In Creation - Catastrophism Article
Dictionary of the History of Ideas: "Uniformitarianism and Catastrophism"
History of Earth science
Creationism
Disasters
Geology theories | Catastrophism | [
"Biology"
] | 2,184 | [
"Creationism",
"Biology theories",
"Obsolete biology theories"
] |
174,901 | https://en.wikipedia.org/wiki/Hartree | The hartree (symbol: Eh), also known as the Hartree energy, is the unit of energy in the atomic units system, named after the British physicist Douglas Hartree. Its CODATA recommended value is approximately 4.3597×10−18 J (about 27.211 eV).
The hartree is approximately the negative of the electric potential energy of the electron in a hydrogen atom in its ground state and, by the virial theorem, approximately twice its ionization energy; the relationships are not exact because of the finite mass of the nucleus of the hydrogen atom and relativistic corrections.
The hartree is usually used as a unit of energy in atomic physics and computational chemistry: for experimental measurements at the atomic scale, the electronvolt (eV) or the reciprocal centimetre (cm−1) are much more widely used.
Other relationships
= 2 Ry = 2R∞hc
= α²mec²
= e²/(4πε0a0)
= ħ²/(mea0²)
≘ 27.211386 eV
≘ 2625.4996 kJ/mol
≘ 627.5095 kcal/mol
≘ 219474.63 cm−1
where:
ħ is the reduced Planck constant,
me is the electron mass,
e is the elementary charge,
a0 is the Bohr radius,
ε0 is the electric constant,
c is the speed of light in vacuum, and
α is the fine-structure constant.
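As a check on the relationships above, here is a minimal sketch that derives the hartree from Eh = α²mec² using CODATA constant values, and converts it to the units mentioned in this article:

```python
# Sketch: hartree from E_h = alpha^2 * m_e * c^2, then unit conversions.
alpha = 7.2973525693e-3    # fine-structure constant
m_e = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8           # speed of light, m/s
eV = 1.602176634e-19       # joules per electronvolt
N_A = 6.02214076e23        # Avogadro constant, 1/mol

E_h = alpha**2 * m_e * c**2
print(f"E_h = {E_h:.6e} J")                     # ~4.359745e-18 J
print(f"    = {E_h / eV:.4f} eV")               # ~27.2114 eV
print(f"    = {E_h * N_A / 1e3:.1f} kJ/mol")    # ~2625.5 kJ/mol
```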
Effective hartree units are used in semiconductor physics, where e² is replaced by e²/ε, with ε the static dielectric constant, and the electron mass is replaced by the effective band mass m*. The effective hartree in semiconductors becomes small enough to be measured in millielectronvolts (meV).
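To give a feel for its size, here is a sketch of the effective hartree for GaAs under those substitutions; the dielectric constant and effective-mass ratio below are illustrative literature values assumed for this example, not taken from this article:

```python
# Effective hartree: E_h* = E_h * (m*/m_e) / eps_r**2, per the substitutions above.
E_h_eV = 27.2114    # hartree in electronvolts
eps_r = 12.9        # assumed static dielectric constant of GaAs
m_ratio = 0.067     # assumed GaAs effective-mass ratio m*/m_e

print(f"E_h* ~ {E_h_eV * m_ratio / eps_r**2 * 1e3:.1f} meV")   # ~11 meV
```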
References
Units of energy
Physical constants | Hartree | [
"Physics",
"Mathematics"
] | 293 | [
"Physical quantities",
"Quantity",
"Units of energy",
"Physical constants",
"Units of measurement"
] |
174,907 | https://en.wikipedia.org/wiki/Covert%20channel | In computer security, a covert channel is a type of attack that creates a capability to transfer information objects between processes that are not supposed to be allowed to communicate by the computer security policy. The term, coined in 1973 by Butler Lampson, is defined as channels "not intended for information transfer at all, such as the service program's effect on system load," to distinguish it from legitimate channels that are subjected to access controls by COMPUSEC.
Characteristics
A covert channel is so called because it is hidden from the access control mechanisms of secure operating systems since it does not use the legitimate data transfer mechanisms of the computer system (typically, read and write), and therefore cannot be detected or controlled by the security mechanisms that underlie secure operating systems. Covert channels are exceedingly hard to install in real systems, and can often be detected by monitoring system performance. In addition, they suffer from a low signal-to-noise ratio and low data rates (typically, on the order of a few bits per second). They can also be removed manually with a high degree of assurance from secure systems by well established covert channel analysis strategies.
Covert channels are distinct from, and often confused with, legitimate channel exploitations that attack low-assurance pseudo-secure systems using schemes such as steganography or even less sophisticated schemes to disguise prohibited objects inside of legitimate information objects. The legitimate channel misuse by steganography is specifically not a form of covert channel.
Covert channels can tunnel through secure operating systems and require special measures to control. Covert channel analysis is the only proven way to control covert channels. By contrast, secure operating systems can easily prevent misuse of legitimate channels, so distinguishing both is important. Analysis of legitimate channels for hidden objects is often misrepresented as the only successful countermeasure for legitimate channel misuse. Because this amounts to analysis of large amounts of software, it was shown as early as 1972 to be impractical. Without being informed of this, some are misled to believe an analysis will "manage the risk" of these legitimate channels.
TCSEC criteria
The Trusted Computer System Evaluation Criteria (TCSEC) was a set of criteria, now deprecated, that had been established by the National Computer Security Center, an agency managed by the United States' National Security Agency.
Lampson's definition of a covert channel was paraphrased in the TCSEC specifically to refer to ways of transferring information from a higher classification compartment to a lower classification. In a shared processing environment, it is difficult to completely insulate one process from the effects another process can have on the operating environment. A covert channel is created by a sender process that modulates some condition (such as free space, availability of some service, wait time to execute) that can be detected by a receiving process.
The TCSEC defines two kinds of covert channels:
Storage channels - Communicate by modifying a "storage location", such as a hard drive.
Timing channels - Perform operations that affect the "real response time observed" by the receiver.
The TCSEC, also known as the Orange Book, requires analysis of covert storage channels to be classified as a B2 system and analysis of covert timing channels is a requirement for class B3.
Timing channels
The use of delays between packets transmitted over computer networks was first explored by Girling for covert communication. This work motivated many other works to establish or detect a covert communication and analyze the fundamental limitations of such scenarios.
Identifying covert channels
Ordinary things, such as the existence of a file or the time used for a computation, have been the media through which a covert channel communicates. Covert channels are not easy to find because these media are so numerous and frequently used.
Two relatively old techniques remain the standards for locating potential covert channels. One works by analyzing the resources of a system and other works at the source-code level.
Eliminating covert channels
The possibility of covert channels cannot be eliminated, although it can be significantly reduced by careful design and analysis.
The detection of a covert channel can be made more difficult by using characteristics of the communications medium for the legitimate channel that are never controlled or examined by legitimate users.
For example, a file can be opened and closed by a program in a specific, timed pattern that can be detected by another program, and the pattern can be interpreted as a string of bits, forming a covert channel.
Since it is unlikely that legitimate users will check for patterns of file opening and closing operations, this type of covert channel can remain undetected for long periods.
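A minimal sketch of such a channel, here modulating the mere existence of a file rather than open/close timing; the path and the slot length are arbitrary assumptions that both processes must agree on in advance:

```python
import os
import time

SHARED = "/tmp/covert.flag"   # hypothetical path readable by both processes
SLOT = 0.5                    # seconds per bit, agreed in advance

def send_bits(bits: str) -> None:
    for b in bits:
        if b == "1":
            open(SHARED, "w").close()   # file present encodes a 1
        else:
            try:
                os.remove(SHARED)       # file absent encodes a 0
            except FileNotFoundError:
                pass
        time.sleep(SLOT)

def receive_bits(n: int) -> str:
    # Samples once per slot; assumes both sides started in the same slot.
    out = []
    for _ in range(n):
        out.append("1" if os.path.exists(SHARED) else "0")
        time.sleep(SLOT)
    return "".join(out)
```

Both sides use only legitimate file operations that any access-control policy would permit, which is exactly what makes such channels hard to police.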
A similar case is port knocking.
In usual communications the timing of requests is irrelevant and unwatched.
Port knocking makes it significant.
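A sketch of the knocking (client) side; the target address, the port sequence, and the timing are assumptions for illustration, and the listening side would typically recover the sequence from firewall logs:

```python
import socket
import time

KNOCKS = [7000, 8000, 9000]   # hypothetical agreed-upon knock sequence
TARGET = "192.0.2.10"         # documentation address standing in for the server

for port in KNOCKS:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    try:
        s.connect((TARGET, port))   # expected to fail: the ports stay closed
    except OSError:
        pass                        # the attempt itself is the signal
    finally:
        s.close()
    time.sleep(0.2)                 # spacing so the knocks arrive in order
```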
Data hiding in OSI model
Handel and Sandford presented research where they study covert channels within the general design of network communication protocols. They employ the OSI model as a basis for their development, in which they characterize system elements having potential to be used for data hiding. The adopted approach has advantages because it considers standards, as opposed to specific network environments or architectures.
Their study does not aim to present foolproof steganographic schemes. Rather, they establish basic principles for data hiding in each of the seven OSI layers. Besides suggesting the use of the reserved fields of protocol headers (which are easily detectable) at higher network layers, they also propose the possibility of timing channels involving CSMA/CD manipulation at the physical layer.
Their work identifies covert channel merit such as:
Detectability: Covert channel must be measurable by the intended recipient only.
Indistinguishability: Covert channel must lack identification.
Bandwidth: number of data hiding bits per channel use.
Their covert channel analysis does not consider issues such as interoperability of these data hiding techniques with other network nodes, covert channel capacity estimation, effect of data hiding on the network in terms of complexity and compatibility. Moreover, the generality of the techniques cannot be fully justified in practice since the OSI model does not exist per se in functional systems.
Data hiding in LAN environment by covert channels
Girling was the first to analyze covert channels in a network environment. His work focuses on local area networks (LANs), in which three obvious covert channels (two storage channels and one timing channel) are identified. This demonstrates real examples of the bandwidth possible for simple covert channels in LANs. For a specific LAN environment, the author introduced the notion of a wiretapper who monitors the activities of a specific transmitter on the LAN. The covertly communicating parties are the transmitter and the wiretapper. According to Girling, covert information can be communicated in any of the following obvious ways:
By observing the addresses as approached by the transmitter. If total number of addresses a sender can approach is 16, then there is a possibility of secret communication having 4 bits for the secret message. The author termed this possibility as covert storage channel as it depends in what is sent (i.e., which address is approached by the sender).
In the same way, the other obvious storage covert channel would depend on the size of the frame sent by the sender. For the 256 possible sizes, the amount of covert information deciphered from one size of the frame would be of 8 bits. Again this scenario was termed as the covert storage channel.
The third scenario presented uses the presence or absence of messages: for instance, "0" for an odd message time interval and "1" for an even one.
This scenario transmits covert information through a "when-is-sent" strategy and is therefore termed a covert timing channel. The time to transmit a block of data is calculated as a function of software processing time, network speed, network block sizes and protocol overhead. Assuming blocks of various sizes are transmitted on the LAN, the software overhead is computed on average, and a novel time evaluation is used to estimate the bandwidth (capacity) of the covert channels. The work paves the way for future research.
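A sketch of the second of Girling's storage channels (frame sizes): each covert byte is carried purely by the length of an otherwise meaningless datagram, so a wiretapper who sees only traffic metadata can read it. The address, port, and length-to-value mapping are assumptions for illustration:

```python
import socket

def leak(data: bytes, dst=("192.0.2.10", 9999)) -> None:
    # Each covert byte b is sent as a frame with payload length b + 1
    # (1..256); the wiretapper decodes b = observed_length - 1.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for b in data:
        s.sendto(bytes(b + 1), dst)   # payload is all zeros; only its size matters
    s.close()

leak(b"hi")   # two datagrams with payloads of 105 and 106 bytes
```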
Data hiding in TCP/IP Protocol suite by covert channels
Focusing on the IP and TCP headers of TCP/IP Protocol suite, an article published by Craig Rowland devises proper encoding and decoding techniques by utilizing the IP identification field, the TCP initial sequence number and acknowledge sequence number fields. These techniques are implemented in a simple utility written for Linux systems running version 2.0 kernels.
Rowland provides a proof of concept as well as practical encoding and decoding techniques for exploitation of covert channels using the TCP/IP protocol suite. These techniques are analyzed in the light of security mechanisms such as firewalls and network address translation.
However, the non-detectability of these covert communication techniques is questionable. For instance, in the case where the sequence-number field of the TCP header is manipulated, the encoding scheme is such that every time the same letter is covertly communicated, it is encoded with the same sequence number.
Moreover, the sequence-number and acknowledgment fields cannot be dedicated to the ASCII coding of the English alphabet as proposed, since both fields track the receipt of data bytes belonging to specific network packets.
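To make the determinism objection concrete, here is a sketch of the style of IP Identification-field encoding commonly attributed to Rowland's covert_tcp utility; the scale factor of 256 is an assumption of this sketch. Repeated letters map to identical field values, which is what statistical detectors key on:

```python
def encode_ip_id(ch: str) -> int:
    # One ASCII character scaled into the 16-bit IP Identification field.
    return (ord(ch) * 256) & 0xFFFF

def decode_ip_id(ip_id: int) -> str:
    return chr(ip_id >> 8)

ids = [encode_ip_id(c) for c in "HELLO"]
assert ids[2] == ids[3]   # both 'L's yield the same field value -- detectable
assert "".join(decode_ip_id(i) for i in ids) == "HELLO"
```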
After Rowland, several authors in academia published more work on covert channels in the TCP/IP protocol suite, including a plethora of countermeasures ranging from statistical approaches to machine learning. The research on network covert channels overlaps with the domain of network steganography, which emerged later.
See also
References
Further reading
Timing Channels an early exploitation of a timing channel in Multics.
Covert channel tool hides data in IPv6, SecurityFocus, August 11, 2006.
An open online class on covert channels (GitHub)
External links
Gray-World - Open Source Research Team : Tools and Papers
Stealth Network Operations Centre - Covert Communication Support System
Steganography
Computer security exploits | Covert channel | [
"Technology"
] | 1,972 | [
"Computer security exploits"
] |