| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
18,468,216 | https://en.wikipedia.org/wiki/Shearing%20%28manufacturing%29 | Shearing, also known as die cutting, is a process that cuts stock without the formation of chips or the use of burning or melting. Strictly speaking, if the cutting blades are straight the process is called shearing; if the cutting blades are curved then they are shearing-type operations. The most commonly sheared materials are in the form of sheet metal or plates. However, rods can also be sheared. Shearing-type operations include blanking, piercing, roll slitting, and trimming. It is used for metal, fabric, paper and plastics.
Principle
A punch (or moving blade) is used to push the workpiece against the die (or fixed blade), which is held stationary. Usually the clearance between the two is 5 to 40% of the thickness of the material, depending on the material. Clearance is defined as the separation between the blades, measured at the point where the cutting action takes place and perpendicular to the direction of blade movement. It affects the finish of the cut (burr) and the machine's power consumption. The punch causes the material to experience highly localized shear stresses between the punch and die. The material then fails when the punch has moved 15 to 60% of the way through the material's thickness, because the shear stresses exceed the shear strength of the material, and the remainder of the material is torn.
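As a quick illustration of these rules of thumb, the following C sketch (the 2.0 mm sheet thickness is assumed here, not taken from the article) turns the 5–40% clearance and 15–60% penetration figures into absolute numbers:

/* A minimal sketch: clearance and penetration bands for an assumed sheet. */
#include <stdio.h>

int main(void)
{
    double thickness_mm = 2.0;  /* hypothetical stock thickness */

    /* Clearance between punch and die: 5 to 40% of thickness. */
    printf("clearance: %.2f to %.2f mm\n",
           0.05 * thickness_mm, 0.40 * thickness_mm);

    /* Punch travel at which the material fails: 15 to 60% of thickness. */
    printf("punch penetration at fracture: %.2f to %.2f mm\n",
           0.15 * thickness_mm, 0.60 * thickness_mm);
    return 0;
}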
Two distinct sections can be seen on a sheared workpiece, the first part being plastic deformation and the second being fractured. Because of normal inhomogeneities in materials and inconsistencies in clearance between the punch and die, the shearing action does not occur in a uniform manner. The fracture will begin at the weakest point and progress to the next weakest point until the entire workpiece has been sheared; this is what causes the rough edge. The rough edge can be reduced if the workpiece is clamped from the top with a die cushion. Above a certain pressure, the fracture zone can be completely eliminated. However, the sheared edge of the workpiece will usually experience work-hardening and cracking. If the workpiece has too much clearance, then it may experience roll-over or heavy burring.
Tool materials
Low-alloy steel is used in low production runs for materials up to 0.64 cm (1/4 in) thick
High-carbon, high-chromium steel is used in high production runs for materials that also range up to 0.64 cm (1/4 in) in thickness
Shock-resistant steel is used for materials 0.64 cm (1/4 in) thick or more
Tolerances and surface finish
When shearing a sheet, the typical tolerance is ±0.1 inch, but it is feasible to hold the tolerance to within ±0.005 inch. When shearing a bar or angle, the typical tolerance is ±0.06 inch, but it is possible to hold ±0.03 inch. Surface finishes typically fall within the 250 to 1000 microinch range, but can range from 125 to 2000 microinches. A secondary operation is required if a better surface finish is needed.
See also
Alligator shear
Shear (sheet metal)
Stamping (metalworking)
References
Citations
General sources
External links
Shearing Capacity Guide
Cutting machines
Fabrication (metal)
Metalworking cutting tools
Machine tool builders | Shearing (manufacturing) | Physics,Technology | 693 |
74,398,303 | https://en.wikipedia.org/wiki/Periscope%20lens | A periscope lens, sometimes called a folded lens, is a mechanical assembly of lens elements that uses a prism or mirror to redirect the light through the lenses with a 90° angle to the optical axis, as in a periscope.
Uses
The Kenworthy/Netman Snorkel Camera System, introduced in 1967 by Norman Paul Kenworthy and Bob Nettman, uses periscope lenses to allow filming very small scale models and objects from a very close distance.
Smartphones use periscope lenses to allow larger zoom ratios without increasing their thickness too much. The increased optical zoom range is also aimed at improving macro photography. With a periscope lens, the zoom lenses are turned by 90° and are aligned along the length or the width of the smartphone instead of its depth. The Sharp 902, released in 2004, is sometimes credited as the first mobile phone to feature a (2x variable zoom) periscope lens camera. The Asus ZenFone Zoom smartphone, released in 2015, used a Hoya dual-periscope lens mechanism to achieve a 3x zoom. In 2019, the Huawei P30 Pro featured a 5x zoom periscope lens. In 2020, the Huawei P40 Pro+ introduced a 10x zoom periscope lens camera.
See also
Folded optics
List of longest smartphone telephoto lenses
References
Telescopes
Film and video terminology | Periscope lens | Astronomy | 298 |
36,888,225 | https://en.wikipedia.org/wiki/HD%2091496 | HD 91496 (HR 4142) is a giant star in the constellation Carina, with an apparent magnitude is 4.92 and an MK spectral class of K4/5 III. It has been suspected of varying in brightness, but this has not been confirmed.
HD 91496 has a faint companion, six magnitudes fainter and away. It is a distant background star.
References
Carina (constellation)
K-type giants
Durchmusterung objects
091496
051495
Carinae, 204
4142 | HD 91496 | Astronomy | 111 |
12,108,868 | https://en.wikipedia.org/wiki/Lapachol | Lapachol is a natural phenolic compound isolated from the bark of the lapacho tree. This tree is known botanically as Handroanthus impetiginosus, but was formerly known by various other botanical names such as Tabebuia avellanedae. Lapachol is also found in other species of Handroanthus.
Lapachol is usually encountered as a yellow, skin-irritating powder from wood. Chemically, it is a derivative of vitamin K.
Once studied as a possible treatment for some types of cancer, it is now considered too toxic for use.
See also
Hooker oxidation
References
1,4-Naphthoquinones
Hydroxynaphthoquinones
Plant toxins
Terpeno-phenolic compounds | Lapachol | Chemistry | 160 |
16,189,704 | https://en.wikipedia.org/wiki/Project%20Valkyrie | The Valkyrie is a theoretical spacecraft designed by Charles Pellegrino and Jim Powell (a physicist at Brookhaven National Laboratory). The Valkyrie is theoretically able to accelerate to 92% the speed of light and decelerate afterward, carrying a small human crew to another star system.
Design
The Valkyrie's high performance is attributable to its innovative design. Instead of a solid spacecraft with a rocket at the back, Valkyrie is built more like a cable car train, with the crew quarters, fuel tanks, radiation shielding, and other vital components being pulled between front and aft engines on long tethers. This greatly reduces the mass of the ship, because it no longer requires heavy structural members and radiation shielding. This is a considerable advantage because in a rocket every extra kilogram of payload (dry mass) will require a corresponding extra amount of propellant or fuel.
The Valkyrie would have a crew module trailing 10 kilometers behind the engine. A small 20-cm-thick tungsten shield would hang 100 meters behind the engine, to help protect the trailing crew module from its harmful radiation. The fuel tank might be placed between the crew module and the engine, to further protect it. At the trailing end of the ship would be a second engine, which the ship would use to decelerate. The forward engine and the tank holding its fuel supply might be jettisoned before deceleration, to reduce fuel consumption. The tether system requires that the elements of the ship must be moved "up" or "down" the tethers depending on flight direction.
Engines
Initially, the Valkyrie's engine would work by using small quantities of antimatter to initiate an extremely energetic fusion reaction. A magnetic coil captures the exhaust products of this reaction, expelling them with an exhaust velocity of 12–20% of the speed of light (35,000–60,000 km/s). As the spacecraft approaches 20% of the speed of light, more antimatter is fed into the engines until the drive switches over to pure matter-antimatter annihilation. It will use this mode to accelerate the remainder of the way to 0.92 c. Pellegrino estimates that the ship would require 100 tons of matter and antimatter to reach 0.1–0.2 c, with an undetermined excess of matter to ensure the antimatter is efficiently utilized. To reach a speed of 0.92 c and decelerate afterward, Valkyrie would require a mass ratio of 22 (or 2200 tons of fuel for a 100-ton spacecraft).
At such high speeds, incident debris would be a major hazard. While accelerating, Valkyrie uses a device that combines the functions of a particle shield and a liquid droplet radiator. Waste heat is dumped into liquid droplets that are cast out in front of the ship. As the ship accelerates the droplets (now cool) effectively fall back into the ship, so the system is self-recycling. During deceleration, the ship will be protected by ultra-thin umbrella shields, augmented by a dust shield, possibly made by grinding up pieces of the discarded first stage.
Criticism
The chief feasibility issue of Valkyrie (or for any antimatter-beam drive) lies in its requirement of tons of antimatter fuel. Antimatter cannot be produced at an efficiency of more than 50% (that is to say, to produce one gram of antimatter requires twice as much energy as you would get from annihilating that gram with a gram of matter). Since half a kilogram of antimatter would yield 9×10¹⁶ J if annihilated with an equal amount of matter, this quickly adds up to enormous energy requirements for its production. To produce the 50 tons of antimatter, Valkyrie would require 1.8×10²² J. This is the same amount of energy that the entire human race currently uses in about forty years.
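Both figures follow directly from E = mc² together with the stated 50% production efficiency; a short C sketch reproducing the arithmetic (constants only, no values beyond those in the text):

/* Checks the two energy figures quoted above from E = m c^2. */
#include <stdio.h>

int main(void)
{
    const double c = 2.998e8;            /* speed of light, m/s */

    /* 0.5 kg antimatter + 0.5 kg matter annihilated: 1 kg total. */
    double e_1kg = 1.0 * c * c;          /* ~9.0e16 J */

    /* Producing 50 t of antimatter takes twice the energy released by
       annihilating it with 50 t of matter (100 t = 1e5 kg total). */
    double e_prod = 2.0 * 1.0e5 * c * c; /* ~1.8e22 J */

    printf("annihilation of 1 kg: %.2e J\n", e_1kg);
    printf("production of 50 t antimatter: %.2e J\n", e_prod);
    return 0;
}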
This may be solved by creating a truly enormous power plant for the antimatter factory, probably in the form of a vast array of solar panels with a combined area of millions of square kilometers, or many fusion reactors. Alternatively, the antimatter-fusion hybrid drive the Valkyrie uses to accelerate up to 0.2 c would require much less antimatter and, with an exhaust velocity of 30,000–60,000 km·s⁻¹, still compares quite favorably with competing engines such as the inertial confinement pulse drive used by Project Daedalus or Project Orion. The Valkyrie's lightweight construction could also be applied to a wide variety of space vehicles.
Because tethers are used, there is no rigidity between the ship elements and the engines. Without active acceleration or thrust to pull and straighten the tethers, the slightest imbalance, excess force, or movement of ship elements into different flight configurations poses a danger of collisions between ship elements and engines. Since long-term space flight at interstellar velocities causes erosion due to collisions with particles, gas, dust and micrometeorites, the tethers are literally lifelines. Changing course or turning the ship requires re-positioning or aligning every ship element and presumably consumes more fuel in doing so.
As the liquid droplet radiators (LDR) are deployed on the opposite side of the propulsion system from the main body, the droplets and the collectors are exposed to the other half of the heat energy from the gamma radiation of the antimatter annihilation. If the total area of the collectors is larger than the radiation shield, the LDR would serve to cool itself rather than the shield for the ship's main components.
Trivia
A superficially similar interstellar spacecraft is featured in the movie Avatar.
See also
Project Prometheus
Project Longshot
References
External links
Valkyrie Edited Guide Entry (BBC.com)
Valkyrie at Atomic Rockets
Hypothetical spacecraft
Interstellar travel
Antimatter | Project Valkyrie | Physics,Astronomy,Technology | 1,185 |
71,881,497 | https://en.wikipedia.org/wiki/Arsaalkyne | In chemistry, an arsaalkyne is chemical compound with a triple bond between carbon and arsenic. These organoarsenic compounds are rare, especially in comparison with the phosphaalkynes. The parent HCAs has been characterized spectroscopically, otherwise the only arsaalkynes have bulky organic substituents.
Synthesis and isolation
Arsaalkynes are produced by dehydrohalogenation or related base-induced elimination reactions. The case of HCAs is illustrative:
Owing to the principles of the double bond rule, arsaalkynes tend to oligomerize more readily than the phosphorus analogues. Thus attempts to prepare AsCCMe3 produce the tetramer, which has a cubane structure. The very bulky substituent C6H2-2,4,6-(t-Bu)3 does however allow the crystallization of the monomeric arsaalkyne. Its As-C bond length is 1.657(7) Å.
See also
Cyaarside
References
Functional groups
Organoarsenic compounds | Arsaalkyne | Chemistry | 224 |
58,867,428 | https://en.wikipedia.org/wiki/AR%20Andromedae | AR Andromedae (AR And) is a dwarf nova of the SS Cygni type in the constellation Andromeda. Its typical apparent visual magnitude is 17.6, but increases up to 11.0 magnitude during outbursts. The outbursts occur approximately every 23 days.
System
Dwarf nova systems consist of a classical star with a white dwarf companion. By measuring the Doppler shift of spectral lines, the system was found to have an orbital period of 3.91 hours. The accretion disk around the white dwarf appears to be axisymmetric and devoid of structure.
Variability
AR Andromedae was first listed as a variable star by Frank Elmore Ross in 1929, based on observations in 1907 (when the star was too faint to detect) and 1927 (when the star had flared to magnitude 12). It was initially classified as a Mira variable star. In 1934 it was given the variable star designation AR Andromedae.
The light emitted by dwarf novae like AR Andromedae comes entirely from the accretion disk and the white dwarf; the luminosity increase during outbursts is typically induced by a variation in the accretion rate onto the white dwarf. The outbursts are unusually frequent, with 19 outbursts detected by 2016.
Spectrum
The spectrum of AR Andromedae is classified as peculiar of the U Geminorum type, since it is not a typical stellar blackbody. It also shows strong emission in the first two Balmer lines as well as in He I lines. In addition, an unusually strong Fe II line, with other possible weak lines of the same origin, has also been reported.
References
Andromeda (constellation)
Andromedae, AR
J01450327+3756334
Dwarf novae | AR Andromedae | Astronomy | 364 |
40,429,776 | https://en.wikipedia.org/wiki/Siderin | Siderin is a coumarin derivative produced by Aspergillus versicolor, an endophytic fungus found in the green alga Halimeda opuntia in the Red Sea.
External links
Coumarins
Methoxy compounds | Siderin | Chemistry | 54 |
38,069,259 | https://en.wikipedia.org/wiki/The%20Birds%20%28sculpture%29 | The Birds comprises a pair of outdoor sculptures depicting house sparrows by Myfanwy MacLeod, installed after the 2010 Winter Olympics in Southeast False Creek Olympic Plaza, which served as the site of the 2010 Olympic Village in Vancouver, British Columbia, Canada. The work depicts one male and one female house sparrow, each approximately five metres tall, and was the first piece approved by the city's Olympic and Paralympic Public Art Program. It was inspired by Alfred Hitchcock's 1963 film of the same name, sustainability, the site's history as a shipyard, and immigration.
They were removed on November 23, 2017 for repairs, but later restored to their original location.
Background
The Birds was the first work of public art to be approved under the city's Olympic and Paralympic Public Art Program. It was installed in Southeast False Creek Olympic Plaza, which served as the site of the 2010 Olympic Village, in April 2010. MacLeod was inspired by Alfred Hitchcock's 1963 film of the same name. She has said of the piece, "I think it boils down to wanting to make something sublime for the plaza – that is something beautiful, but frightening at the same time."
The Birds was also inspired by sustainability and the site's history as a shipyard, where sailors often wore sparrow tattoos. The sculptures have been called an "ode to immigration" based on MacLeod's interest in "alien species" and when non-native species are introduced to an environment (the house sparrow is not native to North America). MacLeod said: "My work for the Olympic Village tries to infuse the ordinary and commonplace sparrow with a touch of the ridiculous and the sublime. Locating this artwork in an urban plaza not only highlights what has become the 'natural' environment of the sparrow, it also reinforces the 'small' problem of introducing a foreign species and the subsequent havoc wreaked upon our ecosystems."
The sculptures were produced by Heavy Industries, with some bodywork completed by Semi-Rigid Plastic Parts Repair. The Birds was the last of the public art program's works to be installed, despite being the first approved.
Description
The work depicts one male and one female house sparrow, each approximately five metres tall. The birds have been described as realistic and "massive yet friendly-looking". Their bodies are made from hard-coated expanded polystyrene (EPS) foam, coated with a polyurea skin and airbrush painted, all clad around a steel armature. Their cast bronze legs were sealed with wax. According to Heavy, the EPS form pieces were "glued together with pressure-sensitive adhesive, specifically formulated for bonding EPS to itself and other materials".
Reception
Marsha Lederman of The Globe and Mail called the sculptures "huge and intimidating and a bit creepy", but acknowledged the artist's intent. Explaining the connection between immigration and sparrows specifically, she wrote that the birds were introduced "to satisfy cultural nostalgia for homesick Europeans. The birds were alien, exotic. They're now ubiquitous. Part of the everyday landscape. Barely noticed… Not these." Tuija Seipell of Jaunted said the sparrows were reminiscent of the bird that Flick and the Blueberries build to scare the Hoppers in the 1998 film A Bug's Life.
See also
"The Birds" (story), a 1952 novelette by Daphne du Maurier and the inspiration for Hitchcock's film
References
External links
Myfanwy MacLeod’s The Birds at Olympic Village Point to Fragile Biodiversity (June 2, 2010), Vancouver 21
2010 establishments in British Columbia
2010 sculptures
Sculptures of birds in Canada
Outdoor sculptures in Vancouver
Colossal statues | The Birds (sculpture) | Physics,Mathematics | 746 |
19,700,525 | https://en.wikipedia.org/wiki/Hypertree%20network | A hypertree network is a network topology that shares some traits with the binary tree network. It is a variation of the fat tree architecture.
A hypertree of degree k and depth d may be visualized as a 3-dimensional object whose front view is the top-down complete k-ary tree of depth d and whose side view is the bottom-up complete binary tree of depth d.
Hypertrees were proposed in 1981 by James R. Goodman and Carlo H. Séquin.
Hypertrees are a choice for parallel computer architecture, used, e.g., in the Connection Machine CM-5.
References
Network topology | Hypertree network | Mathematics,Technology | 127 |
11,205,258 | https://en.wikipedia.org/wiki/Modified%20condition/decision%20coverage | Modified condition/decision coverage (MC/DC) is a code coverage criterion used in software testing.
Overview
MC/DC requires all of the below during testing:
Each entry and exit point is invoked
Each decision takes every possible outcome
Each condition in a decision takes every possible outcome
Each condition in a decision is shown to independently affect the outcome of the decision.
Independence of a condition is shown by proving that only one condition changes at a time.
MC/DC is used in avionics software development guidance DO-178B and DO-178C to ensure adequate testing of the most critical (Level A) software, which is defined as that software which could provide (or prevent failure of) continued safe flight and landing of an aircraft. It is also highly recommended for SIL 4 in part 3 Annex B of the basic safety publication and ASIL D in part 6 of automotive standard ISO 26262.
Additionally, NASA requires 100% MC/DC coverage for any safety critical software component in Section 3.7.4 of NPR 7150.2D.
Definitions
Condition A condition is a leaf-level Boolean expression (it cannot be broken down into simpler Boolean expressions).
Decision A Boolean expression composed of conditions and zero or more Boolean operators. A decision without a Boolean operator is a condition. A decision does not imply a change of control flow, e.g. an assignment of a boolean expression to a variable is a decision for MC/DC.
Condition coverage Every condition in a decision in the program has taken all possible outcomes at least once.
Decision coverage Every point of entry and exit in the program has been invoked at least once, and every decision in the program has taken all possible outcomes at least once.
Condition/decision coverage Every point of entry and exit in the program has been invoked at least once, every condition in a decision in the program has taken all possible outcomes at least once, and every decision in the program has taken all possible outcomes at least once.
Modified condition/decision coverage Every point of entry and exit in the program has been invoked at least once, every condition in a decision in the program has taken all possible outcomes at least once, and each condition has been shown to affect that decision outcome independently. A condition is shown to affect a decision's outcome independently by varying just that condition while holding fixed all other possible conditions. The condition/decision criterion does not guarantee the coverage of all conditions in the module because in many test cases, some conditions of a decision are masked by the other conditions. Using the modified condition/decision criterion, each condition must be shown to be able to act on the decision outcome by itself, everything else being held fixed. The MC/DC criterion is thus much stronger than the condition/decision coverage.
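To make the independence requirement concrete, consider the decision (a && b) || c with three conditions. The following C sketch is an illustration constructed here (not taken from DO-178 or any standard); it lists a minimal set of n + 1 = 4 test vectors, where for each condition there is a pair of rows that differ only in that condition and produce different decision outcomes:

/* MC/DC for (a && b) || c needs only 4 of the 8 input combinations. */
#include <stdbool.h>
#include <stdio.h>

static bool decision(bool a, bool b, bool c) { return (a && b) || c; }

int main(void)
{
    /* Rows {a, b, c}. Independence pairs:
       a: rows 0 and 1 (only a changes, outcome true -> false)
       b: rows 0 and 2 (only b changes, outcome true -> false)
       c: rows 2 and 3 (only c changes, outcome false -> true) */
    bool t[4][3] = {
        {true,  true,  false},  /* -> true  */
        {false, true,  false},  /* -> false */
        {true,  false, false},  /* -> false */
        {true,  false, true }   /* -> true  */
    };
    for (int i = 0; i < 4; i++)
        printf("a=%d b=%d c=%d -> %d\n",
               t[i][0], t[i][1], t[i][2],
               decision(t[i][0], t[i][1], t[i][2]));
    return 0;
}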
Criticism
It is a misunderstanding that purely syntactic rearrangements of decisions (breaking them into several independently evaluated conditions using temporary variables, the values of which are then used in the decision), which do not change the semantics of a program, can lower the difficulty of obtaining complete MC/DC coverage.
This is because MC/DC is driven by the program syntax. However, this kind of "cheating" can be done to simplify expressions, not simply to avoid MC/DC complexities. For example, assignment of the number of days in a month (excluding leap years) could be achieved by using either a switch statement or by using a table with an enumeration value as an index. The number of tests required based on the source code could be considerably different depending upon the coverage required, although semantically we would want to test both approaches with a minimum number of tests.
Another example that could be considered "cheating" to achieve higher MC/DC is shown in the two functions below:

/* Function A */
void function_a (int a, bool b, bool c, bool d, bool e, bool f)
{
if (a == 100)
{
if (b || c)
// statement 1
if (d || e || f)
// statement 2
}
}

/* Function B */
void function_b (int a, bool b, bool c, bool d, bool e, bool f)
{
bool a_is_equal_to_100 = a == 100 ;
bool b_or_c = b || c ;
bool d_or_e_or_f = d || e || f ;
if (a_is_equal_to_100)
{
if (b_or_c)
// statement 1
if (d_or_e_or_f)
// statement 2
}
}

If the definition of a decision is treated as if it were a boolean expression that changes the control flow of the program (the text in brackets in an 'if' statement), then one may think that Function B is likely to have higher MC/DC than Function A for a given set of test cases (easier to test because it needs fewer tests to achieve 100% MC/DC coverage), even though functionally both are the same.
However, what is wrong in the previous statement is the definition of decision. A decision includes 'any' boolean expression, even for assignments to variables. In this case, the three assignments should be treated as decisions for MC/DC purposes, and therefore the changed code needs exactly the same tests and number of tests to achieve MC/DC as the first one. Some code coverage tools do not use this strict interpretation of a decision and may produce false positives (reporting 100% code coverage when indeed this is not the case).
RC/DC
In 2002 Sergiy Vilkomir proposed reinforced condition/decision coverage (RC/DC) as a stronger version of the MC/DC coverage criterion that is suitable for safety-critical systems.
Jonathan Bowen and his co-author analyzed several variants of MC/DC and RC/DC and concluded that at least some MC/DC variants have superior coverage over RC/DC.
See also
Elementary comparison testing
References
External links
What is a "Decision" in Application of Modified Condition/Decision Coverage (MC/DC) and Decision Coverage (DC)? (May 1, 2020 archive)
An Investigation of Three Forms of the Modified Condition Decision Coverage (MCDC) Criterion
Software testing | Modified condition/decision coverage | Engineering | 1,286 |
935,841 | https://en.wikipedia.org/wiki/Albert%20W.%20Tucker | Albert William Tucker (28 November 1905 – 25 January 1995) was a Canadian mathematician who made important contributions in topology, game theory, and non-linear programming.
Early life and education
Albert Tucker was born in Oshawa, Ontario, Canada, and earned his B.A. at the University of Toronto in 1928 and his M.A. at the same institution in 1929. In 1932, he earned his Ph.D. at Princeton University under the supervision of Solomon Lefschetz, with a dissertation entitled An Abstract Approach to Manifolds. In 1932–33 he was a National Research Fellow at Cambridge, Harvard, and then University of Chicago.
Career
Tucker then returned to Princeton to join the faculty in 1933, where he stayed until 1974. He chaired the mathematics department for about twenty years, one of the longest tenures. His extensive relationships within the field made him a great source for oral histories of the mathematics community.
In 1950, Albert Tucker gave the name and interpretation "prisoner's dilemma" to Merrill M. Flood and Melvin Dresher's model of cooperation and conflict, resulting in the most well-known game theoretic paradox. He is also well known for the Karush–Kuhn–Tucker conditions, a basic result in non-linear programming, which was published in conference proceedings, rather than in a journal.
In the 1960s, he was heavily involved in mathematics education, as chair of the AP Calculus committee for the College Board (1960–1963), through work with the Committee on the Undergraduate Program in Mathematics (CUPM) of the MAA (he was president of the MAA in 1961–1962), and through many NSF summer workshops for high school and college teachers. George B. Thomas Jr. acknowledged Tucker's contribution of many exercises to Thomas's classic textbook, Calculus and Analytic Geometry.
In the early 1980s, Tucker recruited Princeton history professor Charles Coulston Gillispie to help him set up an oral history project to preserve stories about the Princeton mathematical community in the 1930s. With funding from the Sloan Foundation, this project later expanded its scope. Among those who shared their memories of such figures as Einstein, von Neumann, and Gödel were computer pioneer Herman Goldstine and Nobel laureates John Bardeen and Eugene Wigner.
Students and legacy
Tucker's Ph.D. students include Michel Balinski, David Gale, Alan J. Goldman, John Isbell, Stephen Maurer, Turing Award winner Marvin Minsky, Nobel Prize winner John Nash, Torrence Parsons, Nobel Prize winner Lloyd Shapley, Robert Singleton, and Marjorie Stein. Tucker advised and collaborated with Harold W. Kuhn on a number of papers and mathematical models.
Tucker noticed the leadership ability and talent of a young mathematics graduate student named John G. Kemeny, whose hiring Tucker suggested to Dartmouth College. Following Tucker's advice, Dartmouth recruited Kemeny, who became Chair of the Mathematics Department and later College President. Years later, Dartmouth College recognized Albert Tucker with an honorary degree.
Tucker died in Hightstown, N.J. in 1995 at age 89. His sons, Alan Tucker and Thomas W. Tucker, and his grandson Thomas J. Tucker are all also professional mathematicians.
Tucker Prize
At each (triennial) International Symposium of the Mathematical Optimization Society (MOS), the Tucker Prize, in honour of A. W. Tucker, is given for an outstanding thesis in the area of discrete mathematics.
Works
with H. W. Kuhn (eds.): Contributions to the theory of games, Annals of Mathematical Studies 1950
with H. W. Kuhn (eds.): Linear inequalities and related systems, Annals of Mathematical Studies 1956
with Allan Gewirtz, Harry Sitomer: Constructive linear algebra, Englewood Cliffs 1974
with Evar Nering: Linear Programs and related problems, Academic Press 1993
References
Bibliography
Further reading
External links
News from PRINCETON UNIVERSITY
A Guide to Albert William Tucker Papers
Extract from an obituary
Kuhn-Tucker conditions
The Princeton Mathematics Community in the 1930s An oral history project initiated by Tucker, also contains a series of interviews with Tucker.
Oral History Interview with Albert W. Tucker, Charles Babbage Institute, University of Minnesota.
Biography of Albert W. Tucker from the Institute for Operations Research and the Management Sciences
1905 births
1995 deaths
20th-century Canadian mathematicians
Topologists
Game theorists
University of Toronto alumni
Harvard University staff
University of Chicago people
John von Neumann Theory Prize winners
People from Oshawa
Princeton University alumni
Princeton University faculty
Presidents of the Mathematical Association of America
Canadian emigrants to the United States
Oral history | Albert W. Tucker | Mathematics | 928 |
36,987,089 | https://en.wikipedia.org/wiki/Liliger | The liliger is the hybrid offspring of a male lion (Panthera leo) and a female liger (Panthera leo♂ × Panthera tigris♀). Thus, it is a second generation hybrid. In accordance with Haldane's rule, male tigons and ligers are sterile, but female ligers and tigons can produce cubs. The first such hybrid was born in 1943, at the Hellabrunn Zoo.
Description
Male liligers are slightly larger than the females, and also sport a mane, a characteristic they share with male lions. While a liger will often inherit the sandy coloring and stripes of its parentage, liligers often develop rosettes similar to a leopard.
History
According to Wild Cats of the World (1975) by C. A. W. Guggisberg, ligers and tigons were long thought to be sterile, but in 1943, a 15-year-old hybrid between a lion and an 'Island' tiger was successfully mated with a lion at the Munich Hellabrunn Zoo. The female cub was raised to adulthood despite its delicate health.
In September 2012, the Russian Novosibirsk Zoo announced the birth of a liliger. The cub was named Kiara, and was born to an 8-year-old female liger, Zita, and a male African lion, Sam. On May 16, 2013, the same couple produced three more female liligers: Luna, Sandra, and Eva.
A liliger was born in the United States from a lion named Simba and a ligress named Akaria at 6:18 AM on November 29, 2013, at The Garold Wayne Interactive Zoological Foundation in Oklahoma. At approximately 3:00 AM on November 30, 2013, the ligress gave birth to two more cubs.
Craig Packer, director of the Lion Research Center at the University of Minnesota has said "In terms of conservation, it's so far away from anything, it's kind of pointless to even say it's irrelevant." The Association of Zoos and Aquariums (AZA), the organization responsible for accrediting zoos in North America, neither approves of nor breeds the animals, because they focus on the conservation of wildlife and programs serving that purpose.
See also
Litigon
References
Panthera hybrids
Second-generation hybrids | Liliger | Biology | 481 |
45,503,186 | https://en.wikipedia.org/wiki/76P/West%E2%80%93Kohoutek%E2%80%93Ikemura | 76P/West–Kohoutek–Ikemura is a Jupiter-family periodic comet in the Solar System with a current orbital period of 6.48 years.
The comet was initially spotted on a photographic plate by Richard M. West at the European Southern Observatory Sky Atlas Laboratory, Geneva in January 1975, when it had a brightness of magnitude 12. Inability to predict its movement from a single image meant the comet had to be presumed lost.
In late February it was accidentally rediscovered by Lubos Kohoutek at the Hamburg Observatory, Germany, and independently on 1 March by Toshihiko Ikemura in Shinshiro, Japan. After further observations the comet's parabolic orbit was computed, which gave a perihelion date of 23 March 1975 and proved that all three sightings were of the same object, which was accordingly designated 76P/West–Kohoutek–Ikemura.
Further calculations by Brian G. Marsden determined the comet's elliptical orbit and revealed that it had passed only 0.012 AU from Jupiter on 22 March 1972. This close approach had reduced its orbital period from some 30 years to the current 6.48 years and its perihelion distance from 4.78 AU to 1.60 AU.
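These elements are mutually consistent under simple two-body mechanics: with the period in years, Kepler's third law gives the semi-major axis in AU, and the perihelion distance then fixes the eccentricity. A small C sketch of this check (the Kepler relation is standard; the printed eccentricity of about 0.54 is derived here, not quoted from the references):

/* Derives a and e from the current period and perihelion quoted above. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double P = 6.48;                 /* current orbital period, years */
    double q = 1.60;                 /* current perihelion distance, AU */
    double a = pow(P, 2.0 / 3.0);    /* semi-major axis, AU (Kepler III) */
    printf("a = %.2f AU, e = %.2f\n", a, 1.0 - q / a);
    return 0;
}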
The comet has been observed at its successive returns in 1987, 1993, 2000, 2006 and 2013.
Its nucleus is estimated to have an effective radius of 0.31 ± 0.01 kilometers and its rotational period is estimated to be 6.6 ± 1 hours.
See also
List of numbered comets
References
External links
Orbital simulation from JPL (Java) / Horizons Ephemeris
76P/West-Kohoutek-Ikemura – Seiichi Yoshida @ aerith.net
Periodic comets
0076
076P
076P
Astronomical objects discovered in 1975
Recovered astronomical objects | 76P/West–Kohoutek–Ikemura | Astronomy | 370 |
11,938,133 | https://en.wikipedia.org/wiki/DailyMed | DailyMed is a website operated by the U.S. National Library of Medicine (NLM) to publish up-to-date and accurate drug labels (also called a "package insert") to health care providers and the general public. The contents of DailyMed is provided and updated daily by the U.S. Food and Drug Administration (FDA). The FDA in turn collects this information from the pharmaceutical industry.
The documents published use the HL7 version 3 Structured Product Labeling (SPL) standard, which is an XML format that combines the human readable text of the product label with structured data elements that describe the composition, form, packaging, and other properties of the drug products in detail according to the HL7 Reference Information Model (RIM).
At last count, it contained information about 140,232 drug listings.
It includes an RSS feed for updated drug information.
History
In 2006 the FDA revised the drug label and also created DailyMed to keep prescription information up to date.
See also
Consumer Product Information Database, ingredients of household products
Environmental Working Group, which maintains a database of cosmetics ingredients
References
External links
labels.fda.gov Drug labels at FDA website
American medical websites
United States National Library of Medicine
Medical search engines
Medical databases
Online databases
Health informatics | DailyMed | Chemistry,Biology | 255 |
7,863,321 | https://en.wikipedia.org/wiki/HH%2046/47 | HH 46/47 is a complex of Herbig–Haro objects (HH objects), located around 450 parsecs (about 1,470 light-years) away in a Bok globule near the Gum nebula. Jets of partially ionized gas emerging from a young star produce visible shocks upon impact with the ambient medium. Discovered in 1977, it is one of the most studied HH objects and the first jet to be associated with young stars was found in HH 46/47. Four emission nebulae, HH 46, HH 47A, HH 47C and HH 47D and a jet, HH 47B, have been identified in the complex. It also contains a mostly unipolar molecular outflow, and two large bow shocks on opposite sides of the source star. The overall size of the complex is about 3 parsecs (10 light years).
History of observations
This object was discovered in 1977 by the American astronomer R. D. Schwartz. In accordance with the naming convention for HH objects, he named the two nebulae he found HH 46 and HH 47, as they were the 46th and 47th HH objects to be discovered. The jet and other nebulae were soon identified in the complex. This was the first jet to be discovered near a protostar. Prior to this, it was unclear how Herbig–Haro objects are formed. One model at that time suggested that they reflect light from embedded stars and hence are reflection nebulae. Based on spectral similarities between supernova remnants and HH objects, Schwartz theorized in 1975 that HH objects are produced by radiative shocks. In this model, stellar winds from T Tauri stars would collide with the surrounding medium and generate shocks leading to emission. With the discovery of the jet in HH 46/47, it became clear that HH objects were not reflection nebulae, but shock-driven emission nebulae powered by jets ejected from protostars. Due to its impact on the field of HH objects, its brightness and its collimated jet, it is one of the most studied HH objects. An image of a question mark-shaped feature associated with the object was reported on 18 August 2023 in The New York Times.
Formation
During early stages of formation, stars launch bipolar outflows of partially ionized material along the rotation axis. It is generally believed that the interaction of accretion disk magnetic fields with stellar magnetic fields propels some of the accreting material in the form of outflows. In some cases, outflow is collimated into jets. The source of HH 46/47 is a binary class I protostar located inside a dark cloud of gas and dust, undetectable at visual wavelengths. It is ejecting material at about 150 km/s into a bipolar jet which emerges from the cloud. Upon impacting the surrounding medium, the jet drives shocks in it, which lead to emission in the visible spectrum. Variations in eruptions result in different velocities of ejected material. This leads to shocks within the jet, as fast moving material from later ejections collides with slow moving material from earlier ejections. These shocks produce emissions, rendering the jet visible.
Properties
Although the outflow is bipolar, only one jet is visible at visual wavelengths. The counterjet is invisible as it is moving away from Earth into the dark cloud that hosts the star inside it. At infrared wavelengths, however, it is clearly visible. It terminates in HH 47C, a bright bow shock, as it interacts with the surrounding gas. HH 46 is located near the source and is an emission/reflection nebula; it emits light due to impacting jet material and also reflects light from the source. Its brightness changes radically in the course of years, which is directly related to the variability of the parent star. From HH 46 emerges HH 47B, a long and twisted jet which is blueshifted. The bent and twisted appearance of the outflow is caused by variations in the ejection direction, i.e., precession of the source star. The jet ends in HH 47, also called HH 47A, the brightest nebula in the complex. A little further away is the somewhat fainter and more diffuse HH 47D. The complex stretches across 0.57 parsecs from HH 47C to HH 47D on the sky plane. Two relatively large bow shocks appear at even larger distances, with HH 47SW lying on the far side of the receding lobe and HH 47NE lying on the near side of the approaching blueshifted lobe. Each of them is about 1.3 parsecs from the source star, making the whole complex appear 2.6 parsecs long in the sky plane. The whole structure is projected at approximately 30° with respect to the sky plane; this makes its actual length around 3 parsecs.
The combined luminosity of the source star and disk is about 24 times that of the Sun. It is accreting mass at the rate of per year. Mass loss rate in the approaching jet has been determined to be about per year, which is approximately 7% of the total mass accreted in a year. Around 3.6% of the total material in the jet is ionized and the average jet density is roughly 1400 cm⁻³. Shock velocity in the jet is about 34 km/s.
Eruptions from the star are episodic. The current episode has been ongoing for about a thousand years, while the previous episode started about 6,000 years ago and lasted for 3,000 to 4,000 years. Large eruptions in the current episode occur every 400 years. Based on the extent of the complex, the age of the source star has been estimated to be 10⁴ to 10⁵ years.
Molecular outflow
The jet emanating from the star is transferring momentum into the molecular gas surrounding it, which lifts up the gas. This results in a 0.3 parsec long molecular outflow around the jet. This outflow, however, is largely unipolar and aligned with the receding jet. Approaching molecular outflow is extremely weak, which is probably because the jet breaks out of the cloud and there is little material outside to be lifted up in the form of molecular outflow. Speeds in molecular flows are much less than in jets. Several organic and inorganic compounds have been detected in the molecular outflow including methane, methanol, water ice, carbon monoxide, carbon dioxide (dry ice) and various silicates. The presence of ices implies that the dusty shroud of the star is cool as opposed to the jet and shock regions where temperatures reach thousands of degrees.
See also
Hayashi track
HH 34
Pre-main-sequence star
Protoplanetary disk
Notes
References
External links
SIMBAD objects in HH 46/47
Herbig–Haro objects
Vela (constellation) | HH 46/47 | Astronomy | 1,412 |
182,842 | https://en.wikipedia.org/wiki/Nernst%20heat%20theorem | The Nernst heat theorem was formulated by Walther Nernst early in the twentieth century and was used in the development of the third law of thermodynamics.
The theorem
The Nernst heat theorem says that as absolute zero is approached, the entropy change ΔS for a chemical or physical transformation approaches 0. This can be expressed mathematically as follows:

lim (T→0) ΔS = 0
The above equation is a modern statement of the theorem. Nernst often used a form that avoided the concept of entropy.
Another way of looking at the theorem is to start with the definition of the Gibbs free energy (G), G = H − TS, where H stands for enthalpy. For a change from reactants to products at constant temperature and pressure the equation becomes ΔG = ΔH − TΔS.
In the limit of T = 0 the equation reduces to just ΔG = ΔH, as illustrated in the figure shown here, which is supported by experimental data. However, it is known from thermodynamics that the slope of the ΔG curve is -ΔS. Since the slope shown here reaches the horizontal limit of 0 as T → 0 then the implication is that ΔS → 0, which is the Nernst heat theorem.
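In symbols, the same slope argument can be written out compactly, using only the relations already given above:

    (∂(ΔG)/∂T)_P = −ΔS,  and since ΔG → ΔH with zero slope as T → 0,
    it follows that  lim (T→0) ΔS = 0,

which is exactly the Nernst heat theorem.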
The significance of the Nernst heat theorem is that it was later used by Max Planck to give the third law of thermodynamics, which is that the entropy of all pure, perfectly crystalline homogeneous materials in complete internal equilibrium is 0 at absolute zero.
See also
Theodore William Richards
Entropy
References and notes
Further reading
- See especially pages 421 – 424
External links
Nernst heat theorem
Thermochemistry
Walther Nernst
de:Nernst-Theorem | Nernst heat theorem | Chemistry | 342 |
37,252,648 | https://en.wikipedia.org/wiki/Moderne%20Algebra | Moderne Algebra is a two-volume German textbook on graduate abstract algebra by , originally based on lectures given by Emil Artin in 1926 and by from 1924 to 1928. The English translation of 1949–1950 had the title Modern algebra, though a later, extensively revised edition in 1970 had the title Algebra.
The book was one of the first textbooks to use an abstract axiomatic approach to groups, rings, and fields, and was by far the most successful, becoming the standard reference for graduate algebra for several decades. It "had a tremendous impact, and is widely considered to be the major text on algebra in the twentieth century."
In 1975 van der Waerden described the sources he drew upon to write the book.
In 1997 Saunders Mac Lane recollected the book's influence:
Upon its publication it was soon clear that this was the way that algebra should be presented.
Its simple but austere style set the pattern for mathematical texts in other subjects, from Banach algebras to topological group theory.
[Van der Waerden's] two volumes on modern algebra ... dramatically changed the way algebra is now taught by providing a decisive example of a clear and perspicacious presentation. It is, in my view, the most influential text of algebra of the twentieth century.
Publication history
Moderne Algebra has a rather confusing publication history, because it went through many different editions, several of which were extensively rewritten with chapters and major topics added, deleted, or rearranged. In addition the new editions of first and second volumes were issued almost independently and at different times, and the numbering of the English editions does not correspond to the numbering of the German editions. In 1955 the title was changed from "Moderne Algebra" to "Algebra" following a suggestion of Brandt, with the result that the two volumes of the third German edition do not even have the same title.
For volume 1, the first German edition was published in 1930, the second in 1937 (with the axiom of choice removed), the third in 1951 (with the axiom of choice reinstated, and with more on valuations). The fourth edition appeared in 1955 (with the title changed to Algebra), the fifth in 1960, the sixth in 1964, the seventh in 1966, the eighth in 1971, the ninth in 1993. For volume 2, the first edition was published in 1931, the second in 1940, the third in 1955 (with the title changed to Algebra), the fourth in 1959 (extensively rewritten, with elimination theory replaced by algebraic functions of 1 variable), the fifth in 1967, and the sixth in 1993. The German editions were all published by Springer.
The first English edition was published in 1949–1950 and was a translation of the second German edition. There was a second edition in 1953, and a third edition under the new title Algebra in 1970 translated from the 7th German edition of volume 1 and the 5th German edition of volume 2. The three English editions were originally published by Ungar, though the 3rd English edition was later reprinted by Springer.
There were also Russian editions published in 1976 and 1979, and Japanese editions published in 1959 and 1967–1971.
References
History of mathematics
Mathematics textbooks
1930 non-fiction books
Abstract algebra
Springer Science+Business Media books | Moderne Algebra | Mathematics | 665 |
78,442,020 | https://en.wikipedia.org/wiki/Sisunatovir | Sisunatovir is an investigational new drug that is being evaluated for the treatment of respiratory syncytial virus (RSV) infections. It functions as an orally administered RSV fusion inhibitor, targeting the RSV-F protein on the viral surface to prevent viral replication. Sisunatovir has been granted Fast Track designation by the U.S. Food and Drug Administration (FDA) due to its potential to address serious RSV infections, which can lead to severe respiratory conditions such as bronchiolitis and pneumonia.
References
Antiviral drugs
Amines
Benzimidazoles
Cyclopropanes
Organofluorides
Oxindoles
Spiro compounds | Sisunatovir | Chemistry,Biology | 138 |
43,948,738 | https://en.wikipedia.org/wiki/Promethium%28III%29%20chloride | Promethium(III) chloride is a chemical compound of promethium and chlorine with the formula PmCl3. It is an ionic, water soluble, crystalline salt that glows in the dark with a pale blue or green light due to promethium's intense radioactivity.
Preparation
Promethium(III) chloride is obtained from promethium(III) oxide by heating it in a stream of dry HCl at 580 °C.
Properties
Promethium(III) chloride is a purple solid with a melting point of 655 °C. It crystallizes in the hexagonal crystal system (NdCl3 type) with the lattice parameters a = 739 pm and c = 421 pm, with two formula units per unit cell and thus a calculated density of 4.19 g·cm⁻³. When PmCl3 is heated in the presence of H2O, the pale pink colored promethium(III) oxychloride (PmOCl) is obtained.
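As a consistency check, the quoted density follows from these lattice parameters via ρ = Z·M/(N_A·V), with the hexagonal cell volume V = (√3/2)·a²·c. A short C sketch of the calculation (molar masses assumed here: Pm ≈ 145, Cl ≈ 35.45):

/* Recomputes the crystallographic density from the quoted cell data. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 739e-10, c = 421e-10;   /* lattice parameters, cm */
    double Z = 2.0;                    /* formula units per unit cell */
    double M = 145.0 + 3 * 35.45;      /* g/mol for PmCl3 (assumed masses) */
    double NA = 6.022e23;              /* Avogadro's number, 1/mol */

    /* Hexagonal cell volume: (sqrt(3)/2) * a^2 * c. */
    double V = (sqrt(3.0) / 2.0) * a * a * c;
    printf("density = %.2f g/cm^3\n", Z * M / (NA * V));  /* ~4.19 */
    return 0;
}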
Applications
Promethium(III) chloride (with 147Pm) has been used to generate long-lasting glow in signal lights and buttons. This application relied on the unstable nature of promethium, which emitted beta radiation (electrons) with a half-life of several years. The electrons were absorbed by a phosphor, generating visible glow. Unlike many other radioactive nuclides, promethium-147 does not emit alpha particles that would degrade the phosphor.
References
Promethium compounds
Chlorides
Lanthanide halides | Promethium(III) chloride | Chemistry | 318 |
63,790,350 | https://en.wikipedia.org/wiki/Eva%20Smolkov%C3%A1-Keulemansov%C3%A1 | Eva Smolková-Keulemansová, Weilová (27 April 1927 – 27 February 2024) was a survivor of Auschwitz, Neuengamme, and Bergen-Belsen concentration camps. After her liberation, she became a renowned Czech scientist and professor of analytical science at Charles University in Prague.
Early life
Smolková-Keulemansová was born on 27 April 1927, in Prague, the Czech Republic (then Czechoslovakia), to a Jewish family. She had a normal childhood in Czechoslovakia as the only child of her parents, Alice and Oskar. She finished primary school and had started grammar school, but was taken out of school by her father after anti-Jewish laws started applying to grammar schools. She was employed at various Jewish workshops after leaving school.
The Holocaust
On 6 March 1943, Eva and her parents were transported to the Theresienstadt Ghetto in Terezín, where her father was separated from her and her mother. In Terezín, she worked in agriculture, so she was able to go into the ghetto and managed to make contact with her father. In December 1943, she and her mother were transferred to Auschwitz concentration camp, where she stayed until June 1944. After six months of horrible conditions at Auschwitz, Eva and her mother were unexpectedly recognized as able to work and were relocated to the Dessauer Ufer camp of the Neuengamme concentration camp in Hamburg, where she experienced better conditions. Her final transport was without her mother to Bergen-Belsen concentration camp in April 1945, which was liberated the same month.
After Liberation
Smolková-Keulemansová suffered from dysentery, jaundice, typhus and tuberculosis after liberation. Because she could not give the International Red Cross the address of anyone she knew in Prague, she was not allowed to return to her country of origin. To receive medical treatment, she was selected to go to Sweden for a six-month recovery stay, along with 6,000 other prisoners.
Return to Prague
In November 1945, Smolková-Keulemansová's dream to return to Prague and continue her studies became a reality. She completed grammar school and realized that her biggest struggle in her supplementary exams was chemistry, so she began to study chemistry at Charles University in Prague, leading to her lifelong devotion and love for the subject. She graduated from the Faculty of Natural Sciences at Charles University in 1952.
"The First Lady of Chromatography"
After graduating, Smolková-Keulemansová joined the Faculty of Sciences at Charles University and focused on analytical chemistry. In the early 1950s, she built a team focused on modern analytical separation methods such as gas chromatography, high-performance liquid chromatography and electromigration. Around this time, she attended an analytical conference in Prague, where she encountered a volumetric chromatographic device. Her team began to prepare its own device with volumetric detection and constructed a more universal glass thermal conductivity detector, allowing them to analyze a wider variety of gases. She did not know at the time that this was a new idea; soon after, the detector became part of a commercially available instrument. Because of her innovation and dedication to the field, people started calling her "the first lady of chromatography".
Later life and recognition
Smolková-Keulemansová became one of the leading experts in the field of chromatography. She was the first woman professor of chemistry in the Czech Republic and one of the first in Europe. She continued her studies in chemistry with work on polarography, a PhD focused on gas chromatography, and a DrSc concentrated on inclusion compounds in chromatography. In the early 1970s, inclusion complex formation in selective analytical separations became a major focus of hers; her first choice was cyclodextrins, before she moved on to urea and thiourea for the separation of isomers. Her research on cyclodextrins started soon after her methods focused on gas chromatography, high-performance liquid chromatography and electromigration. Her research became more widespread and she was asked to contribute many monographs on cyclodextrins, one of them for a compendium on supramolecular chemistry edited by Jean-Marie Lehn. She wrote and co-wrote 140 original papers and numerous reviews and contributed to many books, including her work in the Journal of High-Resolution Chromatography, "A Few Milestones on the Journey of Chromatography", and an article in the journal Chromatographia, "Study of retention of isomeric aromatic hydrocarbons on GTCB and cyclodextrins".
Smolková-Keulemansová died on 27 February 2024, at the age of 96.
References
1927 births
2024 deaths
Czech Jews
Czechoslovak chemists
Czech women scientists
Academic staff of Charles University
Analytical chemists
Scientists from Prague | Eva Smolková-Keulemansová | Chemistry | 1,019 |
3,795,398 | https://en.wikipedia.org/wiki/192%20%28number%29 | 192 (one hundred [and] ninety-two) is the natural number following 191 and preceding 193.
In mathematics
192 has the prime factorization 192 = 2⁶ × 3. Because it has so many small prime factors, it is the smallest number with exactly 14 divisors, namely 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96 and 192. Because its only prime factors are 2 and 3, it is a 3-smooth number.
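Both divisor claims — that 192 has exactly 14 divisors and that no smaller positive integer does — can be checked by brute force, as in this short C sketch:

/* Verifies that 192 is the smallest positive integer with 14 divisors. */
#include <stdio.h>

static int num_divisors(int n)
{
    int count = 0;
    for (int d = 1; d <= n; d++)
        if (n % d == 0) count++;
    return count;
}

int main(void)
{
    for (int n = 1; n <= 192; n++)
        if (num_divisors(n) == 14)
            printf("n with 14 divisors: %d\n", n);  /* prints only 192 */
    return 0;
}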
192 is a Leyland number of the second kind using 2 & 8 (2⁸ − 8² = 256 − 64 = 192).
See also
192 (disambiguation)
References
Integers | 192 (number) | Mathematics | 129 |
40,513,454 | https://en.wikipedia.org/wiki/Clohessy%E2%80%93Wiltshire%20equations | The Clohessy–Wiltshire equations describe a simplified model of orbital relative motion, in which the target is in a circular orbit, and the chaser spacecraft is in an elliptical or circular orbit. This model gives a first-order approximation of the chaser's motion in a target-centered coordinate system. It is used to plan the rendezvous of the chaser with the target.
History
Early results about relative orbital motion were published by George William Hill in 1878. Hill's paper discussed the orbital motion of the moon relative to the Earth.
In 1960, W. H. Clohessy and R. S. Wiltshire published the Clohessy–Wiltshire equations to describe relative orbital motion of a general satellite for the purpose of designing control systems to achieve orbital rendezvous.
System Definition
Suppose a target body is moving in a circular orbit and a chaser body is moving in an elliptical orbit.
Let (x, y, z) be the position of the chaser relative to the target, with x radially outward from the target body, y along the orbit track of the target body, and z along the orbital angular momentum vector of the target body (i.e., x, y, z form a right-handed triad).
Then, the Clohessy–Wiltshire equations are

ẍ = 3n²x + 2nẏ
ÿ = −2nẋ
z̈ = −n²z

where n = √(μ/a³) is the orbital rate (in units of radians/second) of the target body, a is the radius of the target body's circular orbit, and μ is the standard gravitational parameter.
If we define the state vector as x = (x, y, z, ẋ, ẏ, ż)ᵀ, the Clohessy–Wiltshire equations can be written as a linear time-invariant (LTI) system, ẋ(t) = A x(t),

where the state matrix is

    A = [  0    0    0    1    0    0
           0    0    0    0    1    0
           0    0    0    0    0    1
          3n²   0    0    0   2n    0
           0    0    0  −2n    0    0
           0    0  −n²    0    0    0 ]
For a satellite in low Earth orbit, μ ≈ 3.986×10¹⁴ m³/s² and a ≈ 6,778 km (an altitude of about 400 km), implying n ≈ 1.13×10⁻³ rad/s, corresponding to an orbital period of about 93 minutes.
If the chaser satellite has mass m and thrusters that apply a force F, then the relative dynamics are given by the LTI control system

ẋ(t) = A x(t) + B u(t)

where u = F/m is the applied force per unit mass and

    B = [ 0₃ₓ₃
          I₃ₓ₃ ]

(zeros in the position rows and the 3×3 identity in the velocity rows).
Solution
We can obtain closed-form solutions of these coupled differential equations in matrix form, allowing us to find the position and velocity of the chaser at any time given the initial position and velocity:

δr(t) = Φᵣᵣ(t) δr(0) + Φᵣᵥ(t) δv(0)
δv(t) = Φᵥᵣ(t) δr(0) + Φᵥᵥ(t) δv(0)

where:

    Φᵣᵣ(t) = [ 4 − 3cos nt       0    0
               6(sin nt − nt)    1    0
               0                 0    cos nt ]

    Φᵣᵥ(t) = [ (1/n)sin nt        (2/n)(1 − cos nt)      0
               (2/n)(cos nt − 1)  (1/n)(4sin nt − 3nt)   0
               0                  0                      (1/n)sin nt ]

    Φᵥᵣ(t) = [ 3n sin nt        0    0
               6n(cos nt − 1)   0    0
               0                0    −n sin nt ]

    Φᵥᵥ(t) = [ cos nt      2sin nt       0
               −2sin nt    4cos nt − 3   0
               0           0             cos nt ]

Note that Φᵣᵣ(0) = Φᵥᵥ(0) = I and Φᵣᵥ(0) = Φᵥᵣ(0) = 0. Since these matrices are easily invertible, we can also solve for the initial conditions given only the final conditions and the properties of the target vehicle's orbit.
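The closed-form solution above can be coded directly. The following C program is a minimal usage sketch (the cw_propagate name, the ~400 km orbit and the 1 km along-track offset are illustrative assumptions, not taken from the references); it also demonstrates a well-known property of the equations: a chaser with a pure along-track offset and zero relative velocity stays fixed relative to the target.

/* Propagates the Clohessy–Wiltshire closed-form solution given above. */
#include <math.h>
#include <stdio.h>

/* r0, v0: initial relative position (m) and velocity (m/s) in the
   (x radial, y along-track, z cross-track) frame; n: orbital rate (rad/s);
   t: elapsed time (s); results written to r and v. */
void cw_propagate(const double r0[3], const double v0[3],
                  double n, double t, double r[3], double v[3])
{
    double s = sin(n * t), c = cos(n * t);

    r[0] = (4.0 - 3.0*c)*r0[0] + (s/n)*v0[0] + (2.0*(1.0 - c)/n)*v0[1];
    r[1] = 6.0*(s - n*t)*r0[0] + r0[1] + (2.0*(c - 1.0)/n)*v0[0]
         + ((4.0*s - 3.0*n*t)/n)*v0[1];
    r[2] = c*r0[2] + (s/n)*v0[2];

    v[0] = 3.0*n*s*r0[0] + c*v0[0] + 2.0*s*v0[1];
    v[1] = 6.0*n*(c - 1.0)*r0[0] - 2.0*s*v0[0] + (4.0*c - 3.0)*v0[1];
    v[2] = -n*s*r0[2] + c*v0[2];
}

int main(void)
{
    double n = 1.13e-3;                  /* rad/s, ~93 min LEO period */
    double r0[3] = {0.0, -1000.0, 0.0};  /* 1 km behind the target */
    double v0[3] = {0.0, 0.0, 0.0};
    double r[3], v[3];

    cw_propagate(r0, v0, n, 600.0, r, v);
    /* A pure along-track offset is an equilibrium: prints (0, -1000, 0). */
    printf("r = (%.1f, %.1f, %.1f) m\n", r[0], r[1], r[2]);
    return 0;
}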
See also
Orbital maneuver
Orbital mechanics
Space rendezvous
References
Further reading
Prussing, John E. and Conway, Bruce A. (2012). Orbital Mechanics (2nd Edition), Oxford University Press, NY, pp. 179–196.
External links
The Clohessy-Wiltshire Equations of Relative Motion
Derivation Of Approximate Equations For Solving The Planar Rendezvous Problem
Orbits
Astrodynamics
Spaceflight | Clohessy–Wiltshire equations | Astronomy,Engineering | 538 |
10,660,257 | https://en.wikipedia.org/wiki/Pendant%20group | In IUPAC nomenclature of chemistry, a pendant group (sometimes spelled pendent) or side group is a group of atoms attached to a backbone chain of a long molecule, usually a polymer. Pendant groups are different from pendant chains, as they are neither oligomeric nor polymeric.
For example, the phenyl groups are the pendant groups on a polystyrene chain.
Large, bulky pendant groups such as adamantyl usually raise the glass transition temperature (Tg) of a polymer by preventing the chains from sliding past each other easily. Short alkyl pendant groups may lower the Tg by a lubricant effect.
References
Organic chemistry | Pendant group | Chemistry | 131 |
11,084,989 | https://en.wikipedia.org/wiki/Gravitational-wave%20astronomy | Gravitational-wave astronomy is a subfield of astronomy concerned with the detection and study of gravitational waves emitted by astrophysical sources.
Gravitational waves are minute distortions or ripples in spacetime caused by the acceleration of massive objects. They are produced by cataclysmic events such as the merger of binary black holes, the coalescence of binary neutron stars, supernova explosions and processes including those of the early universe shortly after the Big Bang. Studying them offers a new way to observe the universe, providing valuable insights into the behavior of matter under extreme conditions. Similar to electromagnetic radiation (such as light waves, radio waves, infrared radiation and X-rays), which involves the transport of energy via the propagation of electromagnetic field fluctuations, gravitational radiation involves fluctuations of the comparatively weaker gravitational field. The existence of gravitational waves was first suggested by Oliver Heaviside in 1893 and then later conjectured by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves before they were predicted by Albert Einstein in 1916 as a corollary to his theory of general relativity.
In 1978, Russell Alan Hulse and Joseph Hooton Taylor Jr. provided the first experimental evidence for the existence of gravitational waves by observing two neutron stars orbiting each other and won the 1993 Nobel Prize in physics for their work. In 2015, nearly a century after Einstein's forecast, the first direct observation of gravitational waves as a signal from the merger of two black holes confirmed the existence of these elusive phenomena and opened a new era in astronomy. Subsequent detections have included binary black hole mergers, neutron star collisions, and other violent cosmic events. Gravitational waves are now detected using laser interferometry, which measures tiny changes in the length of two perpendicular arms caused by passing waves. Observatories like LIGO (Laser Interferometer Gravitational-wave Observatory), Virgo and KAGRA (Kamioka Gravitational Wave Detector) use this technology to capture the faint signals from distant cosmic events. LIGO co-founders Barry C. Barish, Kip S. Thorne, and Rainer Weiss were awarded the 2017 Nobel Prize in Physics for their ground-breaking contributions in gravitational wave astronomy.
When distant astronomical objects are observed using electromagnetic waves, phenomena like scattering, absorption, reflection and refraction cause information loss. There remain various regions in space that are only partially penetrable by photons, such as the insides of nebulae, the dense dust clouds at the galactic core and the regions near black holes. Gravitational-wave astronomy has the potential to be used in parallel with electromagnetic astronomy to study the universe at better resolution. In an approach known as multi-messenger astronomy, gravitational wave data is combined with data from other wavelengths to get a more complete picture of astrophysical phenomena. Gravitational wave astronomy helps understand the early universe, test theories of gravity, and reveal the distribution of dark matter and dark energy. In particular, it can help determine the Hubble constant, which describes the expansion rate of the universe. All of these open doors to physics beyond the Standard Model (BSM).
Challenges that remain in the field include noise interference, the lack of ultra-sensitive instruments, and the detection of low-frequency waves. Ground-based detectors face problems with seismic vibrations produced by environmental disturbances and with limits on detector arm length imposed by the curvature of the Earth's surface. In the future, the field of gravitational wave astronomy will try to develop upgraded detectors and next-generation observatories, along with possible space-based detectors such as LISA (Laser Interferometer Space Antenna). LISA will be able to listen to distant sources such as compact supermassive black holes in galactic cores and primordial black holes, as well as low-frequency sources such as binary white dwarf mergers and signals from the early universe.
Introduction
Gravitational waves are waves of the intensity of gravity generated by the accelerated masses of an orbital binary system that propagate as waves outward from their source at the speed of light. They were first proposed by Oliver Heaviside in 1893 and then later by Henri Poincaré in 1905 as waves similar to electromagnetic waves but the gravitational equivalent.
Gravitational waves were later predicted in 1916 by Albert Einstein on the basis of his general theory of relativity as ripples in spacetime, although Einstein himself later doubted their existence for a time. Gravitational waves transport energy as gravitational radiation, a form of radiant energy similar to electromagnetic radiation. Newton's law of universal gravitation, part of classical mechanics, does not provide for their existence, since that law is predicated on the assumption that physical interactions propagate instantaneously (at infinite speed) – showing one of the ways the methods of Newtonian physics are unable to explain phenomena associated with relativity.
The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar, which matched the decay predicted by general relativity as energy is lost to gravitational radiation. In 1993, Russell A. Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery.
Direct observation of gravitational waves was not made until 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves.
In gravitational-wave astronomy, observations of gravitational waves are used to infer data about the sources of gravitational waves. Sources that can be studied this way include binary star systems composed of white dwarfs, neutron stars, and black holes; events such as supernovae; and the formation of the early universe shortly after the Big Bang.
Instruments and challenges
Collaboration between detectors aids in collecting unique and valuable information, owing to different specifications and sensitivity of each.
There are several ground-based laser interferometers which span several miles/kilometers, including: the two Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors in Washington and Louisiana, USA; Virgo, at the European Gravitational Observatory in Italy; GEO600 in Germany; and the Kamioka Gravitational Wave Detector (KAGRA) in Japan. While LIGO, Virgo, and KAGRA have made joint observations to date, GEO600 is currently utilized for trial and test runs, due to the lower sensitivity of its instruments, and has not participated in recent joint runs with the others.
High frequency
In 2015, the LIGO project was the first to directly observe gravitational waves using laser interferometers. The LIGO detectors observed gravitational waves from the merger of two stellar-mass black holes, matching predictions of general relativity. These observations demonstrated the existence of binary stellar-mass black hole systems, and were the first direct detection of gravitational waves and the first observation of a binary black hole merger. This finding has been characterized as revolutionary for science because it verified our ability to use gravitational-wave astronomy to advance the search for and exploration of dark matter and the Big Bang.
Low frequency
An alternative means of observation is using pulsar timing arrays (PTAs). There are three consortia, the European Pulsar Timing Array (EPTA), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), and the Parkes Pulsar Timing Array (PPTA), which co-operate as the International Pulsar Timing Array. These use existing radio telescopes, but since they are sensitive to frequencies in the nanohertz range, many years of observation are needed to detect a signal and detector sensitivity improves gradually. Current bounds are approaching those expected for astrophysical sources.
In June 2023, four PTA collaborations, the three mentioned above and the Chinese Pulsar Timing Array, delivered independent but similar evidence for a stochastic background of nanohertz gravitational waves. Each provided an independent first measurement of the theoretical Hellings-Downs curve, i.e., the quadrupolar correlation between two pulsars as a function of their angular separation in the sky, which is a telltale sign of the gravitational wave origin of the observed background. The sources of this background remain to be identified, although binaries of supermassive black holes are the most likely candidates.
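The Hellings–Downs correlation mentioned above has a simple closed form: writing $x = (1 - \cos\theta)/2$ for a pulsar pair separated by angle $\theta$ on the sky, the expected correlation is $\Gamma(\theta) = \tfrac{3}{2}x\ln x - \tfrac{x}{4} + \tfrac{1}{2}$, normalized here so that coincident pulsars have correlation 0.5. The following Python sketch (an illustration; the sampled angles are arbitrary) evaluates the curve:

```python
import numpy as np

def hellings_downs(theta):
    """Expected correlation of timing residuals for pulsar pairs separated by
    angle theta (radians), normalized so that coincident pulsars give 0.5."""
    x = (1.0 - np.cos(theta)) / 2.0
    # x*log(x) -> 0 as theta -> 0, so guard the logarithm at x = 0.
    xlogx = np.where(x > 0, x * np.log(np.where(x > 0, x, 1.0)), 0.0)
    return 1.5 * xlogx - 0.25 * x + 0.5

angles = np.radians([0.0, 30.0, 60.0, 90.0, 120.0, 180.0])
print(hellings_downs(angles))  # dips negative near ~82 degrees, rises back to 0.25 at 180
```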
Intermediate frequencies
Further in the future, there is the possibility of space-borne detectors. The European Space Agency has selected a gravitational-wave mission for its L3 mission slot, due to launch in 2034; the current concept is the evolved Laser Interferometer Space Antenna (eLISA). Also in development is the Japanese Deci-hertz Interferometer Gravitational wave Observatory (DECIGO).
Scientific value
Astronomy has traditionally relied on electromagnetic radiation. Originating with the visible band, as technology advanced, it became possible to observe other parts of the electromagnetic spectrum, from radio to gamma rays. Each new frequency band gave a new perspective on the Universe and heralded new discoveries. During the 20th century, indirect and later direct measurements of high-energy, massive particles provided an additional window into the cosmos. Late in the 20th century, the detection of solar neutrinos founded the field of neutrino astronomy, giving an insight into previously inaccessible phenomena, such as the inner workings of the Sun. The observation of gravitational waves provides a further means of making astrophysical observations.
Russell Hulse and Joseph Taylor were awarded the 1993 Nobel Prize in Physics for showing that the orbital decay of a pair of neutron stars, one of them a pulsar, fits general relativity's predictions of gravitational radiation. Subsequently, many other binary pulsars (including one double pulsar system) have been observed, all fitting gravitational-wave predictions. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the first detection of gravitational waves.
Gravitational waves provide complementary information to that provided by other means. By combining observations of a single event made using different means, it is possible to gain a more complete understanding of the source's properties. This is known as multi-messenger astronomy. Gravitational waves can also be used to observe systems that are invisible (or almost impossible to detect) by any other means. For example, they provide a unique method of measuring the properties of black holes.
Gravitational waves can be emitted by many systems, but, to produce detectable signals, the source must consist of extremely massive objects moving at a significant fraction of the speed of light. The main source is a binary of two compact objects. Example systems include:
Compact binaries made up of two closely orbiting stellar-mass objects, such as white dwarfs, neutron stars or black holes. Wider binaries, which have lower orbital frequencies, are a source for detectors like LISA. Closer binaries produce a signal for ground-based detectors like LIGO. Ground-based detectors could potentially detect binaries containing an intermediate mass black hole of several hundred solar masses.
Supermassive black hole binaries, consisting of two black holes with masses of 10⁵–10⁹ solar masses. Supermassive black holes are found at the centre of galaxies. When galaxies merge, it is expected that their central supermassive black holes merge too. These are potentially the loudest gravitational-wave signals. The most massive binaries are a source for PTAs. Less massive binaries (about a million solar masses) are a source for space-borne detectors like LISA.
Extreme-mass-ratio systems of a stellar-mass compact object orbiting a supermassive black hole. These are sources for detectors like LISA. Systems with highly eccentric orbits produce a burst of gravitational radiation as they pass through the point of closest approach; systems with near-circular orbits, which are expected towards the end of the inspiral, emit continuously within LISA's frequency band. Extreme-mass-ratio inspirals can be observed over many orbits. This makes them excellent probes of the background spacetime geometry, allowing for precision tests of general relativity.
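A quick way to see why these binaries fall into different detector bands: for a circular binary, the dominant gravitational-wave frequency is twice the orbital frequency given by Kepler's third law. The Python sketch below uses illustrative masses and separations, not values from any particular observation:

```python
import numpy as np

G = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant
MSUN = 1.989e30  # kg, solar mass

def gw_frequency(m1_solar, m2_solar, separation_m):
    """Dominant GW frequency (Hz) of a circular binary: twice the Keplerian orbital frequency."""
    m_total = (m1_solar + m2_solar) * MSUN
    f_orbital = np.sqrt(G * m_total / separation_m**3) / (2.0 * np.pi)
    return 2.0 * f_orbital

# Two 30-solar-mass black holes ~350 km apart: ~140 Hz, in the ground-based (LIGO) band.
print(gw_frequency(30, 30, 350e3))
# Two million-solar-mass black holes ~0.5 AU apart: ~2.5e-4 Hz, in the space-based (LISA) band.
print(gw_frequency(1e6, 1e6, 7.5e10))
```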
In addition to binaries, there are other potential sources:
Supernovae generate high-frequency bursts of gravitational waves that could be detected with LIGO or Virgo.
Rotating neutron stars are a source of continuous high-frequency waves if they possess axial asymmetry.
Early universe processes, such as inflation or a phase transition.
Cosmic strings could also emit gravitational radiation if they do exist. Discovery of these gravitational waves would confirm the existence of cosmic strings.
Gravitational waves interact only weakly with matter. This is what makes them difficult to detect. It also means that they can travel freely through the Universe, and are not absorbed or scattered like electromagnetic radiation. It is therefore possible to see to the center of dense systems, like the cores of supernovae or the Galactic Center. It is also possible to see further back in time than with electromagnetic radiation, as the early universe was opaque to light prior to recombination, but transparent to gravitational waves.
The ability of gravitational waves to move freely through matter also means that gravitational-wave detectors, unlike telescopes, are not pointed to observe a single field of view but observe the entire sky. Detectors are more sensitive in some directions than others, which is one reason why it is beneficial to have a network of detectors. Directional resolution is also poor, due to the small number of detectors.
In cosmic inflation
Cosmic inflation, a hypothesized period when the universe rapidly expanded during the first 10⁻³⁶ seconds after the Big Bang, would have given rise to gravitational waves that would have left a characteristic imprint in the polarization of the CMB radiation.
It is possible to calculate the properties of the primordial gravitational waves from measurements of the patterns in the microwave radiation, and use those calculations to learn about the early universe.
Development
As a young area of research, gravitational-wave astronomy is still in development; however, there is consensus within the astrophysics community that this field will evolve to become an established component of 21st century multi-messenger astronomy.
Gravitational-wave observations complement observations in the electromagnetic spectrum. These waves also promise to yield information in ways not possible via detection and analysis of electromagnetic waves. Electromagnetic waves can be absorbed and re-radiated in ways that make extracting information about the source difficult. Gravitational waves, however, only interact weakly with matter, meaning that they are not scattered or absorbed. This should allow astronomers to view the center of a supernova, stellar nebulae, and even colliding galactic cores in new ways.
Ground-based detectors have yielded new information about the inspiral phase and mergers of binary systems of two stellar mass black holes, and mergers of two neutron stars. They could also detect signals from core-collapse supernovae, and from periodic sources such as pulsars with small deformations. If there is truth to speculation about certain kinds of phase transitions or kink bursts from long cosmic strings in the very early universe (at cosmic times around 10⁻²⁵ seconds), these could also be detectable. Space-based detectors like LISA should detect objects such as binaries consisting of two white dwarfs, and AM CVn stars (a white dwarf accreting matter from its binary partner, a low-mass helium star), and also observe the mergers of supermassive black holes and the inspiral of smaller objects (between one and a thousand solar masses) into such black holes. LISA should also be able to listen to the same kind of sources from the early universe as ground-based detectors, but at even lower frequencies and with greatly increased sensitivity.
Detecting emitted gravitational waves is a difficult endeavor. It involves ultra-stable, high-quality lasers and detectors calibrated with a sensitivity of at least 2·10⁻²² Hz^−1/2, as demonstrated at the ground-based detector GEO600. It has also been noted that even the waves from large astronomical events, such as supernova explosions, are likely to have degraded to vibrations as small as an atomic diameter by the time they reach Earth.
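To put such sensitivities in perspective, a passing wave of strain h changes a detector arm of length L by ΔL = h·L. A back-of-the-envelope check in Python with illustrative numbers (not instrument specifications):

```python
# Arm-length change dL = h * L produced by a passing gravitational wave.
h = 1e-21  # typical strain amplitude at Earth from a loud compact-binary merger
L = 4e3    # meters, arm length of a LIGO-scale interferometer
print(f"dL = {h * L:.0e} m")  # 4e-18 m, thousands of times smaller than a proton
```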
Pinpointing where gravitational waves come from is also a challenge, but deflected waves through gravitational lensing combined with machine learning could make localization easier and more accurate. Just as the light from the SN Refsdal supernova was detected a second time almost a year after it was first discovered, because gravitational lensing sent some of the light on a different path through the universe, the same approach could be used for gravitational waves. While still at an early stage, a technique similar to the triangulation used by cell phones to determine their location in relation to GPS satellites will help astronomers track down the origin of the waves.
See also
Gravitational wave background
Gravitational-wave observatory
List of gravitational wave observations
Matched filter#Gravitational-wave astronomy
References
Further reading
External links
LIGO Scientific Collaboration
AstroGravS: Astrophysical Gravitational-Wave Sources Archive
Video (04:36) – Detecting a gravitational wave, Dennis Overbye, NYT (11 February 2016).
Video (71:29) – Press Conference announcing discovery: "LIGO detects gravitational waves", National Science Foundation (11 February 2016).
Gravitational Wave Astronomy
Gravity
General relativity
Observational astronomy
Astrophysics
Astronomical sub-disciplines | Gravitational-wave astronomy | Physics,Astronomy | 3,483 |
521,613 | https://en.wikipedia.org/wiki/Victorian%20architecture | Victorian architecture is a series of architectural revival styles in the mid-to-late 19th century. Victorian refers to the reign of Queen Victoria (1837–1901), called the Victorian era, during which period the styles known as Victorian were used in construction. However, many elements of what is typically termed "Victorian" architecture did not become popular until later in Victoria's reign, roughly from 1850 and later. The styles often included interpretations and eclectic revivals of historic styles (see Historicism). The name represents the British and French custom of naming architectural styles for a reigning monarch. Within this naming and classification scheme, it followed Georgian architecture and later Regency architecture and was succeeded by Edwardian architecture.
Although Victoria did not reign over the United States, the term is often used for American styles and buildings from the same period, as well as those from the British Empire.
Victorian architecture in the United Kingdom
Gothic Revival
During the early 19th century, the romantic medieval Gothic Revival style was developed as a reaction to the symmetry of Palladianism, and such buildings as Fonthill Abbey were built.
By the middle of the 19th century, as a result of new technology, construction was able to incorporate metal materials as building components. Structures were erected with cast iron and wrought iron frames. However, because these materials were weak in tension, they were effectively phased out in favor of more structurally sound steel. One of the greatest exponents of iron frame construction was Joseph Paxton, architect of the Crystal Palace. Paxton also continued to build such houses as Mentmore Towers, in the still popular English Renaissance styles. New methods of construction were developed in this era of prosperity, but ironically the architectural styles, as developed by such architects as Augustus Pugin, were typically retrospective.
In Scotland, the architect Alexander Thomson who practised in Glasgow was a pioneer of the use of cast iron and steel for commercial buildings, blending neo-classical conventionality with Egyptian and Oriental themes to produce many truly original structures. Other notable Scottish architects of this period are Archibald Simpson and Alexander Marshall Mackenzie, whose stylistically varied work can be seen in the architecture of Aberdeen.
While Scottish architects pioneered this style it soon spread right across the United Kingdom and remained popular for another forty years. Its architectural value in preserving and reinventing the past is significant. Its influences were diverse but the Scottish architects who practiced it were inspired by unique ways to blend architecture, purpose, and everyday life in a meaningful way.
Other Revival styles
Jacobethan (1830–1870; the precursor to the British Queen Anne Revival style)
Renaissance Revival (1840–1890)
Neo-Grec (1845–1865)
Romanesque Revival
Second Empire (1855–1880; originated in France)
British Queen Anne Revival (1870–1910)
Scots Baronial (predominantly Scotland)
British Arts and Crafts movement (1880–1910)
Some styles, while not uniquely Victorian, are strongly associated with the 19th century owing to the large number of examples that were erected during that period:
Italianate
Neoclassical
International spread of Victorian styles
During the 18th century, a few English architects emigrated to the colonies, but as the British Empire became firmly established during the 19th century, many architects emigrated at the start of their careers. Some chose the United States, and others went to Canada, Australia, New Zealand, and South Africa. Normally, they applied architectural styles that were fashionable when they left England. By the latter half of the century, however, improving transport and communications meant that even remote parts of the Empire had access to publications such as the magazine The Builder, which helped colonial architects keep informed about current fashion. Thus, the influence of English architecture spread across the world. Several prominent architects produced English-derived designs around the world, including William Butterfield (St Peter's Cathedral, Adelaide) and Jacob Wrey Mould (Chief Architect of Public Works in New York City).
Australia
The Victorian period flourished in Australia and is generally recognised as being from 1840 to 1890, which saw a gold rush and population boom during the 1880s in the states of New South Wales and Victoria. There were fifteen styles that predominated:
The Arts and Crafts style and Queen Anne style are considered to be part of the Federation Period, from 1890 to 1915.
Hong Kong
Western influence in architecture was strong when Hong Kong was a British colony. Victorian architecture in Hong Kong:
Ireland
Georgian architecture is more prominent in Ireland than Victorian architecture. The cities of Dublin, Limerick, and Cork are famously dominated by Georgian squares and terraces. Victorian architecture nevertheless flourished in certain quarters, particularly around Dublin's Wicklow Street and Upper Baggot Street and in the suburbs of Phibsboro, Glasnevin, Rathmines, Ranelagh, Rathgar, Rathfarnham, and Terenure. The colourful Italianate buildings of Cobh are excellent examples of the regional Victorian style in Ireland. Further examples of Victorian architecture in the country include Dublin's George's Street Arcade, the Royal City of Dublin Hospital on Baggot Street and the Royal Victoria Eye and Ear Hospital on Adelaide Road.
Sri Lanka
Notable Victorian buildings from the British colonial period of British Ceylon include:
Sri Lanka Law College,
Sri Lanka College of Technology,
Galle Face Hotel and the
Royal College Main Building.
North America
In the United States, 'Victorian' architecture generally describes styles that were most popular between 1860 and 1900. A list of these styles most commonly includes Second Empire (1855–85), Stick-Eastlake (1860–), Folk Victorian (1870–1910), Queen Anne (1880–1910), Richardsonian Romanesque (1880–1900), and Shingle (1880–1900). As in the United Kingdom, examples of Gothic Revival and Italianate continued to be constructed during this period and are therefore sometimes called Victorian. Some historians classify the later years of Gothic Revival as a distinctive Victorian style named High Victorian Gothic. Stick-Eastlake, a manner of geometric, machine-cut decorating derived from Stick and Queen Anne, is sometimes considered a distinct style. On the other hand, terms such as "Painted Ladies" or "gingerbread" may be used to describe certain Victorian buildings, but do not constitute a specific style. The names of architectural styles (as well as their adaptations) varied between countries. Many homes combined the elements of several different styles and are not easily distinguishable as one particular style or another.
Notable Victorian-inspired cities during this era include Astoria in Oregon; Philadelphia and Pittsburgh in Pennsylvania; Washington, D.C.; Boston in Massachusetts; Alameda, Eureka, San Francisco, and Midtown Sacramento in California; the Brooklyn Heights and Victorian Flatbush sections of New York City, Garden City on Long Island, and Albany, Troy, Buffalo, and Rochester in Upstate New York; Asbury Park / Ocean Grove, Cape May, Deal, Flemington, Freehold, Hackettstown, Jersey City / Hoboken, Metuchen, Montclair, Ridgewood, Plainfield, Summit, and Westfield in New Jersey; Chicago, Galena, and Winnetka in Illinois; Detroit and Grand Rapids in Michigan; Cincinnati and Columbus in Ohio; Galveston in Texas; Baltimore in Maryland; Louisville in Kentucky; Atlanta in Georgia; Milwaukee in Wisconsin; New Orleans in Louisiana; Richmond in Virginia; St. Louis in Missouri; and Saint Paul in Minnesota. Los Angeles grew from a Pueblo (village) into a Victorian Downtown – now almost entirely demolished but with residential remnants in its Angelino Heights and Westlake neighborhoods. San Francisco is particularly well known for its extensive Victorian architecture, especially in the Haight-Ashbury, Lower Haight, Alamo Square, Western Addition, Mission, Duboce Triangle, Noe Valley, Castro, Nob Hill, and Pacific Heights neighborhoods.
The extent to which any one is the "largest surviving example" is debated, with numerous qualifications. The Distillery District in Toronto, Ontario contains the largest and best-preserved collection of Victorian-era industrial architecture in North America. Cabbagetown is the largest and most continuous Victorian residential area in North America. Other Toronto Victorian neighbourhoods include The Annex, Parkdale, and Rosedale. In the US, the South End of Boston is recognized by the National Register of Historic Places as the oldest and largest Victorian neighborhood in the country. Old Louisville in Louisville, Kentucky, also claims to be the nation's largest Victorian neighborhood. Richmond, Virginia is home to several large Victorian neighborhoods, the most prominent being The Fan. The Fan district is best known locally as Richmond's largest and most 'European' of Richmond's neighborhoods and nationally as the largest contiguous Victorian neighborhood in the United States. The Old West End neighborhood of Toledo, Ohio is recognized as the largest collection of late Victorian and Edwardian homes in the United States, east of the Mississippi. Summit Avenue in Saint Paul, Minnesota, has the longest line of Victorian homes in the country. Over-The-Rhine in Cincinnati, Ohio, has the largest collection of early Victorian Italianate architecture in the United States, and is an example of an intact 19th-century urban neighborhood. According to National Register of Historic Places, Cape May Historic District has one of the largest collections of late 19th century frame buildings left in the United States.
The photo album L'Architecture Americaine by Albert Levy published in 1886 is perhaps the first recognition in Europe of the new forces emerging in North American architecture.
Canada
Canada's chief dominion architects designed numerous federal buildings over the course of the Victorian era. Thomas Fuller's completion of the Canadian Parliament Buildings in 1866, in particular, established a High Victorian Gothic influence over Canadian architectural design for several consecutive decades, producing many public buildings, churches, residences, industrial buildings, and hotels.
India
Because India was a colony of Britain, Victorian architecture is prevalent in India, especially in the cities of Mumbai, Kolkata and Chennai and in the state of Kerala. In Mumbai (formerly called Bombay), buildings like the Municipal Corporation Building, Bombay University, the Bombay High Court, the Asiatic Society of Mumbai Building (former Town Hall) and the David Sassoon Library are some examples of Victorian architecture. In Kolkata (formerly called Calcutta), buildings like the Victoria Memorial, the Calcutta High Court, St Paul's Cathedral and The Asiatic Society of Bengal are some examples. In Chennai (formerly called Madras), examples include the Madras High Court, the State Bank of Madras and St. Mary's Church. Many churches and colleges, such as Santa Cruz Cathedral Basilica Kochi, University College Trivandrum, the Government College of Fine Arts Trivandrum, the Napier Museum, the State Central Library of Kerala, Government Victoria College Palakkad, CMS College Kottayam and SB College Changanasserry, are among the finest examples of Victorian architecture in Kerala.
Preservation
Efforts to preserve landmarks of Victorian architecture are ongoing and are often led by the Victorian Society. A recent campaign the group has taken on is the preservation of Victorian gasometers after utility companies announced plans to demolish nearly 200 of the now-outdated structures.
See also
Victorian decorative arts
Victorian house
Victorian restoration
Folk Victorian
Albert Levy (photographer)
Georgian architecture
References and sources
Citations
Sources
, includes descriptions of different Victorian and early-20th-century architectural styles common in the San Francisco Bay Area, particularly Oakland, and detailed instructions for repair and restoration of details common to older house styles.
External links
Decorative Hardware of the Victorian Era: An American. Perspective, Raheel Ahmad
History and Style of Victorian Architecture and Hardware
Manchester, a Victorian City
Photographs of Victorian Homes in Hamilton, Ontario Canada
Victorian era architecture in San Francisco, California
Victorian era architecture and history in Buffalo, New York
Architectural influences on Victorian style
Victorian churches blog
19th-century architectural styles
19th-century architecture in the United Kingdom
19th-century architecture in the United States
American architectural styles
Architectural history
British architectural styles
Revival architectural styles
Victorian architectural styles
Victorian architecture in the United States | Victorian architecture | Engineering | 2,373 |
69,591,470 | https://en.wikipedia.org/wiki/Tengiz%20Beridze | Tengiz Beridze (Georgian: თენგიზ გიორგის ძე ბერიძე; 26 October 1939 – 3 December 2024) was a Georgian biochemist.
Life and career
In 1967 Beridze discovered satellite DNA in plants. Through his research from 1972 to 1975, it was found that closely related species of one genus differ in satellite DNA content. In 1986 he published the monograph Satellite DNA in Springer Edition. In 2013 this monograph was edited as an eBOOK.
In 2011–17, he established the complete nucleotide sequences (nuclear, chloroplast and mitochondrial) of four Georgian grape varieties.
In 2015–21, he established the complete chloroplast DNA sequences of Georgian wheat species.
In 1967, he defended his Candidate's Dissertation. In 1980, he defended his doctoral dissertation in the Bakh Institute of Biochemistry, Moscow. He was elected a corresponding member of the Academy of Sciences of Georgia in 1987 and a full member in 1993.
Beridze held various positions in Soviet and Georgian institutions from the 1960s onward.
Beridze died on 3 December 2024, at the age of 85.
Positions
1968–2008 – Institute of Biochemistry and Biotechnology, Georgian Academy of Sciences
1968–1999 – Professor at Tbilisi State University, Georgia
2008–2010 – Professor at Ilia State University, Tbilisi, Georgia
2010–2011 – Professor at Free University of Tbilisi, Georgia
2012–2021 – Director of Institute of Molecular Genetics, Agricultural University of Georgia
2012–20?? – Professor at Agricultural University of Georgia, Tbilisi, Georgia
Awards
Beridze was awarded the Order of Honour of Georgia in 1999. He was awarded the Serge Durmishidze prize in Biochemistry in 2009.
Selected publications
Beridze TG, Odintsova MS, Sissakian NM (1967) Distribution of bean leaf DNA components in the cell organelle fractions. Molek.Biol.USSR. 1,142-153
Beridze TG (1972) DNA nuclear satellites of the genus Phaseolus. Biochim. Biophys. Acta 262,393-396
Beridze TG (1975) DNA nuclear satellites of the genus Brassica: variation between species. Biochim.Biophys.Acta. 395,274-279
Beridze T. Satellite DNA, 1986, Springer-Verlag, Berlin, Heidelberg, New York, Tokio
Beridze T, Pipia I, Beck J., Hsu S.-CT, Gamkrelidze M, Gogniashvili M, Tabidze V, This R,Bacilieri P, Gotsiridze V, Glonti M, Schaal B (2011). Plastid DNA sequence diversity in a worldwide set of grapevine cultivars (Vitis vinifera L. subsp. vinifera). Bulletin of the Georgian National Academy of Sciences. 5, 2011, 98–103.
Pipia I, Gogniashvili M, Tabidze V, Beridze T, Gamkrelidze M, Gotsiridze V, Melyan G, Musayev M, Salimov V, Beck J, Schaal B (2012) Plastid DNA sequence diversity in wild grapevine samples (Vitis vinifera subsp. sylvestris) from the Caucasus region. Vitis 51 (3), 119–124
Tabidze V, Baramidze G, Pipia I, Gogniashvili M, Ujmajuridze L, Beridze T, Hernandez AG, Schaal B (2014) The Complete Chloroplast DNA Sequence of Eleven Grape Cultivars. Simultaneous Resequencing Methodology. Journal International des Sciences de la Vigne et du Vin J Int Sci Vigne Vin. 48, 99-109
Tabidze V, Pipia I, Gogniashvili M, Kunelauri N, Ujmajuridze L, Pirtskhalava M, Vishnepolsky B, Hernandez AG, Fields CJ, BeridzeT (2017) Whole genome comparative analysis of four Georgian grape cultivars. Molecular Genetics and Genomics. 292, 1377-1389
Gogniashvili M., Naskidashvili P., Bedoshvili D., Kotorashvili A., Kotaria N., Beridze T. (2015) Complete chloroplast DNA sequences of Zanduri wheat (Triticum spp.) Genet Resour Crop Evol
Gogniashvili M, Jinjikhadze T, Maisaia M, Akhalkatsi M, Kotorashvili A, Kotaria N, Beridze T, Dudnikov AJ (2016) Complete chloroplast genomes of Aegilops tauschii Coss. and Ae.cylindrica Host sheds light on plasmon D evolution. Current Genetics.
Gogniashvili M, Maisaia I, Kotorashvili A, Kotaria N, Beridze T (2018) Complete chloroplast DNA sequences of Georgian indigenous polyploid wheats (Triticum spp.) and B plasmon evolution. Genet Resour Crop Evol 65:1995–2002
References
1939 births
2024 deaths
Georgian Soviet Socialist Republic people
Scientists from Tbilisi
Biochemists
Academic staff of Tbilisi State University
Academic staff of Ilia State University
Free University of Tbilisi people
Recipients of the Order of Honor (Georgia) | Tengiz Beridze | Chemistry,Biology | 1,133 |
17,290,814 | https://en.wikipedia.org/wiki/Quarry%20tub | A tub or quarry tub is a type of railway or tramway wagon used in quarries and other industrial locations for the transport of minerals (such as coal, sand, ore, clay and stone) from a quarry or mine face to processing plants or between various parts of an industrial site. This type of wagon may be small enough for one person to push, or designed for haulage by a horse, or for connection in a train hauled by a locomotive. The tubs are designed for ease of emptying, usually by a side-tipping action. This type of rail vehicle is now mainly obsolete, its function having been mostly replaced by conveyor belts.
See also
British narrow gauge railways
Chaldron
Corf
Decauville wagon
Mine car
Minecart
Mineral wagon
Mine railway
References
British railway wagons
Mining equipment
Freight rolling stock | Quarry tub | Engineering | 165 |
42,719,835 | https://en.wikipedia.org/wiki/Uruz%20Project | The Uruz Project had the goal of breeding back the extinct aurochs (Bos p. primigenius). Uruz is the old Germanic word for aurochs. The Uruz Project was initiated in 2013 by the True Nature Foundation and presented at TEDx DeExtinction, a day-long conference organised by the Long Now Foundation with the support of TED and in partnership with National Geographic Society, to showcase the prospects of bringing extinct species back to life. The de-extinction movement itself is spearheaded by the Long Now Foundation.
Technically, Bos primigenius is not wholly extinct. The wild subspecies B. p. primigenius, indicus and africanus are, but the species is still represented by domestic cattle. Most, or all, of the relevant aurochs characteristics, and therefore the underlying DNA, needed to "breed back" an aurochs-like cattle type can be found in B. p. taurus. Domestic cattle originated in the Middle East, and there has also been introgression of European aurochs into domestic cattle in ancient times. The Uruz Project's goal is to collect all relevant data and reunite scattered aurochs characteristics, and thus DNA, in one animal.
Background
Ecological restoration projects cannot be complete without bringing back those key elements that help shape and reshape wild landscapes. The European aurochs (Bos p. primigenius) was a large and long-horned wild bovine herbivore that existed from the most western tip of Europe until Siberia in present-day Russia. Aurochs have played a major role in human history. They are often depicted in rock-art, including the famous, well-conserved cave paintings made by Cro-Magnon people in the Lascaux Caves, estimated to be 17,300 years old. Aurochs and other large animals portrayed in Paleolithic cave art were often hunted for food. Hunting and habitat loss caused by humans, including agricultural land conversion, caused the aurochs to go extinct in 1627, when the last individual, a female, died in Poland’s Jaktorów Forest.
The aurochs is one of the keystone species that is missing in Europe. Their grazing and browsing patterns, trampling of the soil and faeces had a profound impact on the vegetation and landscapes it inhabited. Grazing results in a greater variety of plant species, structures and ecological niches in a landscape that benefit both biodiversity and production. Megaherbivores like the aurochs also controlled vegetation development.
Breeding strategy
The Uruz Project aims to breed an aurochs-like breed of cattle from a limited number of carefully selected primitive cattle breeds with known aurochs characteristics. The project uses Sayaguesa cattle, Maremmana primitiva or Hungarian Grey cattle, Chianina and Watusi. The genome of the aurochs has been completely reconstructed and serves as the baseline for the reconstruction effort.
See also
Breeding back
De-extinction
Breeding of aurochs-like cattle
Tauros Programme
References
External links
Aurochs at the True Nature Foundation
Long Now Foundation
Genetic engineering
Nature conservation organisations based in Europe
Animal breeding
Mammal conservation | Uruz Project | Chemistry,Engineering,Biology | 657 |
54,031,414 | https://en.wikipedia.org/wiki/Lysibody | Although cell wall carbohydrates are ideal immunotherapeutic targets due to their abundance in bacteria and high level of conservation, their poor immunogenicity compared with protein targets complicates their use for the development of protective antibodies. A lysibody is a chimeric antibody in which the Fab region is the binding domain from a bacteriophage lysin, or the binding domain from an autolysin or bacteriocin, all of which bind to bacterial cell wall carbohydrate epitopes. This is linked to the Fc of Immunoglobulin G (IgG). The chimera forms a stable homodimer held together by hinge-region disulfide bonds. Thus, lysibodies are homodimeric hybrid immunoglobulin G molecules that can bind with high affinity and specificity to a carbohydrate substrate in the bacterial cell wall peptidoglycan. Lysibodies behave like authentic IgG by binding at high affinity to their bacterial wall receptor, fix complement and therefore promote phagocytosis by macrophages and neutrophils, protecting mice from infection in model systems. Since cell wall hydrolases, autolysins and bacteriocins are ubiquitous in nature, production of lysibodies specific for difficult to treat pathogenic bacteria is possible.
Binding domains may be linked to either the N-terminus of the IgG Fc (as is the case for autolysins) or to the C-terminus (as seen with phage lysins - see figure). In both cases the binding domains are able to bind their substrates in the bacterial cell wall and the Fc is able to perform its effector functions (see ref 2 for more detail).
Lysibodies may be used prophylactically to help protect surgical patients from bacterial infections, particularly methicillin resistant Staphylococcus aureus (MRSA) and boost immune clearance in infected individuals.
References
Monoclonal antibodies
Glycoproteins
Immune system | Lysibody | Chemistry,Biology | 436 |
2,388,223 | https://en.wikipedia.org/wiki/Instrument%20error | Instrument error refers to the error of a measuring instrument, or the difference between the actual value and the value indicated by the instrument. There can be errors of various types, and the overall error is the sum of the individual errors.
Types of errors include
systematic errors
random errors
absolute errors
other errors
Systematic errors
The size of the systematic error is sometimes referred to as the accuracy. For example the instrument may always indicate a value 5% higher than the actual value; or perhaps the relationship between the indicated and actual values may be more complicated than that. A systematic error may arise because the instrument has been incorrectly calibrated, or perhaps because a defect has arisen in the instrument since it was calibrated. Instruments should be calibrated against a standard instrument that is known to be accurate, and ideally the calibration should be repeated at intervals. The most rigorous standards are those maintained by a standards organization such as NIST in the United States, or the ISO in Europe.
If the users know the amount of the systematic error, they may decide to adjust for it manually rather than having the instrument expensively adjusted to eliminate the error: e.g. in the above example they might manually reduce all the values read by about 4.8%.
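To see where the 4.8% comes from: if the instrument reads 5% high, each indicated value is 1.05 times the true value, so the correction factor is 1/1.05 ≈ 0.952, a reduction of about 4.8%. A minimal Python illustration (the readings are made-up numbers):

```python
readings = [105.2, 210.1, 52.6]           # values from an instrument reading 5% high
corrected = [r / 1.05 for r in readings]  # 1/1.05 = 0.952..., i.e. reduce by ~4.8%
print(corrected)                          # ~[100.19, 200.10, 50.10]
```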
Random errors
The range in amount of possible random errors is sometimes referred to as the precision. Random errors may arise because of the design of the instrument. In particular they may be subdivided between
errors in the amount shown on the display, and
how accurately the display can actually be read.
Amount shown on the display
Sometimes the effect of random error can be reduced by repeating the measurement a few times and taking the average result.
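Averaging helps because independent random errors partially cancel: the standard error of the mean of N readings falls roughly as 1/√N. A small Python simulation (the true value and noise level are arbitrary choices):

```python
import random

random.seed(0)
TRUE_VALUE = 10.0

def read():
    """One measurement with zero-mean random error (standard deviation 0.1)."""
    return TRUE_VALUE + random.gauss(0.0, 0.1)

single = read()
averaged = sum(read() for _ in range(25)) / 25  # standard error shrinks ~sqrt(25) = 5x
print(abs(single - TRUE_VALUE), abs(averaged - TRUE_VALUE))
```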
How accurately the display can be read
If the instrument has a needle which points to a scale graduated in steps of 0.1 units, then depending on the design of the instrument it is usually possible to estimate tenths between the successive marks on the scale, so it should be possible to read off the result to an accuracy of about 0.01 units.
Other errors
The act of taking the measurement may alter the quantity being measured. For example, an ammeter has its own built-in resistance, so if it is connected in series to an electrical circuit, it will slightly reduce the current flowing through the circuit.
References
Measuring instruments
Metrology | Instrument error | Technology,Engineering | 464 |
39,176,314 | https://en.wikipedia.org/wiki/Elias%20Gyftopoulos | Elias Panayiotis Gyftopoulos (; July 4, 1927June 23, 2012) was a Greek-American engineer who contributed to thermodynamics both in its general formulation and its quantum foundations.
Gyftopoulos received an undergraduate degree in mechanical and electrical engineering in 1953 at the National Technical University of Athens, and a Doctor of Science degree in electrical engineering at the Massachusetts Institute of Technology in 1958. At MIT, he initially focused on nuclear reactor safety and control. After meeting professors George N. Hatsopoulos and Joseph H. Keenan, his interests moved towards thermodynamics, in an attempt to give the discipline a consistent and rigorous exposition, free of the logical flaws and limitations commonly associated with it. His contribution culminated in a reference textbook that completely reformulates the foundations of the subject, offering a general non-statistical definition of entropy applicable to both macroscopic and microscopic systems, in both equilibrium and non-equilibrium states, and providing strong background and deep understanding for many applications in energy engineering in modern graduate curricula. His research also pioneered the subject of quantum thermodynamics, with an early effort to give thermodynamics a quantum basis by means of a physical theory unifying mechanics and thermodynamics.
Works
ISBN 9780486439327
Elias P. Gyftopoulos complete collection of published scientific works
References
External links
Elias P. Gyftopoulos collected works and memorial tribute
Thermodynamicists
Greek emigrants to the United States
MIT School of Engineering faculty
National Technical University of Athens alumni
MIT School of Engineering alumni
1927 births
2012 deaths
People from Athens | Elias Gyftopoulos | Physics,Chemistry | 335 |
18,860,506 | https://en.wikipedia.org/wiki/Software%20ecosystem | Software Ecosystem is a book written by David G. Messerschmitt and Clemens Szyperski that explains the essence and effects of a "software ecosystem", defined as a set of businesses functioning as a unit and interacting with a shared market for software and services, together with relationships among them. These relationships are frequently underpinned by a common technological platform and operate through the exchange of information, resources, and artifacts.
The term in software analysis
In the context of software analysis, the term software ecosystem is defined by Lungu as “a collection of software projects, which are developed and co-evolve in the same environment”. The environment can be organizational (a company), social (an open-source community), or technical (the Ruby ecosystem). The ecosystem metaphor is used in order to denote an analysis which takes into account multiple software systems. The most frequent of such analyses is static analysis of the source code of the component systems of the ecosystem.
Software analysis is the process of systematically examining and evaluating software applications to assess their design, functionality, performance, and adherence to requirements. This involves reviewing code, testing the software for bugs or vulnerabilities, ensuring compliance with design specifications, and optimizing for efficiency. Software analysis helps identify potential issues early in the development cycle, improves overall quality, and ensures that the software meets the intended goals. It includes techniques like static code analysis, dynamic analysis, and performance profiling to provide insights for better software maintenance and improvement.
References
External links
European workshop on software ecosystems
Workshop on Ecosystem Architectures
Business process
Software industry | Software ecosystem | Technology,Engineering | 320 |
37,921,649 | https://en.wikipedia.org/wiki/Lithium%20iodate | Lithium iodate (LiIO3) is a negative uniaxial crystal for nonlinear, acousto-optical and piezoelectric applications. It has been utilized for 347 nm ruby lasers.
Properties
The Mohs hardness of lithium iodate is 3.5–4. Its linear thermal expansion coefficients are 2.8·10⁻⁵/°C along the a-axis and 4.8·10⁻⁵/°C along the c-axis. Its transition to the β-form begins at elevated temperature and is irreversible.
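As an illustration of what these coefficients mean, the length change on heating is ΔL = α·L·ΔT. A minimal Python sketch with an assumed crystal size and temperature rise (illustrative values only):

```python
# Linear thermal expansion: dL = alpha * L * dT.
alpha_a = 2.8e-5  # 1/degC, a-axis coefficient
alpha_c = 4.8e-5  # 1/degC, c-axis coefficient
L = 10.0          # mm, assumed crystal dimension
dT = 100.0        # degC, assumed temperature rise

print(alpha_a * L * dT)  # 0.028 mm expansion along the a-axis
print(alpha_c * L * dT)  # 0.048 mm expansion along the c-axis
```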
References
Lithium compounds
Iodates
Nonlinear optical materials | Lithium iodate | Chemistry | 120 |
37,476,422 | https://en.wikipedia.org/wiki/Vladimir%20Kanovei | Vladimir G. Kanovei (born 1951) is a Russian mathematician working at the Institute for Information Transmission Problems in Moscow, Russia. His interests include mathematical logic and foundations, as well as mathematical history.
Selected publications
Kanovei, Vladimir; Reeken, Michael; Nonstandard analysis, axiomatically. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2004. xvi+408 pp.
Kanovei, Vladimir; Borel equivalence relations. Structure and classification. University Lecture Series, 44. American Mathematical Society, Providence, RI, 2008. x+240 pp.
Kanoveĭ, V.; Reeken, M.; On Ulam's problem concerning the stability of approximate homomorphisms. (Russian) Tr. Mat. Inst. Steklova 231 (2000), Din. Sist., Avtom. i Beskon. Gruppy, 249–283; translation in Proc. Steklov Inst. Math. 2000, no. 4 (231), 238–270
Kanoveĭ, V. G.; Lyubetskiĭ, V. A.; On some classical problems in descriptive set theory. (Russian) Uspekhi Mat. Nauk 58 (2003), no. 5(353), 3--88; translation in Russian Math. Surveys 58 (2003), no. 5, 839–927
Kanoveĭ, V. G.; Reeken, M.; Some new results on the Borel irreducibility of equivalence relations. (Russian) Izv. Ross. Akad. Nauk Ser. Mat. 67 (2003), no. 1, 59–82; translation in Izv. Math. 67 (2003), no. 1, 55–76 03E15 (54H05)
Kanovei, Vladimir; Reeken, Michael; Mathematics in a nonstandard world. II. Math. Japon. 45 (1997), no. 3, 555–571.
Kanovei, Vladimir; On non-wellfounded iterations of the perfect set forcing. Journal of Symbolic Logic 64 (1999), no. 2, 551–574.
Kanovei, Vladimir; Shelah, Saharon; A definable nonstandard model of the reals. Journal of Symbolic Logic 69 (2004), no. 1, 159–164.
Kanovei, Vladimir; Reeken, Michael. Internal approach to external sets and universes. I. Bounded set theory. Studia Logica 55 (1995), no. 2, 229–257.
Kanovei, Vladimir; Reeken, Michael. Internal approach to external sets and universes. II. External universes over the universe of bounded set theory. Studia Logica 55 (1995), no. 3, 347–376.
Kanovei, Vladimir; Reeken, Michael. Internal approach to external sets and universes. III. Partially saturated universes. Studia Logica 56 (1996), no. 3, 293–322.
This series of three papers was reviewed by Karel Hrbacek.
External links
Home page
People from Odesa Oblast
Model theorists
1951 births
Living people | Vladimir Kanovei | Mathematics | 680 |
49,994,189 | https://en.wikipedia.org/wiki/Tashiro%27s%20indicator | Tashiro's indicator is a pH indicator (pH value: 4.4–6.2), mixed indicator composed of a solution of methylene blue (0.1%) and methyl red (0.03%) in ethanol or in methanol.
It can be used for the titration of ammonia in Kjeldahl analysis.
Colours
In acids: violet
At equivalence point (pH 5.2): grey
In bases: green
Methylene blue functions to change the red-yellow shift of methyl red to a more distinct violet-green shift.
See also
Litmus
pH Indicator
References
PH indicators | Tashiro's indicator | Chemistry,Materials_science | 128 |
52,846,747 | https://en.wikipedia.org/wiki/Mosaic%20coevolution | Mosaic coevolution is a theory in which geographic location and community ecology shape differing coevolution between strongly interacting species in multiple populations. These populations may be separated by space and/or time. Depending on the ecological conditions, the interspecific interactions may be mutualistic or antagonistic. In mutualisms, both partners benefit from the interaction, whereas one partner generally experiences decreased fitness in antagonistic interactions. Arms races consist of two species adapting ways to "one up" the other. Several factors affect these relationships, including hot spots, cold spots, and trait mixing. Reciprocal selection occurs when a change in one partner puts pressure on the other partner to change in response. Hot spots are areas of strong reciprocal selection, while cold spots are areas with no reciprocal selection or where only one partner is present. The three constituents of geographic structure that contribute to this particular type of coevolution are: natural selection in the form of a geographic mosaic, hot spots often surrounded by cold spots, and trait remixing by means of genetic drift and gene flow. Mosaic, along with general coevolution, most commonly occurs at the population level and is driven by both the biotic and the abiotic environment. These environmental factors can constrain coevolution and affect how far it can escalate.
The geographical mosaic theory was first described by Ehrlich and Raven in 1964 after studying butterflies that coevolve with plants. However, the idea of coevolution itself goes all the way back to Darwin.
Examples
Mutualisms
A commonly used example of mutualism in mosaic coevolution is that of the plant and pollinator. Anderson and Johnson studied the relationship between the length of the proboscis of the long-tongued fly (P. ganglbaueri) and the corolla tube length of Zaluzianskya microsiphon, a flowering plant endemic to South Africa. They suspected, as Darwin did in 1862, that flowers would adapt to become longer in order to force the fly to insert more of its body into the flower to reach the nectar, causing the fly's body to come in contact with the flower's pollen. The two characteristics were measured at several different geographic locations, and it was found that the length of the fly's proboscis exerted strong selective pressure on the corolla length of the flower. Longer proboscises were in turn selected for where flowers were longer, because the flowers are the flies' primary food source.
Coevolutionary arms races
Antagonistic interactions (e.g. host-parasite and predator-prey relationships) can often result in coevolutionary trait escalation (i.e. arms races). For example, prey and predator may both evolve faster running speed in order to maximize their fitness.
The plant species Camellia japonica (the Japanese camellia) and its seed predator Curculio camelliae (the camellia weevil) are an example of a coevolutionary arms race. The length of the weevil's rostrum and the thickness of the fruit's pericarp are correlated, meaning that a change in one character prompts a change in the other. The weevil will use its rostrum to burrow into the center of the camellia fruit seeking a place to lay eggs, as the weevil larva feed exclusively on the camellia seeds. This is a main cause of seed damage in the Japanese camellia and, in order to better protect its seeds, the plant will evolve to grow a thicker pericarp. In some areas, the pericarp of these fruits was found to be remarkably woody. The pericarp thickness of the camellia fruit was observed to be thicker in more southern locations than in the north. The areas of Hanyama and Yahazu, Japan are just under nine miles away from each other, but there was an 8 mm difference in pericarp thickness in the camellia populations sampled there. The length of the weevil's rostrum was found to be 5mm longer in the area with thicker fruit. This shows that the survival of the Japanese camellia seeds in the south is dependent upon the thick pericarp as a form of protection. However, northern areas were found to have fruit with infested seeds regardless of thickness of the pericarp. This suggests that the plants in the north were more susceptible to weevil attacks and the two traits are not as strongly correlated as they were in southern areas.
References
Evolutionary biology | Mosaic coevolution | Biology | 929 |
2,177,591 | https://en.wikipedia.org/wiki/Aft-crossing%20trajectory | In 2005, a new trajectory that an air-launched rocket could take to put satellites into orbit was tested. Until this time, launch vehicles such as the Pegasus rocket, or rocket planes such as the X-1, X-15, or SpaceShipOne, which were carried under an aircraft pointing in the same direction as the fuselage, would have their engines ignited either just before being air-dropped or a few seconds afterward. They would then be expected to accelerate and climb in front of the carrier aircraft, crossing its flight path. This was considered dangerous due to the potential for a collision between the rocket and the carrier aircraft.
The aft-crossing trajectory is an alternate flight path for an air-launched rocket. The rocket's rotation (induced by its deployment from the aircraft) is slowed by a small parachute attached to its tail, and the rocket is ignited once the carrier aircraft has passed it. Ignition occurs before the rocket is pointing fully vertical, but it pitches up to do so as it accelerates, passing behind the carrier aircraft.
The principal advantage of this method is its safety for the crew of the carrier aircraft.
See also
AirLaunch LLC
t/Space
References
Aviation Week & Space Technology June 27, 2005, page 32.
Spaceflight | Aft-crossing trajectory | Astronomy | 246 |
59,442 | https://en.wikipedia.org/wiki/Baryte | Baryte, barite or barytes is a mineral consisting of barium sulfate (BaSO4). Baryte is generally white or colorless, and is the main source of the element barium. The baryte group consists of baryte, celestine (strontium sulfate), anglesite (lead sulfate), and anhydrite (calcium sulfate). Baryte and celestine form a solid solution, (Ba,Sr)SO4.
Names and history
The radiating form, sometimes referred to as Bologna Stone, attained some notoriety among alchemists for specimens found in the 17th century near Bologna by Vincenzo Casciarolo. These became phosphorescent upon being calcined.
Carl Scheele determined that baryte contained a new element in 1774, but could not isolate barium, only barium oxide. Johan Gottlieb Gahn also isolated barium oxide two years later in similar studies. Barium was first isolated by electrolysis of molten barium salts in 1808 by Sir Humphry Davy in England.
The American Petroleum Institute specification API 13/ISO 13500, which governs baryte for drilling purposes, does not refer to any specific mineral, but rather a material that meets that specification. In practice, however, this is usually the mineral baryte.
The term "primary barytes" refers to the first marketable product, which includes crude baryte (run of mine) and the products of simple beneficiation methods, such as washing, jigging, heavy media separation, tabling, and flotation. Most crude baryte requires some upgrading to minimum purity or density. Baryte that is used as an aggregate in a "heavy" cement is crushed and screened to a uniform size. Most baryte is ground to a small, uniform size before it is used as a filler or extender, an addition to industrial products, in the production of barium chemicals or as a weighting agent in petroleum well drilling mud.
Name
The name baryte is derived from the Ancient Greek βαρύς (barús), 'heavy'. The American spelling is barite. The International Mineralogical Association initially adopted "barite" as the official spelling, but recommended adopting the older "baryte" spelling later. This move was controversial and was notably ignored by American mineralogists.
Other names have been used for baryte, including barytine, barytite, barytes, heavy spar, tiff, and blanc fixe.
Mineral associations and locations
Baryte occurs in many depositional environments, and is deposited through many processes including biogenic, hydrothermal, and evaporation, among others. Baryte commonly occurs in lead-zinc veins in limestones, in hot spring deposits, and with hematite ore. It is often associated with the minerals anglesite and celestine. It has also been identified in meteorites.
Baryte has been found at locations in Australia, Brazil, Nigeria, Canada, Chile, China, India, Pakistan, Germany, Greece, Guatemala, Iran, Ireland (where it was mined on Benbulben), Liberia, Mexico, Morocco, Peru, Romania (Baia Sprie), Turkey, South Africa (Barberton Mountain Land), Thailand, United Kingdom (Cornwall, Cumbria, Dartmoor/Devon, Derbyshire, Durham, Shropshire, Perthshire, Argyllshire, and Surrey) and in the US from Cheshire, Connecticut, De Kalb, New York, and Fort Wallace, New Mexico. It is mined in Arkansas, Connecticut, Virginia, North Carolina, Georgia, Tennessee, Kentucky, Nevada, and Missouri.
The global production of baryte in 2019 was estimated to be around 9.5 million metric tons, down from 9.8 million metric tons in 2012. The major barytes producers (in thousand tonnes, data for 2017) are as follows: China (3,600), India (1,600), Morocco (1,000), Mexico (400), United States (330), Iran (280), Turkey (250), Russia (210), Kazakhstan (160), Thailand (130) and Laos (120).
The main users of barytes in 2017 were (in million tonnes) US (2.35), China (1.60), Middle East (1.55), the European Union and Norway (0.60), Russia and CIS (0.5), South America (0.35), Africa (0.25), and Canada (0.20). 70% of barytes was destined for oil and gas well drilling muds, 15% for barium chemicals, 14% for filler applications in the automotive, construction, and paint industries, and 1% for other applications.
Natural baryte formed under hydrothermal conditions may be associated with quartz or silica. In hydrothermal vents, the baryte-silica mineralisation can also be accompanied by precious metals.
Information about the mineral resource base of baryte ores is presented in some scientific articles.
Uses
In oil and gas drilling
Worldwide, 69–77% of baryte is used as a weighting agent for drilling fluids in oil and gas exploration to suppress high formation pressures and prevent blowouts. As a well is drilled, the bit passes through various formations, each with different characteristics. The deeper the hole, the more baryte is needed as a percentage of the total mud mix. An additional benefit of baryte is that it is non-magnetic and thus does not interfere with magnetic measurements taken in the borehole, either during logging-while-drilling or in separate drill hole logging. Baryte used for drilling petroleum wells can be black, blue, brown or gray depending on the ore body. The baryte is finely ground so that at least 97% of the material, by weight, can pass through a 200-mesh (75 μm) screen, and no more than 30%, by weight, can be less than 6 μm diameter. The ground baryte also must be dense enough so that its specific gravity is 4.2 or greater, soft enough to not damage the bearings of a tricone drill bit, chemically inert, and containing no more than 250 milligrams per kilogram of soluble alkaline salts. In August 2010, the American Petroleum Institute published specifications to modify the 4.2 drilling grade standards for baryte to include 4.1 SG materials.
In oxygen and sulfur isotopic analysis
In the deep ocean, away from continental sources of sediment, pelagic baryte precipitates and forms a significant amount of the sediments. Since baryte contains oxygen, systematics in the δ18O of these sediments have been used to help constrain paleotemperatures for oceanic crust.
The variations in sulfur isotopes (34S/32S) are being examined in evaporite minerals containing sulfur (e.g. baryte) and carbonate associated sulfates (CAS) to determine past seawater sulfur concentrations which can help identify specific depositional periods such as anoxic or oxic conditions. The use of sulfur isotope reconstruction is often paired with oxygen when a molecule contains both elements.
Geochronological dating
Dating the baryte in hydrothermal vents has been one of the major methods to determine their ages. Common methods to date hydrothermal baryte include radiometric dating and electron spin resonance dating.
Other uses
Baryte is used in added-value applications which include filler in paint and plastics, sound reduction in engine compartments, coat of automobile finishes for smoothness and corrosion resistance, friction products for automobiles and trucks, radiation shielding concrete, glass ceramics, and medical applications (for example, a barium meal before a contrast CT scan). Baryte is supplied in a variety of forms and the price depends on the amount of processing; filler applications commanding higher prices following intense physical processing by grinding and micronising, and there are further premiums for whiteness and brightness and color. It is also used to produce other barium chemicals, notably barium carbonate which is used for the manufacture of LED glass for television and computer screens (historically in cathode-ray tubes); and for dielectrics.
Historically, baryte was used for the production of barium hydroxide for sugar refining, and as a white pigment for textiles, paper, and paint.
Although baryte contains the toxic alkaline earth metal barium, it is not detrimental to human health, animals, plants or the environment, because barium sulfate is extremely insoluble in water.
It is also sometimes used as a gemstone.
See also
Hokutolite
Rose rock
References
Further reading
Barium minerals
Sulfate minerals
Evaporite
Gemstones
Industrial minerals
Luminescent minerals
Orthorhombic minerals
Baryte group
Minerals in space group 62 | Baryte | Physics,Chemistry | 1,794 |
8,924,477 | https://en.wikipedia.org/wiki/Ralph%20Benjamin | Ralph Benjamin (17 November 1922 – 7 May 2019) was a British scientist and electrical engineer.
Biography
Benjamin was born in Darmstadt, Germany. He attended boarding school in Switzerland from 1937, and was sent to England in 1939 as a refugee. He studied at Ellesmere College and at Imperial College London where he graduated with a 1st class honours in Electronic Engineering. He joined the Royal Naval Scientific Service in 1944, beginning his career at the Admiralty Surface Weapons Establishment (ASWE).
Benjamin invented the first trackball, called the "roller ball", in 1946; it was patented in 1947. Between 1947 and 1957 he developed the first force-wide integrated Command and Control System. This included patenting the use of an interlaced cursor controlled by a tracker ball to link displays to stored digital information, the first-ever digital compression of video data, and the creation of the navy's first digital data link and network, which is still in use NATO-wide as "Link 11".
NATO
During the fifties and sixties he was a leading member of national Advanced Computer Techniques Project and in 1961 he was acting international chairman NATO "Von Karman" studies on "Man and Machine" and "Command and Control".
From 1961 to 1964 he was Head of Research and Deputy Director, Admiralty Surface Weapons Establishment then in 1964 he became Chief Scientist Admiralty Underwater Weapons Establishment (AUWE), combined with Director, AUWE, and MoD Director Underwater Weapons R&D – posts he held until 1971. Original publications during this time resulted in a DSc and he published a textbook on "Modulation, Resolution and Signal processing" that was later unofficially translated into Russian. He also trained as a navy diver to better understand some of the challenges faced by the Royal Navy.
GCHQ
In 1971 he became Chief Scientist, Chief Engineer and Superintending Director at GCHQ where he stayed until 1982. He was responsible for fast-track Research, Development, Procurement, and Deployment and use of equipment and techniques for Signals Intelligence. During most of this time he was also Chief Scientific Advisor to the Intelligence Services and national Co-ordinator Intelligence R&D.
At GCHQ, Benjamin played an important role in the original development of "non-secret cryptography", later independently discovered by Rivest, Shamir, and Adleman and termed public-key cryptography.
Teaching
As a visiting professor at the University of Surrey between 1972 and 1978 he helped to start the Surrey University mini-satellite programme.
Following retirement from the civil service he became Head of Communications Techniques & Networks at the Supreme Headquarters Allied Powers Europe (SHAPE) Technical Centre from 1982 to 1987. Graduate NATO Staff College, 1983.
On his return to England he became a visiting Research Professor at University College, London, and since 1993, Bristol University. Until recently he was also a visiting professor at Imperial College, the Open University, and the Royal Military College of Science, and Member of Court at Brunel University. He also had substantial involvement in Defence Scientific Advisory Council, DSAC. He was given an honorary DEng by Bristol University in 2000. He has won the IET Heinrich Hertz premium twice, and also the Marconi premium and the Clarke Maxwell premium. In 2006 he was given the Achievement in Electronics Award and also in 2006 the Oliver Lodge Medal for IT.
His autobiography, called Five Lives in One, was published in 1996.
He died on 7 May 2019 at the age of 96.
References
1922 births
2019 deaths
Engineers from Darmstadt
People from the People's State of Hesse
People educated at Ellesmere College
Alumni of Imperial College London
20th-century British engineers
Jewish emigrants from Nazi Germany to the United Kingdom
GCHQ people
Fellows of the Institution of Engineering and Technology
Fellows of the Royal Academy of Engineering
Companions of the Order of the Bath
Admiralty personnel of World War II | Ralph Benjamin | Engineering | 767 |
75,331,575 | https://en.wikipedia.org/wiki/Schliemann%27s%20Trench | Schliemann's Trench (sometimes referred to as Schliemann's Great Trench) is the name commonly given to a gash cut into the side of Hisarlik, Turkey, between 1871 and 1890 by Heinrich Schliemann in his quest to find the ruins of Troy. By digging this trench, Schliemann destroyed a large portion of the site.
Excavation of the trench
In October–November 1871, Heinrich Schliemann "officially" began excavating the site by digging into the northern side of Hisarlik. Schliemann returned to the site in April 1872 with battering rams and windlasses, excavating a wide area between the trench he had dug in 1871 and trenches dug earlier by Frank Calvert. Around this time, Schliemann also widened his north–south trench, extending it clear through the southern end of the hill. In the middle of this north–south trench, Schliemann dug further down until he hit bedrock, uncovering in the process the remnants of two separate citadel walls (IIb and IIc), which he believed were the "Tower of Ilion".
In February 1873, Schliemann continued excavations in the north-eastern part of Hisarlik and started new excavations on the hill's southeast side. During this season, Schliemann discovered the southwestern part of Troy II's citadel walls as well as Gate FM, its associated ramp, and buildings that Schliemann believed to be the remnants of Priam's palace. Schliemann would return to the site in 1878 and 1879 (during which he focused most of his attention on clearing the middle of the hill and deepening his north–south trench), 1882 (during which, among other things, he continued to deepen the north–south trench), and 1890 (when he focused most of his attention on excavating the exposed parts of the Troy II citadel).
After Schliemann's excavations ceased, the deep north–south trench became a notable feature of the site, and it is still visible to this day. The trench is often cited as an example of Schliemann's inexperience, for in digging through Hisarlik until he hit bedrock, Schliemann destroyed much of the site, thus "mak[ing] a hugely complex site even more so".
References
Bibliography
1890 establishments in the Ottoman Empire
Buildings and structures completed in 1890
Troy
Ancient Greek archaeological sites in Turkey
Archaeological sites in the Marmara region
Heinrich Schliemann
Earth structures | Schliemann's Trench | Engineering | 513 |
14,574,654 | https://en.wikipedia.org/wiki/OPN1MW | Green-sensitive opsin is a protein that in humans is encoded by the OPN1MW gene.
OPN1MW2 is a similar opsin.
The OPN1MW gene provides instructions for making an opsin pigment that is more sensitive to light in the middle of the visible spectrum (yellow/green light).
See also
Opsin
OPN1LW
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Red-Green Color Vision Defects
G protein-coupled receptors
Color vision | OPN1MW | Chemistry | 113 |
59,941,185 | https://en.wikipedia.org/wiki/Hill%20Top%20Colliery | The Hill Top Colliery in Sharneyford between Bacup and Todmorden was, until 2014, the last coal mine still in operation in Lancashire.
Foundation
The Hill Top Colliery was opened in 1948, when the National Coal Board drove two drifts down into the Union coal seam. Under the National Coal Board it employed, between 1950 and 1965, an average of 101 men underground and 9 on the surface. At its peak, about 200 miners worked there.
Drifts
The Intake Drift, at a grade of 1 in 1½ (66%), was 78 meters long, while the less steep Return Drift was 339 meters long at a grade of 1 in 4 (25%). The rails of the plateways consisted of L-shaped steel profiles. The carts of the winch-operated incline had flangeless metal disc wheels and, because of the low height of the drifts, were also used for passenger transport. They were pushed by hand on level ground and moved on the gradients by winch. Some of the numerous carts were probably acquired second-hand from the disused Old Meadows Colliery.
Employees
The nearby Moorfield Colliery in Accrington was closed shortly after nationalization in 1947, after which many of its miners came to work at the Hill Top Colliery. For just under 20 years they produced 400 tons of coal per week, until the thick Union seam was exhausted in 1966. The coal had a relatively high sulfur content and was therefore sold mainly to the chemical industry in Widnes.
Drainage
Powerful pumps, running day and night, were supplied with electrical power via an overhead line belonging to the mine. They pumped just over 1,000 liters (250 gallons) per minute out of the Hill Top Colliery and across the watershed of Heald Moor into the Irwell Valley, rather than via Greens Clough into the Yorkshire Calder, which would have been the far cheaper route.
Temporary closure and reopening
In 1966, the mine was closed after the coal deposits appeared to be exhausted. The Grimebridge Colliery Co Ltd, led by miner William (Billy) Clayton and his business partner Rodney Mitchall, later obtained a license to work Hill Top Colliery and reopened the pit. In the summer of 1997, two drifts were dug into the large seam between the previous tunnels and an open-pit mine on Heald Moor. Planning permission for the construction of the two new drifts had been granted in August 1989, but the start of construction was delayed, so an application for renewal of this permit was made and approved in 1997. In September 2005, a permit was granted for the continuation of mining until August 2011.
Coal deposits
The coal reserves extend underground over a working area of about 9 hectares. In 1997, the coal authority granted a license to mine 110,000 tonnes of coal. Although about 150,000 tonnes of coal were still available, fewer than 50 tonnes were mined in 2003 due to staff shortages. Four miners, who worked only mornings, extracted just 2,900 tonnes of coal up to October 2011.
Change of ownership
Billy Clayton died unexpectedly of a heart attack on 16 May 2008, while bringing his grandson back from school. His son, also named Billy Clayton, then took over the operation of the mine until it was closed in 2014.
Health and safety
In December 2016, the Health and Safety Executive issued a prohibition notice concerning the risk of water ingress and an improvement notice requiring regular expert examination of the compressor.
Coal balls
In Lancashire, especially in the Burnley area, peat concretions are known as coal balls or colloquially as Burnley bobbers. They are particularly common in the seams of the Upper Foot Mine and Lower Mountain Mine in East Lancashire, as well as in the mines on Todmorden Moor at the eastern edge of this coalfield. Due to their hardness, they often damaged mining equipment such as the picks, drums, and cutting jibs used in the coal mines of North East Lancashire. Some coal balls were collected from the waste heaps by locals for their petrifactions; others are still there.
References
External links
The End of an Era: Hilltop Colliery – Bacup, 1997–2014, The Last Coal Mine in Lancashire, b3tarev3, 15. April 2014
Hilltop Colliery – Lancashire: A rare glimpse into Lancashire's last working coal mine.
The Geograph: Hill Top Colliery
Lancashire Colliers (Grimebridge) Granada Video
Coal mines in England
Mining railways
Lancashire
Plateway | Hill Top Colliery | Engineering | 922 |
3,206,060 | https://en.wikipedia.org/wiki/Syntax%20%28programming%20languages%29 | In computer science, the syntax of a computer language is the rules that define the combinations of symbols that are considered to be correctly structured statements or expressions in that language. This applies both to programming languages, where the document represents source code, and to markup languages, where the document represents data.
The syntax of a language defines its surface form. Text-based computer languages are based on sequences of characters, while visual programming languages are based on the spatial layout and connections between symbols (which may be textual or graphical). Documents that are syntactically invalid are said to have a syntax error. When designing the syntax of a language, a designer might start by writing down examples of both legal and illegal strings, before trying to figure out the general rules from these examples.
Syntax therefore refers to the form of the code, and is contrasted with semantics – the meaning. In processing computer languages, semantic processing generally comes after syntactic processing; however, in some cases, semantic processing is necessary for complete syntactic analysis, and these are done together or concurrently. In a compiler, the syntactic analysis comprises the frontend, while the semantic analysis comprises the backend (and middle end, if this phase is distinguished).
Levels of syntax
Computer language syntax is generally distinguished into three levels:
Words – the lexical level, determining how characters form tokens;
Phrases – the grammar level, narrowly speaking, determining how tokens form phrases;
Context – determining what objects or variable names refer to, whether types are valid, etc.
Distinguishing in this way yields modularity, allowing each level to be described and processed separately and often independently.
First, a lexer turns the linear sequence of characters into a linear sequence of tokens; this is known as "lexical analysis" or "lexing".
Second, the parser turns the linear sequence of tokens into a hierarchical syntax tree; this is known as "parsing" narrowly speaking. This ensures that the sequence of tokens conforms to the formal grammar of the programming language. The parsing stage itself can be divided into two parts: the parse tree, or "concrete syntax tree", which is determined by the grammar, but is generally far too detailed for practical use, and the abstract syntax tree (AST), which simplifies this into a usable form. The AST and contextual analysis steps can be considered a form of semantic analysis, as they are adding meaning and interpretation to the syntax, or alternatively as informal, manual implementations of syntactical rules that would be difficult or awkward to describe or implement formally.
Thirdly, the contextual analysis resolves names and checks types. This modularity is sometimes possible, but in many real-world languages an earlier step depends on a later step – for example, the lexer hack in C is because tokenization depends on context. Even in these cases, syntactical analysis is often seen as approximating this ideal model.
The levels generally correspond to levels in the Chomsky hierarchy. Words are in a regular language, specified in the lexical grammar, which is a Type-3 grammar, generally given as regular expressions. Phrases are in a context-free language (CFL), generally a deterministic context-free language (DCFL), specified in a phrase structure grammar, which is a Type-2 grammar, generally given as production rules in Backus–Naur form (BNF). Phrase grammars are often specified in much more constrained grammars than full context-free grammars, in order to make them easier to parse; while the LR parser can parse any DCFL in linear time, the simple LALR parser and even simpler LL parser are more efficient, but can only parse grammars whose production rules are constrained. In principle, contextual structure can be described by a context-sensitive grammar, and automatically analyzed by means such as attribute grammars, though, in general, this step is done manually, via name resolution rules and type checking, and implemented via a symbol table which stores names and types for each scope.
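As a minimal sketch of the scoped symbol table mentioned above, the following Python fragment is illustrative only; the class and method names are assumptions, not drawn from any particular compiler:

class SymbolTable:
    def __init__(self, parent=None):
        self.parent = parent   # enclosing scope, if any
        self.names = {}        # name -> type

    def declare(self, name, typ):
        self.names[name] = typ

    def resolve(self, name):
        # Name resolution: search this scope, then each enclosing scope.
        scope = self
        while scope is not None:
            if name in scope.names:
                return scope.names[name]
            scope = scope.parent
        raise NameError(f"undeclared variable: {name}")

globals_ = SymbolTable()
globals_.declare("x", "int")
inner = SymbolTable(parent=globals_)
print(inner.resolve("x"))      # 'int', found in the enclosing scope

Name resolution then amounts to walking outward through enclosing scopes until a declaration is found; failure to find one is how an undeclared-variable error is detected during contextual analysis.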
Tools have been written that automatically generate a lexer from a lexical specification written in regular expressions and a parser from the phrase grammar written in BNF: this allows one to use declarative programming, rather than procedural or functional programming. A notable example is the lex-yacc pair. These automatically produce a concrete syntax tree; the parser writer must then manually write code describing how this is converted to an abstract syntax tree. Contextual analysis is also generally implemented manually. Despite the existence of these automatic tools, parsing is often implemented manually, for various reasons – perhaps the phrase structure is not context-free, or an alternative implementation improves performance or error-reporting, or allows the grammar to be changed more easily. Parsers are often written in functional languages, such as Haskell, or in scripting languages, such as Python or Perl, or in C or C++.
Examples of errors
As an example, (add 1 1) is a syntactically valid Lisp program (assuming the 'add' function exists, else name resolution fails), adding 1 and 1. However, the following are invalid:
(_ 1 1) lexical error: '_' is not valid
(add 1 1 parsing error: missing closing ')'
The lexer is unable to identify the first error – all it knows is that, after producing the token LEFT_PAREN '(', the remainder of the program is invalid, since no word rule begins with '_'. The second error is detected at the parsing stage: the parser has identified the "list" production rule due to the '(' token (as the only match), and thus can give an error message; in general it may be ambiguous.
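This division of labour can be made concrete with a minimal hand-written lexer. The following Python sketch is illustrative only – the token names and rules are assumptions modelled on the Lisp examples above, not taken from any real implementation:

import re

# Illustrative token rules: each token type is a regular expression,
# tried in order at the current position.
TOKEN_RULES = [
    ("LEFT_PAREN",  r"\("),
    ("RIGHT_PAREN", r"\)"),
    ("NUMBER",      r"[+-]?[0-9]+"),
    ("SYMBOL",      r"[A-Za-z][A-Za-z0-9]*"),
    ("WHITESPACE",  r"\s+"),          # recognised but not emitted
]

def lex(text):
    # Turn a character sequence into a linear sequence of tokens.
    tokens, pos = [], 0
    while pos < len(text):
        for name, pattern in TOKEN_RULES:
            m = re.match(pattern, text[pos:])
            if m:
                if name != "WHITESPACE":
                    tokens.append((name, m.group()))
                pos += m.end()
                break
        else:
            # No word rule matches, e.g. '_': a lexical error.
            raise SyntaxError(f"lexical error at position {pos}: {text[pos]!r}")
    return tokens

print(lex("(add 1 1)"))

Running lex("(_ 1 1)") raises the SyntaxError at position 1, mirroring the lexical error described above, while the missing ')' in "(add 1 1" is invisible to the lexer and must be caught by the parser.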
Type errors and undeclared variable errors are sometimes considered to be syntax errors when they are detected at compile-time (which is usually the case when compiling strongly-typed languages), though it is common to classify these kinds of error as semantic errors instead.
As an example, the Python code
'a' + 1
contains a type error because it adds a string literal to an integer literal. Type errors of this kind can be detected at compile-time: They can be detected during parsing (phrase analysis) if the compiler uses separate rules that allow "integerLiteral + integerLiteral" but not "stringLiteral + integerLiteral", though it is more likely that the compiler will use a parsing rule that allows all expressions of the form "LiteralOrIdentifier + LiteralOrIdentifier" and then the error will be detected during contextual analysis (when type checking occurs). In some cases this validation is not done by the compiler, and these errors are only detected at runtime.
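Detection during contextual analysis can be illustrated with a toy type checker over a two-node AST. This Python sketch assumes just two literal types and is not how any production compiler is organized:

class Literal:
    def __init__(self, value):
        self.value = value

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

def check(node):
    # Contextual analysis: compute each node's type, rejecting mismatches.
    if isinstance(node, Literal):
        return type(node.value).__name__          # 'int' or 'str'
    left, right = check(node.left), check(node.right)
    if left != right:
        raise TypeError(f"cannot add {left} and {right}")
    return left

try:
    check(Add(Literal('a'), Literal(1)))          # models 'a' + 1
except TypeError as e:
    print(e)                                      # cannot add str and int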
In a dynamically typed language, where type can only be determined at runtime, many type errors can only be detected at runtime. For example, the Python code
a + b
is syntactically valid at the phrase level, but the correctness of the types of a and b can only be determined at runtime, as variables do not have types in Python, only values do. Whereas there is disagreement about whether a type error detected by the compiler should be called a syntax error (rather than a static semantic error), type errors which can only be detected at program execution time are always regarded as semantic rather than syntax errors.
Syntax definition
The syntax of textual programming languages is usually defined using a combination of regular expressions (for lexical structure) and Backus–Naur form (a metalanguage for grammatical structure) to inductively specify syntactic categories (nonterminal) and terminal symbols. Syntactic categories are defined by rules called productions, which specify the values that belong to a particular syntactic category. Terminal symbols are the concrete characters or strings of characters (for example keywords such as define, if, let, or void) from which syntactically valid programs are constructed.
Syntax can be divided into context-free syntax and context-sensitive syntax. Context-free syntax are rules directed by the metalanguage of the programming language. These would not be constrained by the context surrounding or referring that part of the syntax, whereas context-sensitive syntax would.
A language can have different equivalent grammars, such as equivalent regular expressions (at the lexical levels), or different phrase rules which generate the same language. Using a broader category of grammars, such as LR grammars, can allow shorter or simpler grammars compared with more restricted categories, such as LL grammar, which may require longer grammars with more rules. Different but equivalent phrase grammars yield different parse trees, though the underlying language (set of valid documents) is the same.
Example: Lisp S-expressions
Below is a simple grammar, defined using the notation of regular expressions and Extended Backus–Naur form. It describes the syntax of S-expressions, a data syntax of the programming language Lisp, which defines productions for the syntactic categories expression, atom, number, symbol, and list:
expression = atom | list
atom = number | symbol
number = [+-]?['0'-'9']+
symbol = ['A'-'Z']['A'-'Z''0'-'9'].*
list = '(', expression*, ')'
This grammar specifies the following:
an expression is either an atom or a list;
an atom is either a number or a symbol;
a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;
a symbol is a letter followed by zero or more of any characters (excluding whitespace); and
a list is a matched pair of parentheses, with zero or more expressions inside it.
Here the decimal digits, upper- and lower-case characters, and parentheses are terminal symbols.
The following are examples of well-formed token sequences in this grammar: '12345', '()', '(A B C232 (1))'
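A grammar this small can be parsed by hand. The following Python sketch is a recursive-descent parser for it, combining lexing and parsing for brevity; it is an illustration only, and its function and variable names are assumptions rather than part of any Lisp implementation:

def parse_expression(text, pos=0):
    # Parse one expression starting at pos; return (value, next_pos).
    while pos < len(text) and text[pos].isspace():
        pos += 1
    if pos >= len(text):
        raise SyntaxError("unexpected end of input")
    if text[pos] == '(':                  # list = '(', expression*, ')'
        pos += 1
        items = []
        while True:
            while pos < len(text) and text[pos].isspace():
                pos += 1
            if pos >= len(text):
                raise SyntaxError("missing closing ')'")
            if text[pos] == ')':
                return items, pos + 1
            item, pos = parse_expression(text, pos)
            items.append(item)
    start = pos                           # atom = number | symbol
    while pos < len(text) and not text[pos].isspace() and text[pos] not in '()':
        pos += 1
    token = text[start:pos]
    try:
        return int(token), pos            # number
    except ValueError:
        return token, pos                 # symbol

print(parse_expression("(A B C232 (1))")[0])   # ['A', 'B', 'C232', [1]]

Note that parse_expression("(add 1 1") raises "missing closing ')'", matching the parse error discussed in the examples of errors above.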
Complex grammars
The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The phrase grammar of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars, though the overall syntax is context-sensitive (due to variable declarations and nested scopes), hence Type-1. However, there are exceptions, and for some languages the phrase grammar is Type-0 (Turing-complete).
In some languages like Perl and Lisp the specification (or implementation) of the language allows constructs that execute during the parsing phase. Furthermore, these languages have constructs that allow the programmer to alter the behavior of the parser. This combination effectively blurs the distinction between parsing and execution, and makes syntax analysis an undecidable problem in these languages, meaning that the parsing phase may not finish. For example, in Perl it is possible to execute code during parsing using a BEGIN statement, and Perl function prototypes may alter the syntactic interpretation, and possibly even the syntactic validity of the remaining code. Colloquially this is referred to as "only Perl can parse Perl" (because code must be executed during parsing, and can modify the grammar), or more strongly "even Perl cannot parse Perl" (because it is undecidable). Similarly, Lisp macros introduced by the defmacro syntax also execute during parsing, meaning that a Lisp compiler must have an entire Lisp run-time system present. In contrast, C macros are merely string replacements, and do not require code execution.
Syntax versus semantics
The syntax of a language describes the form of a valid program, but does not provide any information about the meaning of the program or the results of executing that program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Valid syntax must be established before semantics can make meaning out of it. Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules, and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.
Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false:
"Colorless green ideas sleep furiously." is grammatically well formed but has no generally accepted meaning.
"John is a married bachelor." is grammatically well formed but expresses a meaning that cannot be true.
The following C language fragment is syntactically correct, but performs an operation that is not semantically defined (because p is a null pointer, the operations p->real and p->im have no meaning):
complex *p = NULL;
complex abs_p = sqrt (p->real * p->real + p->im * p->im);
As a simpler example,
int x;
printf("%d", x);
is syntactically valid, but not semantically defined, as it uses an uninitialized variable. Even though compilers for some programming languages (e.g., Java and C#) would detect uninitialized variable errors of this kind, they should be regarded as semantic errors rather than syntax errors.
See also
Naming convention (programming)
Comparison of programming languages (syntax)
To quickly compare syntax of various programming languages, take a look at the list of "Hello, World!" program examples:
Prolog syntax and semantics
Perl syntax
PHP syntax and semantics
C syntax
C++ syntax
Java syntax
JavaScript syntax
Python syntax and semantics
Lua syntax
Haskell syntax
References
External links
Various syntactic constructs used in computer programming languages
Programming language topics
Source code | Syntax (programming languages) | Engineering | 2,953 |
17,710,932 | https://en.wikipedia.org/wiki/WASP-14 | WASP-14 or BD+22 2716 is a star in the constellation Boötes. The SuperWASP project has observed and classified this star as a variable star, perhaps due to the eclipsing planet.
Planetary system
WASP-14b is an extrasolar planet discovered in 2008. This is one of the densest exoplanets known.
Its radius best fits the model of Fortney.
See also
SuperWASP
List of extrasolar planets
References
External links
Image WASP-14
Boötes
F-type main-sequence stars
Planetary transit variables
Planetary systems with one confirmed planet
J14330635+2153409
14
Durchmusterung objects | WASP-14 | Astronomy | 137 |
14,836,599 | https://en.wikipedia.org/wiki/Hydroextractor | Hydroextractors are machines which are used in the textile processing industry. These are mainly centrifuges. The wet material is placed in the extractor, which has a wall of perforated metal, generally stainless steel. The internal drum rotates at high speed, throwing out the water contained in the material. The use of the hydroextractor significantly reduces the energy required to dry any material. Hydroextractors work by centrifugal force: the spinning drum subjects the wet material to an acceleration many times that of gravity, which drives the water out through the perforated wall. The water is thus separated and the product is obtained in a nearly dry form.
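The acceleration at the drum wall is commonly expressed as the relative centrifugal force (RCF), the ratio of the centripetal acceleration to gravity; the figures below are illustrative, not taken from any particular machine:

\mathrm{RCF} = \frac{\omega^{2} r}{g} = \frac{(2\pi N/60)^{2}\, r}{9.81\ \mathrm{m/s^{2}}}

For example, a drum of radius r = 0.5 m spinning at N = 1000 rpm gives ω ≈ 104.7 rad/s, so RCF ≈ (104.7² × 0.5)/9.81 ≈ 560, i.e., roughly 560 times gravity acting to expel the water.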
Centrifuges | Hydroextractor | Chemistry,Engineering | 126 |
65,355,102 | https://en.wikipedia.org/wiki/Cure%20Rare%20Disease | Cure Rare Disease is a non-profit biotechnology company based in Boston, Massachusetts that is working to create novel therapeutics using gene therapy, gene editing (CRISPR technology) and antisense oligonucleotides to treat people impacted by rare and ultra-rare genetic neuromuscular conditions.
History
Richard Horgan founded Terry's Foundation for Muscular Dystrophy in 2017, which became Cure Rare Disease in 2018, in order to develop a cure for Duchenne muscular dystrophy for his brother who has been battling the disease since childhood. Leveraging his network from Harvard Business School, Horgan formed a collaboration consisting of leading researchers and clinicians around the country to develop this cure for his brother, and eventually founded Cure Rare Disease.
Horgan first connected with a scientist at Boston Children's Hospital, Dr. Timothy Yu, who had just successfully created a custom drug, using antisense oligonucleotide (ASO) technology, for a girl with the neurodegenerative condition Batten disease. Horgan's brother's mutation is not amenable to ASO technology, so Horgan adapted the process, using CRISPR instead in the attempt to cure his brother.
This collaboration has expanded over the past three years and has led to the addition of notable researchers and institutions collaborating with Cure Rare Disease on their mission to treat rare disease.
Research
There are currently three drugs approved by the FDA for Duchenne muscular dystrophy, treating patients with mutations of the dystrophin gene involving exons 51, 53, and 45. However, people with DMD have mutations impacting different exons of the gene, so these drugs do not work for all patients.
Cure Rare Disease is developing novel therapeutics using gene replacement, gene editing (CRISPR gene-editing) and antisense oligonucleotide technologies. To systemically deliver a subset of therapeutics, including CRISPR, the therapeutic is inserted into an adeno-associated virus (AAV). The drug developed for Horgan's brother used a CRISPR transcriptional activator, which functions to upregulate an alternative isoform of dystrophin. Because the CRISPR activation technology does not induce a double-stranded cut, but rather acts to upregulate the target of interest, there is less risk of introducing an off-target genetic mutation. Through the collaboration with Cure Rare Disease, researchers at Charles River Laboratories, headquartered in Wilmington, Massachusetts, have developed animal models with the same genetic mutation as the person to be treated with the drug, so that therapeutic efficacy and safety can be shown.
After extensive efficacy and safety testing, Cure Rare Disease secured approval of the Investigational New Drug (IND) application from the United States Food and Drug Administration (FDA) to dose Terry with the first-in-human CRISPR transcriptional activator in July 2022.
Finding success in developing a novel framework to treat the first patient, Cure Rare Disease expanded the development of additional therapeutics. Currently, there are 18 mutations and conditions in the Cure Rare Disease pipeline, including Duchenne muscular dystrophy, various subtypes of Limb-girdle muscular dystrophy, spinocerebellar ataxia type 3 (SCA3), and ADSSL1 distal myopathy. As of 2022, none of these conditions have a viable treatment available for the population impacted. To better plan for future therapeutic endeavors, Cure Rare Disease established a patient registry where patients and patient families can input their mutation information.
Partners
Charles River Lab
University of Massachusetts Medical School
Yale School of Medicine
Hospital for Sick Children (SickKids) Toronto, Canada
The Ohio State University, Columbus, Ohio
Leiden University Medical Center, Leiden, Netherlands
Virginia Commonwealth University, Richmond, Virginia
Andelyn Biosciences, Columbus, Ohio
The cross-functional collaboration includes researchers and clinicians from across the Northern Hemisphere and is focused on developing therapeutics for rare and ultra-rare diseases for which there are no effective treatments.
References
External links
Cure Rare Disease's Website
Cure Rare Disease on the TODAY Show
Non-profit corporations
Biotechnology
Rare disease organizations | Cure Rare Disease | Biology | 847 |
40,521,255 | https://en.wikipedia.org/wiki/Naz%C4%B1m%20Terzioglu | Nazım Terzioğlu (1912–September 20, 1976) was one of the first mathematicians in Turkish academia.
His son, Tosun Terzioğlu, was also a mathematician.
Early life
Nazım Terzioğlu completed his primary education in his place of birth, Kayseri. He started his secondary education in Istanbul and then continued in Izmir until his graduation from Izmir High School in 1930. At that time, some of Turkey's most qualified mathematics teachers worked at Izmir High School. Alumni of that school included mathematicians such as Cahit Arf (1910–1997) and Tevfik Oktay Kabakcıoğlu (1910–1971).
In those years, successful young people were sent abroad by the government to be trained as a qualified workforce in the various fields needed by the country. Terzioglu passed the relevant examination and left for Germany to study mathematics on behalf of the Ministry of Education of Turkey. He pursued his higher education at the University of Göttingen and at Munich University.
He completed his Ph.D. under the supervision of Prof. Dr. Constantin Carathéodory (1873–1950), a famous mathematician of the period, who came from a Greek family of Fener, Istanbul.
Career
Upon completion of his education in Germany, Terzioglu began working in 1937 as an assistant in Mathematical Mechanics and Advanced Geometry at the Institute of Mathematics of the Faculty of Science of Istanbul University. He became an associate professor in 1942 and the following year was appointed to a professorship at the newly established Institute of Mathematics of the Faculty of Science of Ankara University (1943). After spending two years in this faculty, he returned to Istanbul University as a professor (1944).
At Istanbul University, he served as Dean of the Faculty of Science in 1950–1952. During the same period, Terzioglu established some of the scientific institutions that Turkey had greatly needed until then: the Institute for Geophysics of Istanbul University, the Institute for Hydrobiology in Baltalimani, Istanbul, and the Cosmic Ray Institute, which Terzioglu founded at Uludag, Bursa, in cooperation with Prof. Dr. Adnan Sokullu and Prof. Dr. Sait Akpinar. After his deanship in the Faculty of Science, he became Chairman of the Analysis Division of the Institute of Mathematics in the same faculty (1953).
In 1965–1967, in addition to his responsibilities at Istanbul University, Terzioglu served first in an acting capacity and then as the principal founder-rector of Karadeniz (Black Sea) Technical University (KTU). He had the honour of establishing Turkey's first Faculty of Fundamental Sciences at KTU. In 1967, Terzioglu returned to his position in the Faculty of Science of Istanbul University. In 1969 and 1971, he was elected rector of Istanbul University, holding the position for two terms (28 October 1969 – 28 October 1971 and 28 October 1971 – 31 May 1974). In his first years as rector, he restored the building of a historical soup kitchen, part of the Sehzade Mosque complex, which had been assigned to the university by the Wakfs. On 6 August 1971, after setting up a new printing system in it, he put the building into service as the Research Institute for Mathematics of the Faculty of Science. Terzioglu also established within this institute a mathematics library of some 2,000 books, which he obtained through donations and purchases from abroad. After his death, on the proposal of the Faculty of Science, the institute was named the Nazim Terzioglu Mathematics Research Institute.
Through negotiations with the Silivri Municipality, Terzioglu secured the donation of 35 acres of land in Silivri to Istanbul University. On part of this land, 18 study rooms, 3 large conference halls, a library and a guest house to accommodate scientists from abroad were constructed at his direction. Terzioglu took graduate education very seriously, believing that talented young people ought to be trained in a special way. To provide such an environment, he invited foreign scientists and organized congresses, seminars, colloquia, and summer and progress courses at the Silivri facilities, which opened on September 3, 1973. Through these activities, he made significant contributions to the education of younger generations. The scientific meetings organized by Terzioglu at the Silivri facilities were:
February 10–14, 1973: First National Meeting of Mathematicians;
July 9–14, 1973: the preparatory course related to the Summer Seminar on International Display Theory of Finite Groups;
July 15–28, 1973: Summer Seminar on International Display Theory of Finite Groups;
August 20 – September 9, 1973: International Symposium on Functional Analysis;
September 8–21, 1975: the preparatory course related to the International Symposium on Algebraic Number Theory;
September 22–27, 1975: International Symposium on Algebraic Number Theory;
April 23–26, 1976: Second National Meeting of Mathematicians;
August 1976: Ultrasound Congress (joint with physicists);
September 5–11, 1976: International Congress of Functional Analysis;
September 20–25, 1976: Rolf Nevanlinna International Symposium.
Death
Terzioglu died of a heart attack on the morning of the opening day of the international symposium organized in tribute to Prof. Dr. Rolf Nevanlinna, who had been one of his teachers. Despite his unexpected loss, the symposium was completed after some rearrangements were made to the program. The guest mathematicians attended the funeral ceremony on September 22, and the symposium began on September 23. Terzioglu was named the honorary guest of the symposium, and Istanbul University awarded the title doctor honoris causa to Prof. Dr. Rolf Nevanlinna.
Legacy
One of Terzioglu's contributions, as director of the Mathematics Research Institute, to Turkey's mathematical culture and history of science was the systematic survey of the Islamic literature relevant to mathematics and the presentation of the information on conic sections in ancient mathematics to the scientific community. As a result of these efforts, facsimiles of two ancient mathematical texts originally written in Arabic were published. The first is the preface of Mecmuatu'r-risail, the Arabic translation by Beni Musa b. Sakir (died in 873) of Conica, the work of Apollonius of Perga (BC 262–190) on conic sections. This preface, published with the title Das Vorwort des Astronomen Bani Musa b. Sakir, describes how Apollonius' Conica was acquired by the Islamic world. After that, Terzioglu published the facsimile of the copy of the lost 8th book of Apollonius' Conica, which was reconstructed with the help of other sources by Ibnu'l-Heysem (965–1039). In the introduction of this book, titled Das Achte Buch zu den Conica des Apollonios von Perge, the following information is provided in summary:
In ancient mathematics, interest in conics begins with Menaechmus (4th century BC) and reaches its summit with Apollonius of Perga. Apollonius wrote his famous work Conica by assimilating earlier knowledge and adding his own discoveries. The first 7 of its 8 volumes are known, whereas the 8th volume is lost. Islamic and Western mathematicians alike took part in reconstructing the 8th volume. The most successful of these efforts is Edmund Halley's (1656–1742) Apollonii Pergaei conicorum (Oxoniae, 1710). The 8th book of Conica reconstructed by Ibn el-Heysem is the 4th manuscript, named Makalatu'l-Hasan b.el-Hasan b.el Heysem fi el-kitabu'l-mahrutat, in the Mecmu'atu'r-risail, which is recorded under no. 1796 in the Manisa Library. The fact that Ibn el-Heysem completed this work nearly 700 years before Halley is remarkable.
Within the framework of this program, Terzioglu was preparing for publication the first 7 books of Conica, which had been translated into Arabic in 415/1024 AD by Ibnu'l-Heysem, who had also examined the earlier translations of his time. Terzioglu's death came just as the facsimile of the manuscript, located at No. 2762 of the Ayasofya collection of the Suleymaniye Library, had been completed. As the part of the book he wanted to include on the history of conics remained incomplete, it was withdrawn from press and published later with the title Kitab al-Mahrutat. Das Buch der Kegelschnitte des Apollonios von Perge by the Research Institute for Mathematics. It includes a part in which the description of the manuscript and a direct translation of its preface are given in Turkish and German.
One of Terzioglu's most important services to the Turkish history of science, rendered during his presidency of the Turkish Mathematics Association, was to have the published first two volumes and the manuscript third volume of Asar-i Bakiye (Vol. I–II, Istanbul, 1329/1913) by Salih Zeki Bey (1863–1921) transcribed into Turkish in the Latin alphabet (see Istanbul University Library TY. 903, 904, 905 for copies of the manuscripts). His aim was to offer this old source for the benefit of younger generations.
Positions and awards
Terzioglu, who played an important role in the revival of the Union of Balkan Mathematicians (Union Balkanique des Mathematiciens), founded before World War II, was president of that organization for two terms (1966–1971). He was also selected as chairman of the IV Congress of Balkan Mathematicians held in Istanbul on August 29, 1972. Among his other international activities, the role he played in securing Turkey's membership in the International Mathematical Union is an unforgettable service.
In 1973, Terzioglu was selected as a member of Hahnemann Medical Society of America. In 1974, he has been awarded the Medal of Merit of Federal Republic of Germany by the German President on his endeavor for the development of Turkish–German relations. He also has two medals given by the Prague University and the Finland University of Jyväskylä.
On December 2, 1982, Nazim Terzioglu was posthumously given the TÜBITAK Service Award for his contributions to the development of mathematics in Turkey.
His family established a Mathematics Research Award in the name of Terzioglu, who worked throughout his life for the development of mathematics and the creation of a research potential. The award was first given to three young mathematicians in a ceremony at the Faculty of Science of Istanbul University on September 20, 1981, the fifth anniversary of his death. The second award, in 1982, was given to a young mathematician at the opening ceremony of the International Symposium on Mathematics held on 14–24 September 1982 at Karadeniz Technical University, where Terzioglu had served as founder-rector.
His son, Tosun Terzioglu (born 1942), is a Turkish mathematician and academic administrator.
Books
The books written by Terzioglu, who has many published articles in his own field, are:
Über Finslersche Raume (Doktorarbeit), München, 1936 (On Finsler Spaces (Ph.D. Thesis), Munich, 1936.)
Fonksiyonlar Teorisine Baslangic. Fonksiyonlar Teorisi. 2 Cilt. (Konrad Knopp'dan ceviri), Istanbul, 1938–1939. (Introduction to the Theory of Functions. Theory of Functions by Konrad Knopp, 2 Volumes (translated), Istanbul, 1938–1939.)
Finsler Uzayında Gauss–Bonnet Teoremi, Istanbul 1948. (Gauss–Bonnet Theorem in Finsler Spaces, Istanbul 1948.)
Lise Fen Kolu Icin Modern Geometri: Konikler, (Ahmet Nazmi Ilker ile), Istanbul, 1960. (Modern Geometry for the Science Sections of High Schools: Conics, (with Ahmet Nazmi Ilker), Istanbul, 1960.)
Liseler Icin Cebir Temrinleri (P. Aubert ve G. Papelier'den ceviri), Istanbul, 1960. (Exercises in Algebra for High Schools by P. Aubert and G. Papelier (translated), Istanbul, 1960.)
Diferansiyel ve Integral Hesap, (Edmund Landau'dan ceviri), Istanbul, 1961. (Differential and Integral Calculus by Edmund Landau (translated), Istanbul, 1961.)
Lise Fen Kolu Icin Modern Geometri. Fasikül I-Kesenler; Fasikül II-Harmonik Bolme, Harmonik Demet, Daireye Göre Kuvvet; Fasikül III-Daireye Göre Kutup ve Kutup Dogrusu (G. Papelier'den ceviri), Istanbul, 1968. (Modern Geometry for the Science Sections of High Schools. Fascicle I: Secants; Fascicle II: Harmonic Division, Harmonic Pencil, Power with respect to the sphere; Fascicle III: Pole and polar line with respect to the sphere by G. Papelier (translated), Istanbul, 1968.)
Analiz Problemleri, Istanbul, 1973. (Problems in Analysis, Istanbul, 1973.)
Das Vorwort des Astronomen Bani Musa b. Sakir zu den Conica des Apollonios von Perge, Istanbul, 1974. (The foreword of the Astronomer Bani Musa b. Sakir to the Conics of Apollonius of Perga, Istanbul, 1974.)
Das achte Buch zu den Conica des Apollonios von Perge re-konstruiert von Ibn al-Haysam, Istanbul, 1974. (The Eighth Book to the Conics of Apollonius of Perga Reconstructed by Ibn-Haysam, Istanbul, 1974.)
Kitab al-Mahrutat. Das Buch der Kegelschnitte des Apollonios von Perge, Istanbul, 1981. (Kitab al-Mahrutat. The Book of Conic Sections of Apollonius of Perga, Istanbul, 1981.)
References
Nazim Terzioglu, History of Science (Monthly Journal), February 1993, Number 16, 11–19.
International Symposium on Analysis and Theory of Functions, ATF2009 (Dedicated to Nazim Terzioglu) Abstract Book.
1912 births
1976 deaths
20th-century Turkish mathematicians
Academic staff of Istanbul University
University of Göttingen alumni
Mathematical analysts
Academic staff of Ankara University
Istanbul University people
Rectors of universities and colleges in Turkey | Nazım Terzioglu | Mathematics | 3,139 |
61,625,044 | https://en.wikipedia.org/wiki/Hair%20oil | Hair oil is an oil-based cosmetic product intended to improve the condition of hair. Various types of oils may be included in hair oil products. These often purport to aid with hair growth, dryness, or damage.
History
Ancient Egyptians paid special attention to hair, and images of hairdressers are depicted in ancient relics found by archaeologists. Archaic texts from this era contained "recipes" used by the Egyptians to treat baldness. During this time period, people used combs and ointments to groom and style their hair.
Hair oiling is also a traditional practice of ancient India, where oil was often used as a predecessor of modern shampoo.
Uses
Many cosmetic products including shampoo, heat protectants, hair drops, or hair masks contain oils.
Humans produce natural hair oil called sebum from glands around each follicle. Other mammals produce similar oils such as lanolin. Similar to natural oils, artificial hair oils can decrease scalp dryness by forming hydrophobic films that decrease transepidermal water loss, reducing evaporation of water from the skin. Oils on the hair can reduce the absorption of water that damages hair strands through repeated hygral stress as the hair swells when wet, then shrinks as it dries. Oils also protect cuticle cells in the hair follicle and prevent the penetration of substances like surfactants. Saturated and monounsaturated oils diffuse into hair better than polyunsaturated ones.
Oil types
Mineral and vegetable oils are used to make a variety of commercial and traditional hair oils. Coconut oil is a common ingredient. Other vegetable sources include almond, argan, babassu, burdock, castor, and tea seed.
Natural oils are used more commonly as cosmetic products on the scalp. Natural oils come from natural resources that are very high in nutrients such as vitamins and fatty acids.
Coconut oil
Coconut oil has properties that reduce protein loss in hair when used before and after washing. It contains lauric acid, a fatty acid that may penetrate the hair shaft due to its low molecular weight and linear conformation.
Argan oil
Argan oil originates from Morocco and is known for a conditioning effect that leaves hair soft and relieves frizz.
Avocado oil
Avocado oil is rich in nutrients. It has a high concentration of vitamin E, an antioxidant that may decrease hair loss and encourage hair growth.
Other oils
Oils including almond oil, grapeseed oil, jojoba oil, and olive oil may promote hair elasticity and help prevent dryness and hair damage.
See also
Beard oil
Pomade
Shaving oil
References
Human hair
Oils | Hair oil | Physics,Chemistry | 547 |
52,332,113 | https://en.wikipedia.org/wiki/Gr%C3%A9gory%20Chatonsky | Grégory Chatonsky (May 4, 1971) is a French and Canadian Artist who works with interactive installations, networked devices, photographs and sculptures. He explores the relationship between technologies and affectivity creating new forms of fiction.
Early life and education
After completing studies in Visual Arts and Philosophy at Panthéon-Sorbonne University, Grégory Chatonsky began his master's degree at the ENSBA and at ENST, finishing in 1999. In March 2016 he was awarded his PhD from Université du Québec à Montréal for his dissertation Aesthetics of flows (after digital).
Work
Grégory Chatonsky is one of the early practitioners of Internet art, having founded incident.net in 1994, an Internet art collective exploring the notions of the accidental, the glitch and unpredictability. During this early period of the internet, he created the website of the Centre Georges Pompidou. From 1994 to 1997, Chatonsky researched, wrote the screenplay for and produced the CD-ROM Mémoires de la Déportation, about deportations and the Holocaust in France. From the mid-nineties Chatonsky produced films, audio works and code-based work for the internet, including Counter (1995), a website that existed solely to count visitors, 2fresh (1997), a snippet of HTML script, and La Vitesse du Silence (1999), a netart performance piece.
Produced at the C³ Center for Culture & Communication Foundation (with Reynald Drouhin) and at Biennale d’art contemporain de Montréal, Revenances (1999) was an online performance work that invited the viewer to slow down their browsing and consider glitches and pauses as spaces where "ghosts" may interact.
In 2003, he became interested in the themes of ruins and the materiality of digital flows. In 2009, he ventured into the world of artificial intelligence, which he renamed "artificial imagination" and which became over the years an object of research and creation. He wrote the first French-language novel co-written with a modified version of GPT-2. Chatonsky divides his time between France and Canada. He has taught at Le Fresnoy-Studio national des arts contemporains, France, and at Université du Québec à Montréal's School of Visual and Media Arts; he is an artist-researcher at École Normale Supérieure in Paris and teaches artificial imagination in EUR Artec.
Bibliography
Boutet de Monvel, Violaine (2008) "La destruction comme point de départ à une sémiotique libérée". Paris Art.
Doyon, Frédérique (11 October 2007) "Le septième art de demain" Le Devoir, Montréal.
Fan, Ruan (28 April 2015) "French artist explores future archeology". China Daily. Beijing
Lechner, Marie (15 April 2014) "Entretien croisé entre l’artiste Grégory Chatonsky, concepteur de Capture, et le musicologue Peter Szendy". Libération. Paris
Miguirditchian, Julie (2010) "Artist in the flow". Digitalarti Mag
Mufson, Beckett (15 May 2015) "Here Are Imaginary Fossils from a Post-Human Earth". The Creators Project
Murphy, Jay (2009) "A fiction without narration", thing.net
Palmiéri, Christine (2013) "Fossilisation du futur", Archée
Popper, Frank (2006) From Technological to Virtual Art. MIT Press. Boston
Thome de Souza, Kevin (25 March 2014) "Grégory Chatonsky’s Capture: generative art pushed to its limits". Amusement. Paris
Denson, Shane (2020) "Discorrelated Images", Duke University Press
Cavanna, Aurelie (October 2021) "Grégory Chatonsky, a realism without reality", Art Press. Paris
Larsonneur, Claire (2021) "Challenging the Selfie: Perfect Skin by Chatonsky", Interfaces : Jeux de Formats
Somaini, Antonio (2022) "On the altered states of machine vision. Trevor Paglen, Hito Steyerl, Grégory Chatonsky", N-ICON. Studies in Environmental Images
Cavanna, Aurelie (2022) "Gregory Chatonsky : Capturing imaginations", Art Press. Paris
See also
Generative art
Artificial intelligence art
References
External links
Official website of Grégory Chatonsky
Incident.net
Postdigital research seminar about artificial imagination at ENS Paris
French artists
Canadian digital artists
Net.artists
Artists from Montreal
1971 births
Living people | Grégory Chatonsky | Technology | 943 |
1,999,630 | https://en.wikipedia.org/wiki/Benzoin%20%28resin%29 | Benzoin or benjamin (corrupted pronunciation) is a balsamic resin obtained from the bark of several species of trees in the genus Styrax. It is used in perfumes and some kinds of incense and as a flavoring and medicine (see tincture of benzoin). It is distinct from the chemical compound benzoin, which is ultimately derived chemically from benzoin resin; the primary active ingredient of benzoin resin is actually benzoic acid, not benzoin.
Benzoin is sometimes called gum benzoin or gum benjamin, and in India Sambrani or loban, though loban is, via Arabic lubān, a generic term for frankincense-type incense, e.g., fragrant tree resin. The syllable "benz" ultimately derives from the Arabic lubān jāwī (لبان جاوي, "frankincense from Java").
Benzoin is also called storax, not to be confused with the balsam of the same name obtained from the Hamamelidaceae family.
Benzoin is a common ingredient in incense-making and perfumery because of its sweet vanilla-like aroma and fixative properties. Gum benzoin is a major component of the type of church incense used in Russia and some other Eastern Orthodox Christian societies, as well as Latin Catholic churches. Benzoin is used in the Arabian Peninsula and Hindu temples of India, where it is burned on charcoal as an incense. It is also used in the production of Bakhoor (Arabic بخور - scented wood chips) as well as various mixed resin incense in the Arab countries and the Horn of Africa. Benzoin is also used in blended types of Japanese incense, Indian incense, Chinese incense (known as Anxi xiang; 安息香), and Papier d'Arménie as well as incense sticks.
There are two common kinds of benzoin, benzoin Siam and benzoin Sumatra. Benzoin Siam is obtained from Styrax tonkinensis, found across Thailand, Laos, Cambodia, and Vietnam. Benzoin Sumatra is obtained from Styrax paralleloneurus (syn. Styrax sumatranus) and Styrax benzoin, which grows predominantly on the island of Sumatra. Unlike Siamese benzoin, Sumatran benzoin contains cinnamic acid in addition to benzoic acid. In the United States, Sumatra benzoin is more customarily used in pharmaceutical preparations, Siam benzoin in the flavor and fragrance industries.
In perfumery, benzoin is used as a fixative, slowing the dispersion of essential oils and other fragrance materials into the air. Benzoin is used in cosmetics, veterinary medicine, and scented candles. It is used as a flavoring in alcoholic and nonalcoholic beverages, baked goods, chewing gum, frozen dairy, gelatins, puddings, and soft candy.
In anesthesia and surgery, it is used as an adhesive to secure wound and catheter dressing and is available as a sterile preparation.
References
Resins
Perfume ingredients
Incense material
Essential oils
Non-timber forest products | Benzoin (resin) | Physics,Chemistry | 632 |
14,895 | https://en.wikipedia.org/wiki/Insulin | Insulin (, from Latin insula, 'island') is a peptide hormone produced by beta cells of the pancreatic islets encoded in humans by the insulin (INS) gene. It is the main anabolic hormone of the body. It regulates the metabolism of carbohydrates, fats, and protein by promoting the absorption of glucose from the blood into cells of the liver, fat, and skeletal muscles. In these tissues the absorbed glucose is converted into either glycogen, via glycogenesis, or fats (triglycerides), via lipogenesis; in the liver, glucose is converted into both. Glucose production and secretion by the liver are strongly inhibited by high concentrations of insulin in the blood. Circulating insulin also affects the synthesis of proteins in a wide variety of tissues. It is thus an anabolic hormone, promoting the conversion of small molecules in the blood into large molecules in the cells. Low insulin in the blood has the opposite effect, promoting widespread catabolism, especially of reserve body fat.
Beta cells are sensitive to blood sugar levels: they secrete insulin into the blood in response to high levels of glucose and inhibit its secretion when glucose levels are low. Insulin production is likewise regulated by glucose: high glucose promotes insulin production while low glucose levels lead to lower production. Insulin enhances glucose uptake and metabolism in the cells, thereby reducing blood sugar. Their neighboring alpha cells, taking their cues from the beta cells, secrete glucagon into the blood in the opposite manner: increased secretion when blood glucose is low, and decreased secretion when glucose concentrations are high. Glucagon increases blood glucose by stimulating glycogenolysis and gluconeogenesis in the liver. The secretion of insulin and glucagon into the blood in response to the blood glucose concentration is the primary mechanism of glucose homeostasis.
Decreased or absent insulin activity results in diabetes, a condition of high blood sugar level (hyperglycaemia). There are two types of the disease. In type 1 diabetes, the beta cells are destroyed by an autoimmune reaction so that insulin can no longer be synthesized or be secreted into the blood. In type 2 diabetes, the destruction of beta cells is less pronounced than in type 1, and is not due to an autoimmune process. Instead, there is an accumulation of amyloid in the pancreatic islets, which likely disrupts their anatomy and physiology. The pathogenesis of type 2 diabetes is not well understood but reduced population of islet beta-cells, reduced secretory function of islet beta-cells that survive, and peripheral tissue insulin resistance are known to be involved. Type 2 diabetes is characterized by increased glucagon secretion which is unaffected by, and unresponsive to the concentration of blood glucose. But insulin is still secreted into the blood in response to the blood glucose. As a result, glucose accumulates in the blood.
The human insulin protein is composed of 51 amino acids, and has a molecular mass of 5808 Da. It is a heterodimer of an A-chain and a B-chain, which are linked together by disulfide bonds. Insulin's structure varies slightly between species of animals. Insulin from non-human animal sources differs somewhat in effectiveness (in carbohydrate metabolism effects) from human insulin because of these variations. Porcine insulin is especially close to the human version, and was widely used to treat type 1 diabetics before human insulin could be produced in large quantities by recombinant DNA technologies.
Insulin was the first peptide hormone discovered. Frederick Banting and Charles Best, working in the laboratory of John Macleod at the University of Toronto, were the first to isolate insulin from dog pancreas in 1921. Frederick Sanger sequenced the amino acid structure in 1951, which made insulin the first protein to be fully sequenced. The crystal structure of insulin in the solid state was determined by Dorothy Hodgkin in 1969. Insulin is also the first protein to be chemically synthesised and produced by DNA recombinant technology. It is on the WHO Model List of Essential Medicines, the most important medications needed in a basic health system.
Evolution and species distribution
Insulin may have originated more than a billion years ago. The molecular origins of insulin go at least as far back as the simplest unicellular eukaryotes. Apart from animals, insulin-like proteins are also known to exist in fungi and protists.
Insulin is produced by beta cells of the pancreatic islets in most vertebrates and by the Brockmann body in some teleost fish. The cone snails Conus geographus and Conus tulipa, venomous sea snails that hunt small fish, use modified forms of insulin in their venom cocktails. The insulin toxin, closer in structure to fishes' than to snails' native insulin, slows down the prey fish by lowering their blood glucose levels.
Production
Insulin is produced exclusively in the beta cells of the pancreatic islets in mammals, and the Brockmann body in some fish. Human insulin is produced from the INS gene, located on chromosome 11. Rodents have two functional insulin genes; one is the homolog of most mammalian genes (Ins2), and the other is a retroposed copy that includes promoter sequence but that is missing an intron (Ins1). Transcription of the insulin gene increases in response to elevated blood glucose. This is primarily controlled by transcription factors that bind enhancer sequences in the ~400 base pairs before the gene's transcription start site.
The major transcription factors influencing insulin secretion are PDX1, NeuroD1, and MafA.
During a low-glucose state, PDX1 (pancreatic and duodenal homeobox protein 1) is located in the nuclear periphery as a result of interaction with HDAC1 and 2, which results in downregulation of insulin secretion. An increase in blood glucose levels causes phosphorylation of PDX1, which leads it to undergo nuclear translocation and bind the A3 element within the insulin promoter. Upon translocation it interacts with coactivators HAT p300 and SETD7. PDX1 affects the histone modifications through acetylation and deacetylation as well as methylation. It is also said to suppress glucagon.
NeuroD1, also known as β2, regulates insulin exocytosis in pancreatic β cells by directly inducing the expression of genes involved in exocytosis. It is localized in the cytosol, but in response to high glucose it becomes glycosylated by OGT and/or phosphorylated by ERK, which causes translocation to the nucleus. In the nucleus β2 heterodimerizes with E47, binds to the E1 element of the insulin promoter and recruits co-activator p300 which acetylates β2. It is able to interact with other transcription factors as well in activation of the insulin gene.
MafA is degraded by proteasomes upon low blood glucose levels. Increased levels of glucose cause an unknown protein to become glycosylated. This protein works as a transcription factor for MafA in an unknown manner, and MafA is transported out of the cell. MafA is then translocated back into the nucleus, where it binds the C1 element of the insulin promoter.
These transcription factors work synergistically and in a complex arrangement. Increased blood glucose can after a while destroy the binding capacities of these proteins, and therefore reduce the amount of insulin secreted, causing diabetes. The decreased binding activities can be mediated by glucose-induced oxidative stress, and antioxidants are said to prevent the decreased insulin secretion in glucotoxic pancreatic β cells. Stress signalling molecules and reactive oxygen species inhibit the insulin gene by interfering with the cofactors binding the transcription factors and with the transcription factors themselves.
Several regulatory sequences in the promoter region of the human insulin gene bind to transcription factors. In general, the A-boxes bind to Pdx1 factors, E-boxes bind to NeuroD, C-boxes bind to MafA, and cAMP response elements to CREB. There are also silencers that inhibit transcription.
Synthesis
Insulin is synthesized as an inactive precursor molecule, a 110 amino acid-long protein called "preproinsulin". Preproinsulin is translated directly into the rough endoplasmic reticulum (RER), where its signal peptide is removed by signal peptidase to form "proinsulin". As the proinsulin folds, opposite ends of the protein, called the "A-chain" and the "B-chain", are fused together with three disulfide bonds. Folded proinsulin then transits through the Golgi apparatus and is packaged into specialized secretory vesicles. In the granule, proinsulin is cleaved by proprotein convertase 1/3 and proprotein convertase 2, removing the middle part of the protein, called the "C-peptide". Finally, carboxypeptidase E removes two pairs of amino acids from the protein's ends, resulting in active insulin – the insulin A- and B- chains, now connected with two disulfide bonds.
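The residue arithmetic of this processing can be checked. Here is a short tally, assuming the standard human values of a 24-residue signal peptide and a 31-residue C-peptide (figures not stated in the text above), with the two dibasic pairs removed by carboxypeptidase E:

$$\underbrace{24}_{\text{signal}} + \underbrace{30}_{\text{B chain}} + \underbrace{2}_{\text{pair}} + \underbrace{31}_{\text{C-peptide}} + \underbrace{2}_{\text{pair}} + \underbrace{21}_{\text{A chain}} = 110\ \text{residues}.$$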
The resulting mature insulin is packaged inside mature granules waiting for metabolic signals (such as leucine, arginine, glucose and mannose) and vagal nerve stimulation to be exocytosed from the cell into the circulation.
Insulin and its related proteins have been shown to be produced inside the brain, and reduced levels of these proteins are linked to Alzheimer's disease.
Insulin release is stimulated also by beta-2 receptor stimulation and inhibited by alpha-1 receptor stimulation. In addition, cortisol, glucagon and growth hormone antagonize the actions of insulin during times of stress. Insulin also inhibits fatty acid release by hormone-sensitive lipase in adipose tissue.
Structure
Contrary to the initial belief that hormones would generally be small chemical molecules, insulin, the first peptide hormone whose structure was known, turned out to be quite large. A single protein (monomer) of human insulin is composed of 51 amino acids and has a molecular mass of 5808 Da. The molecular formula of human insulin is C257H383N65O77S6. It is a combination of two peptide chains (dimer) named an A-chain and a B-chain, which are linked together by two disulfide bonds. The A-chain is composed of 21 amino acids, while the B-chain consists of 30 residues. The linking (interchain) disulfide bonds are formed at cysteine residues between the positions A7-B7 and A20-B19. There is an additional (intrachain) disulfide bond within the A-chain between cysteine residues at positions A6 and A11. The A-chain exhibits two α-helical regions at A1-A8 and A12-A19 which are antiparallel, while the B-chain has a central α-helix (covering residues B9-B19) flanked by the disulfide bonds on either side and two β-sheets (covering B7-B10 and B20-B23).
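As a consistency check (a back-of-the-envelope calculation using standard atomic weights, not taken from the source), the molecular formula reproduces the quoted mass:

$$257(12.011) + 383(1.008) + 65(14.007) + 77(15.999) + 6(32.06) \approx 5807.6 \approx 5808\ \mathrm{Da}.$$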
The amino acid sequence of insulin is strongly conserved and varies only slightly between species. Bovine insulin differs from human in only three amino acid residues, and porcine insulin in one. Even insulin from some species of fish is similar enough to human to be clinically effective in humans. Insulin in some invertebrates is quite similar in sequence to human insulin, and has similar physiological effects. The strong homology seen in the insulin sequence of diverse species suggests that it has been conserved across much of animal evolutionary history. The C-peptide of proinsulin, however, differs much more among species; it is also a hormone, but a secondary one.
Insulin is produced and stored in the body as a hexamer (a unit of six insulin molecules), while the active form is the monomer. The hexamer is about 36000 Da in size. The six molecules are linked together as three dimeric units to form a symmetrical molecule. An important feature is the presence of zinc atoms (Zn2+) on the axis of symmetry, which are surrounded by three water molecules and three histidine residues at position B10.
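The quoted hexamer size is roughly six monomer masses (a rough estimate; the contribution of the axial zinc ions and coordinated water is not itemized here):

$$6 \times 5808\ \mathrm{Da} = 34848\ \mathrm{Da},$$

in line with the stated figure of about 36000 Da.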
The hexamer is an inactive form with long-term stability, which serves as a way to keep the highly reactive insulin protected, yet readily available. The hexamer-monomer conversion is one of the central aspects of insulin formulations for injection. The hexamer is far more stable than the monomer, which is desirable for practical reasons; however, the monomer is a much faster-reacting drug because diffusion rate is inversely related to particle size. A fast-reacting drug means insulin injections do not have to precede mealtimes by hours, which in turn gives people with diabetes more flexibility in their daily schedules. Insulin can aggregate and form fibrillar interdigitated beta-sheets. This can cause injection amyloidosis, and prevents the storage of insulin for long periods.
Function
Secretion
Beta cells in the islets of Langerhans release insulin in two phases. The first-phase release is rapidly triggered in response to increased blood glucose levels, and lasts about 10 minutes. The second phase is a sustained, slow release of newly formed vesicles triggered independently of sugar, peaking in 2 to 3 hours. The two phases of insulin release suggest that insulin granules are present in distinct populations or "pools". During the first phase of insulin exocytosis, most of the granules predisposed for exocytosis are released after the calcium internalization. This pool is known as the Readily Releasable Pool (RRP). The RRP granules represent 0.3–0.7% of the total insulin-containing granule population, and they are found immediately adjacent to the plasma membrane. During the second phase of exocytosis, insulin granules require mobilization to the plasma membrane and a prior preparation to undergo their release. Thus, the second phase of insulin release is governed by the rate at which granules get ready for release. This pool is known as the Reserve Pool (RP). The RP is released more slowly than the RRP (RRP: 18 granules/min; RP: 6 granules/min). Reduced first-phase insulin release may be the earliest detectable beta cell defect predicting onset of type 2 diabetes. First-phase release and insulin sensitivity are independent predictors of diabetes.
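To put the pool sizes in perspective, assume on the order of 10,000 granules per beta cell (an order-of-magnitude figure not given in this text). Then the RRP holds

$$0.003\text{–}0.007 \times 10000 \approx 30\text{–}70\ \text{granules},$$

so at 18 granules/min the RRP alone would empty in roughly 2–4 minutes; with partial refilling this is consistent with a first phase lasting about 10 minutes.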
The description of first phase release is as follows (a toy numerical sketch follows the list):
Glucose enters the β-cells through the glucose transporters, GLUT 2. At low blood sugar levels little glucose enters the β-cells; at high blood glucose concentrations large quantities of glucose enter these cells.
The glucose that enters the β-cell is phosphorylated to glucose-6-phosphate (G-6-P) by glucokinase (hexokinase IV) which is not inhibited by G-6-P in the way that the hexokinases in other tissues (hexokinase I – III) are affected by this product. This means that the intracellular G-6-P concentration remains proportional to the blood sugar concentration.
Glucose-6-phosphate enters the glycolytic pathway and then, via the pyruvate dehydrogenase reaction, the Krebs cycle, where multiple, high-energy ATP molecules are produced by the oxidation of acetyl CoA (the Krebs cycle substrate), leading to a rise in the ATP:ADP ratio within the cell.
An increased intracellular ATP:ADP ratio closes the ATP-sensitive SUR1/Kir6.2 potassium channel (see sulfonylurea receptor). This prevents potassium ions (K+) from leaving the cell by facilitated diffusion, leading to a buildup of intracellular potassium ions. As a result, the inside of the cell becomes less negative with respect to the outside, leading to the depolarization of the cell surface membrane.
Upon depolarization, voltage-gated calcium ion (Ca2+) channels open, allowing calcium ions to move into the cell by facilitated diffusion.
The cytosolic calcium ion concentration can also be increased by calcium release from intracellular stores via activation of ryanodine receptors.
The calcium ion concentration in the cytosol of the beta cells can also, or additionally, be increased through the activation of phospholipase C resulting from the binding of an extracellular ligand (hormone or neurotransmitter) to a G protein-coupled membrane receptor. Phospholipase C cleaves the membrane phospholipid, phosphatidyl inositol 4,5-bisphosphate, into inositol 1,4,5-trisphosphate and diacylglycerol. Inositol 1,4,5-trisphosphate (IP3) then binds to receptor proteins in the plasma membrane of the endoplasmic reticulum (ER). This allows the release of Ca2+ ions from the ER via IP3-gated channels, which raises the cytosolic concentration of calcium ions independently of the effects of a high blood glucose concentration. Parasympathetic stimulation of the pancreatic islets operates via this pathway to increase insulin secretion into the blood.
The significantly increased amount of calcium ions in the cells' cytoplasm causes the release into the blood of previously synthesized insulin, which has been stored in intracellular secretory vesicles.
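The chain of events above can be condensed into a toy threshold cascade. The sketch below is purely illustrative: the linear ATP:ADP relation, the thresholds and the membrane potentials are assumed round numbers chosen to make the cascade visible, not measured physiological values.

```python
# Toy model of glucose-stimulated first-phase insulin release.
# All rate constants and thresholds are illustrative assumptions.

def beta_cell_response(glucose_mM: float) -> dict:
    """Trace the stimulus-secretion coupling steps for one glucose level."""
    # Glucose uptake via GLUT2 and glycolysis raise the ATP:ADP ratio,
    # assumed here to rise linearly with extracellular glucose.
    atp_adp_ratio = 1.0 + 0.8 * glucose_mM
    # K_ATP (SUR1/Kir6.2) channels close above an assumed ratio threshold.
    katp_closed = atp_adp_ratio > 5.0
    # Trapped K+ depolarizes the membrane (assumed two-state potential).
    membrane_mV = -50.0 if katp_closed else -70.0
    # Voltage-gated Ca2+ channels open above an assumed -55 mV threshold.
    ca_channels_open = membrane_mV > -55.0
    # Rising cytosolic Ca2+ triggers exocytosis of stored granules.
    return {
        "ATP:ADP": round(atp_adp_ratio, 1),
        "K_ATP closed": katp_closed,
        "membrane (mV)": membrane_mV,
        "insulin released": ca_channels_open,
    }

if __name__ == "__main__":
    for glucose in (3.0, 5.0, 8.0, 12.0):  # mM, fasting to postprandial
        print(f"glucose {glucose:4.1f} mM ->", beta_cell_response(glucose))
```

Run as written, only the two highest glucose levels trigger release, mimicking the threshold behaviour of the real cascade.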
This is the primary mechanism for release of insulin. Other substances known to stimulate insulin release include the amino acids arginine and leucine, parasympathetic release of acetylcholine (acting via the phospholipase C pathway), sulfonylurea, cholecystokinin (CCK, also via phospholipase C), and the gastrointestinally derived incretins, such as glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic peptide (GIP).
Release of insulin is strongly inhibited by norepinephrine (noradrenaline), which leads to increased blood glucose levels during stress. It appears that release of catecholamines by the sympathetic nervous system has conflicting influences on insulin release by beta cells, because insulin release is inhibited by α2-adrenergic receptors and stimulated by β2-adrenergic receptors. The net effect of norepinephrine from sympathetic nerves and epinephrine from adrenal glands on insulin release is inhibition due to dominance of the α-adrenergic receptors.
When the glucose level comes down to the usual physiologic value, insulin release from the β-cells slows or stops. If the blood glucose level drops lower than this, especially to dangerously low levels, release of hyperglycemic hormones (most prominently glucagon from islet of Langerhans alpha cells) forces release of glucose into the blood from the liver glycogen stores, supplemented by gluconeogenesis if the glycogen stores become depleted. By increasing blood glucose, the hyperglycemic hormones prevent or correct life-threatening hypoglycemia.
Evidence of impaired first-phase insulin release can be seen in the glucose tolerance test, demonstrated by a substantially elevated blood glucose level at 30 minutes after the ingestion of a glucose load (75 or 100 g of glucose), followed by a slow drop over the next 100 minutes, remaining above 120 mg/100 mL two hours after the start of the test. In a normal person the blood glucose level is corrected (and may even be slightly over-corrected) by the end of the test. An insulin spike is a 'first response' to blood glucose increase; this response is individual and dose-specific, although it was previously always assumed to be specific to food type only.
Oscillations
Even during digestion, in general one or two hours following a meal, insulin release from the pancreas is not continuous, but oscillates with a period of 3–6 minutes, changing from generating a blood insulin concentration of more than about 800 pmol/L to less than 100 pmol/L (in rats). This is thought to avoid downregulation of insulin receptors in target cells, and to assist the liver in extracting insulin from the blood. This oscillation is important to consider when administering insulin-stimulating medication, since it is the oscillating blood concentration of insulin release which should, ideally, be achieved, not a constant high concentration. This may be achieved by delivering insulin rhythmically to the portal vein, by light-activated delivery, or by islet cell transplantation to the liver.
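A minimal way to picture this pulsatility is a sinusoid whose mean, amplitude and period are fitted to the rat figures quoted above (the sinusoidal form itself is an illustrative assumption):

$$C(t) \approx 450 + 350\,\sin\!\left(\frac{2\pi t}{T}\right)\ \mathrm{pmol/L},\qquad T \approx 3\text{–}6\ \mathrm{min},$$

which swings between roughly 100 and 800 pmol/L each cycle.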
Blood insulin level
The blood insulin level can be measured in international units, such as μIU/mL or in molar concentration, such as pmol/L, where 1 μIU/mL equals 6.945 pmol/L. A typical blood level between meals is 8–11 μIU/mL (57–79 pmol/L).
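For example, converting a mid-range between-meal value with the stated factor:

$$10\ \mu\mathrm{IU/mL} \times 6.945\ \frac{\mathrm{pmol/L}}{\mu\mathrm{IU/mL}} \approx 69\ \mathrm{pmol/L}.$$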
Signal transduction
The effects of insulin are initiated by its binding to a receptor, the insulin receptor (IR), present in the cell membrane. The receptor molecule contains α- and β-subunits. Two molecules are joined to form what is known as a homodimer. Insulin binds to the α-subunits of the homodimer, which face the extracellular side of the cells. The β-subunits have tyrosine kinase enzyme activity which is triggered by insulin binding. This activity provokes the autophosphorylation of the β-subunits and subsequently the phosphorylation of proteins inside the cell known as insulin receptor substrates (IRS). The phosphorylation of the IRS activates a signal transduction cascade that leads to the activation of other kinases as well as transcription factors that mediate the intracellular effects of insulin.
The cascade that leads to the insertion of GLUT4 glucose transporters into the cell membranes of muscle and fat cells, and to the synthesis of glycogen in liver and muscle tissue, as well as the conversion of glucose into triglycerides in liver, adipose, and lactating mammary gland tissue, operates via the activation, by IRS-1, of phosphoinositol 3 kinase (PI3K). This enzyme converts a phospholipid in the cell membrane by the name of phosphatidylinositol 4,5-bisphosphate (PIP2), into phosphatidylinositol 3,4,5-triphosphate (PIP3), which, in turn, activates protein kinase B (PKB). Activated PKB facilitates the fusion of GLUT4 containing endosomes with the cell membrane, resulting in an increase in GLUT4 transporters in the plasma membrane. PKB also phosphorylates glycogen synthase kinase (GSK), thereby inactivating this enzyme. This means that its substrate, glycogen synthase (GS), cannot be phosphorylated, and remains dephosphorylated, and therefore active. The active enzyme, glycogen synthase (GS), catalyzes the rate limiting step in the synthesis of glycogen from glucose. Similar dephosphorylations affect the enzymes controlling the rate of glycolysis leading to the synthesis of fats via malonyl-CoA in the tissues that can generate triglycerides, and also the enzymes that control the rate of gluconeogenesis in the liver. The overall effect of these final enzyme dephosphorylations is that, in the tissues that can carry out these reactions, glycogen and fat synthesis from glucose are stimulated, and glucose production by the liver through glycogenolysis and gluconeogenesis are inhibited. The breakdown of triglycerides by adipose tissue into free fatty acids and glycerol is also inhibited.
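The ordering of this cascade can be made explicit in a short sketch. It encodes only the sequence of events named above, with no kinetics or stoichiometry; the step names follow the text and the print-out is purely a reading aid.

```python
# Sequence sketch of the insulin signal-transduction cascade described above.

CASCADE = [
    ("insulin binds IR alpha-subunits", "receptor tyrosine kinase activated"),
    ("beta-subunit autophosphorylation", "IRS proteins phosphorylated"),
    ("IRS-1 activates PI3K", "PIP2 converted to PIP3"),
    ("PIP3 activates PKB", "branch point for downstream effects"),
]

# Two of the PKB branches named in the text.
BRANCHES = {
    "GLUT4 endosome fusion": "more glucose transporters in the membrane",
    "GSK phosphorylated (inactivated)": "glycogen synthase stays active",
}

def trace() -> None:
    for step, effect in CASCADE:
        print(f"{step:35s} -> {effect}")
    print("PKB branches:")
    for branch, outcome in BRANCHES.items():
        print(f"  {branch:33s} -> {outcome}")

if __name__ == "__main__":
    trace()
```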
After the intracellular signal that resulted from the binding of insulin to its receptor has been produced, termination of signaling is then needed. As mentioned below in the section on degradation, endocytosis and degradation of the receptor bound to insulin is a main mechanism to end signaling. In addition, the signaling pathway is also terminated by dephosphorylation of the tyrosine residues in the various signaling pathways by tyrosine phosphatases. Serine/Threonine kinases are also known to reduce the activity of insulin.
The structure of the insulin–insulin receptor complex has been determined using the techniques of X-ray crystallography.
Physiological effects
The actions of insulin on the global human metabolism level include:
Increase of cellular intake of certain substances, most prominently glucose in muscle and adipose tissue (about two-thirds of body cells)
Increase of DNA replication and protein synthesis via control of amino acid uptake
Modification of the activity of numerous enzymes.
The actions of insulin (indirect and direct) on cells include:
Stimulates the uptake of glucose – Insulin decreases blood glucose concentration by inducing intake of glucose by the cells. This is possible because Insulin causes the insertion of the GLUT4 transporter in the cell membranes of muscle and fat tissues which allows glucose to enter the cell.
Increased fat synthesis – insulin forces fat cells to take in blood glucose, which is converted into triglycerides; decrease of insulin causes the reverse.
Increased esterification of fatty acids – forces adipose tissue to make neutral fats (i.e., triglycerides) from fatty acids; decrease of insulin causes the reverse.
Decreased lipolysis in adipocytes – forces reduction in conversion of fat cell lipid stores into blood fatty acids and glycerol; decrease of insulin causes the reverse.
Induced glycogen synthesis – When glucose levels are high, insulin induces the formation of glycogen by the activation of the hexokinase enzyme, which adds a phosphate group to glucose, thus resulting in a molecule that cannot exit the cell. At the same time, insulin inhibits the enzyme glucose-6-phosphatase, which removes the phosphate group. These two enzymes are key for the formation of glycogen. Also, insulin activates the enzymes phosphofructokinase and glycogen synthase which are responsible for glycogen synthesis.
Decreased gluconeogenesis and glycogenolysis – decreases production of glucose from noncarbohydrate substrates, primarily in the liver (the vast majority of endogenous insulin arriving at the liver never leaves the liver); decrease of insulin causes glucose production by the liver from assorted substrates.
Decreased proteolysis – decreasing the breakdown of protein
Decreased autophagy – decreased level of degradation of damaged organelles. Postprandial levels inhibit autophagy completely.
Increased amino acid uptake – forces cells to absorb circulating amino acids; decrease of insulin inhibits absorption.
Arterial muscle tone – forces arterial wall muscle to relax, increasing blood flow, especially in microarteries; decrease of insulin reduces flow by allowing these muscles to contract.
Increase in the secretion of hydrochloric acid by parietal cells in the stomach.
Increased potassium uptake – forces cells synthesizing glycogen (a very spongy, "wet" substance, that increases the content of intracellular water, and its accompanying K+ ions) to absorb potassium from the extracellular fluids; lack of insulin inhibits absorption. Insulin's increase in cellular potassium uptake lowers potassium levels in blood plasma. This possibly occurs via insulin-induced translocation of the Na+/K+-ATPase to the surface of skeletal muscle cells.
Decreased renal sodium excretion.
In hepatocytes, insulin binding acutely leads to activation of protein phosphatase 2A (PP2A), which dephosphorylates the bifunctional enzyme fructose bisphosphatase-2 (PFKFB1), activating the phosphofructokinase-2 (PFK-2) active site. PFK-2 increases production of fructose 2,6-bisphosphate. Fructose 2,6-bisphosphate allosterically activates PFK-1, which favors glycolysis over gluconeogenesis. Increased glycolysis increases the formation of malonyl-CoA, a molecule that can be shunted into lipogenesis and that allosterically inhibits carnitine palmitoyltransferase I (CPT1), a mitochondrial enzyme necessary for the translocation of fatty acids into the intermembrane space of the mitochondria for fatty acid metabolism.
Insulin also influences other body functions, such as vascular compliance and cognition. Once insulin enters the human brain, it enhances learning and memory and benefits verbal memory in particular. Enhancing brain insulin signaling by means of intranasal insulin administration also enhances the acute thermoregulatory and glucoregulatory response to food intake, suggesting that central nervous insulin contributes to the co-ordination of a wide variety of homeostatic or regulatory processes in the human body. Insulin also has stimulatory effects on gonadotropin-releasing hormone from the hypothalamus, thus favoring fertility.
Degradation
Once an insulin molecule has docked onto the receptor and effected its action, it may be released back into the extracellular environment, or it may be degraded by the cell. The two primary sites for insulin clearance are the liver and the kidney. It is broken down by the enzyme protein-disulfide reductase (glutathione), which breaks the disulfide bonds between the A and B chains. The liver clears most insulin during first-pass transit, whereas the kidney clears most of the insulin in systemic circulation. Degradation normally involves endocytosis of the insulin-receptor complex, followed by the action of insulin-degrading enzyme. An insulin molecule produced endogenously by the beta cells is estimated to be degraded within about one hour after its initial release into circulation (insulin half-life ~4–6 minutes).
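The two time figures quoted are mutually consistent under simple exponential decay; taking a representative half-life of 5 minutes (an assumed midpoint of the stated 4–6 minute range):

$$\frac{N(t)}{N_0} = \left(\tfrac{1}{2}\right)^{t/t_{1/2}},\qquad \left(\tfrac{1}{2}\right)^{60/5} = 2^{-12} \approx 0.02\%,$$

so essentially all of a released pulse is degraded well within an hour.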
Regulator of endocannabinoid metabolism
Insulin is a major regulator of endocannabinoid (EC) metabolism, and insulin treatment has been shown to reduce intracellular ECs, the 2-arachidonoylglycerol (2-AG) and anandamide (AEA), which corresponds with insulin-sensitive expression changes in enzymes of EC metabolism. In insulin-resistant adipocytes, patterns of insulin-induced enzyme expression are disturbed in a manner consistent with elevated EC synthesis and reduced EC degradation. Findings suggest that insulin-resistant adipocytes fail to regulate EC metabolism and decrease intracellular EC levels in response to insulin stimulation, whereby obese insulin-resistant individuals exhibit increased concentrations of ECs. This dysregulation contributes to excessive visceral fat accumulation and reduced adiponectin release from abdominal adipose tissue, and further to the onset of several cardiometabolic risk factors that are associated with obesity and type 2 diabetes.
Hypoglycemia
Hypoglycemia, also known as "low blood sugar", is when blood sugar decreases to below normal levels. This may result in a variety of symptoms including clumsiness, trouble talking, confusion, loss of consciousness, seizures or death. A feeling of hunger, sweating, shakiness and weakness may also be present. Symptoms typically come on quickly.
The most common cause of hypoglycemia is medications used to treat diabetes such as insulin and sulfonylureas. Risk is greater in diabetics who have eaten less than usual, exercised more than usual or have consumed alcohol. Other causes of hypoglycemia include kidney failure, certain tumors, such as insulinoma, liver disease, hypothyroidism, starvation, inborn error of metabolism, severe infections, reactive hypoglycemia and a number of drugs including alcohol. Low blood sugar may occur in otherwise healthy babies who have not eaten for a few hours.
Diseases and syndromes
There are several conditions in which insulin disturbance is pathologic:
Diabetes – general term referring to all states characterized by hyperglycemia. It can be of the following types:
Type 1 diabetes – autoimmune-mediated destruction of insulin-producing β-cells in the pancreas, resulting in absolute insulin deficiency
Type 2 diabetes – either inadequate insulin production by the β-cells or insulin resistance or both because of reasons not completely understood.
there is a correlation with diet, sedentary lifestyle, obesity, age and metabolic syndrome. Causality has been demonstrated in multiple model organisms including mice and monkeys; importantly, non-obese people also get type 2 diabetes due to diet, sedentary lifestyle and unknown risk factors, though this may not be a causal relationship.
it is likely that there is genetic susceptibility to develop Type 2 diabetes under certain environmental conditions
Other types of impaired glucose tolerance (see Diabetes)
Insulinoma – a tumor of beta cells producing excess insulin or reactive hypoglycemia.
Metabolic syndrome – a poorly understood condition first called syndrome X by Gerald Reaven. It is not clear whether the syndrome has a single, treatable cause, or is the result of body changes leading to type 2 diabetes. It is characterized by elevated blood pressure, dyslipidemia (disturbances in blood cholesterol forms and other blood lipids), and increased waist circumference (at least in populations in much of the developed world). The basic underlying cause may be the insulin resistance that precedes type 2 diabetes, which is a diminished capacity for insulin response in some tissues (e.g., muscle, fat). It is common for morbidities such as essential hypertension, obesity, type 2 diabetes, and cardiovascular disease (CVD) to develop.
Polycystic ovary syndrome – a complex syndrome in women in the reproductive years where anovulation and androgen excess are commonly displayed as hirsutism. In many cases of PCOS, insulin resistance is present.
Medical uses
Biosynthetic human insulin (insulin human rDNA, INN) for clinical use is manufactured by recombinant DNA technology. Biosynthetic human insulin has increased purity when compared with extractive animal insulin, and this enhanced purity reduces antibody formation. Researchers have succeeded in introducing the gene for human insulin into plants as another method of producing insulin ("biopharming") in safflower. This technique is anticipated to reduce production costs.
Several analogs of human insulin are available. These insulin analogs are closely related to the human insulin structure, and were developed for specific aspects of glycemic control in terms of fast action (prandial insulins) and long action (basal insulins). The first biosynthetic insulin analog developed for clinical use at mealtime (prandial insulin) was Humalog (insulin lispro); it is more rapidly absorbed after subcutaneous injection than regular insulin, with an effect 15 minutes after injection. Other rapid-acting analogues are NovoRapid and Apidra, with similar profiles. All are rapidly absorbed due to amino acid sequences that reduce formation of dimers and hexamers (monomeric insulins are more rapidly absorbed). Fast-acting insulins do not require the injection-to-meal interval previously recommended for human insulin and animal insulins. The other type is long-acting insulin; the first of these was Lantus (insulin glargine). These have a steady effect for an extended period from 18 to 24 hours. Likewise, another protracted insulin analogue (Levemir) is based on a fatty acid acylation approach. A myristic acid molecule is attached to this analogue, which associates the insulin molecule with the abundant serum albumin, which in turn extends the effect and reduces the risk of hypoglycemia. Both protracted analogues need to be taken only once daily, and are used for type 1 diabetics as the basal insulin. A combination of a rapid-acting and a protracted insulin is also available, making it more likely for patients to achieve an insulin profile that mimics that of the body's own insulin release. Insulin is also used in many cell lines, such as CHO-s, HEK 293 or Sf9, for the manufacturing of monoclonal antibodies, virus vaccines, and gene therapy products.
Insulin is usually taken as subcutaneous injections by single-use syringes with needles, via an insulin pump, or by repeated-use insulin pens with disposable needles. Inhaled insulin is also available in the U.S. market.
Single-use insulin pen needles, such as HMD's Dispovan, described as India's first insulin pen needle, are designed to ease self-administration. Extra-thin walls and a multi-bevel tapered point are intended to minimise pain and ensure smooth medication delivery, a wide distribution channel aims to make pen needles affordable in developing regions of the country, and a universal design provides compatibility across insulin pens.
Unlike many medicines, insulin cannot be taken by mouth because, like nearly all other proteins introduced into the gastrointestinal tract, it is reduced to fragments, whereupon all activity is lost. There has been some research into ways to protect insulin from the digestive tract, so that it can be administered orally or sublingually.
In 2021, the World Health Organization added insulin to its model list of essential medicines.
Insulin, and all other medications, are supplied free of charge to people with diabetes by the National Health Service in the countries of the United Kingdom.
History of study
Discovery
In 1869, while studying the structure of the pancreas under a microscope, Paul Langerhans, a medical student in Berlin, identified some previously unnoticed tissue clumps scattered throughout the bulk of the pancreas. The function of the "little heaps of cells", later known as the islets of Langerhans, initially remained unknown, but Édouard Laguesse later suggested they might produce secretions that play a regulatory role in digestion. Paul Langerhans' son, Archibald, also helped to understand this regulatory role.
In 1889, the physician Oskar Minkowski, in collaboration with Joseph von Mering, removed the pancreas from a healthy dog to test its assumed role in digestion. On testing the urine, they found sugar, establishing for the first time a relationship between the pancreas and diabetes. In 1901, another major step was taken by the American physician and scientist Eugene Lindsay Opie, when he traced the role of the pancreas in diabetes to the islets of Langerhans: "Diabetes mellitus when the result of a lesion of the pancreas is caused by destruction of the islets of Langerhans and occurs only when these bodies are in part or wholly destroyed".
Over the next two decades researchers made several attempts to isolate the islets' secretions. In 1906 George Ludwig Zuelzer achieved partial success in treating dogs with pancreatic extract, but he was unable to continue his work. Between 1911 and 1912, E.L. Scott at the University of Chicago tried aqueous pancreatic extracts and noted "a slight diminution of glycosuria", but was unable to convince his director of his work's value; it was shut down. Israel Kleiner demonstrated similar effects at Rockefeller University in 1915, but World War I interrupted his work and he did not return to it.
In 1916, Nicolae Paulescu developed an aqueous pancreatic extract which, when injected into a diabetic dog, had a normalizing effect on blood sugar levels. He had to interrupt his experiments because of World War I, and in 1921 he wrote four papers about his work carried out in Bucharest and his tests on a diabetic dog. Later that year, he published "Research on the Role of the Pancreas in Food Assimilation".
The name "insulin" was coined by Edward Albert Sharpey-Schafer in 1916 for a hypothetical molecule produced by pancreatic islets of Langerhans (Latin insula for islet or island) that controls glucose metabolism. Unbeknown to Sharpey-Schafer, Jean de Meyer had introduced the very similar word "insuline" in 1909 for the same molecule.
Extraction and purification
In October 1920, Canadian Frederick Banting concluded that the digestive secretions that Minkowski had originally studied were breaking down the islet secretion, thereby making it impossible to extract successfully. A surgeon by training, Banting knew that blockages of the pancreatic duct would lead most of the pancreas to atrophy, while leaving the islets of Langerhans intact. He reasoned that a relatively pure extract could be made from the islets once most of the rest of the pancreas was gone. He jotted a note to himself: "Ligate pancreatic ducts of dog. Keep dogs alive till acini degenerate leaving Islets. Try to isolate the internal secretion of these + relieve glycosurea[sic]."
In the spring of 1921, Banting traveled to Toronto to explain his idea to John Macleod, Professor of Physiology at the University of Toronto. Macleod was initially skeptical, since Banting had no background in research and was not familiar with the latest literature, but he agreed to provide lab space for Banting to test out his ideas. Macleod also arranged for two undergraduates to be Banting's lab assistants that summer, but Banting required only one lab assistant. Charles Best and Clark Noble flipped a coin; Best won the coin toss and took the first shift. This proved unfortunate for Noble, as Banting kept Best for the entire summer and eventually shared half his Nobel Prize money and credit for the discovery with Best. On 30 July 1921, Banting and Best successfully isolated an extract ("isletin") from the islets of a duct-tied dog and injected it into a diabetic dog, finding that the extract reduced its blood sugar by 40% in 1 hour.
Banting and Best presented their results to Macleod on his return to Toronto in the fall of 1921, but Macleod pointed out flaws with the experimental design, and suggested the experiments be repeated with more dogs and better equipment. He moved Banting and Best into a better laboratory and began paying Banting a salary from his research grants. Several weeks later, the second round of experiments was also a success, and Macleod helped publish their results privately in Toronto that November. Bottlenecked by the time-consuming task of duct-tying dogs and waiting several weeks to extract insulin, Banting hit upon the idea of extracting insulin from the fetal calf pancreas, which had not yet developed digestive glands. By December, they had also succeeded in extracting insulin from the adult cow pancreas. Macleod discontinued all other research in his laboratory to concentrate on the purification of insulin. He invited biochemist James Collip to help with this task, and the team felt ready for a clinical test within a month.
On 11 January 1922, Leonard Thompson, a 14-year-old diabetic who lay dying at the Toronto General Hospital, was given the first injection of insulin. However, the extract was so impure that Thompson had a severe allergic reaction, and further injections were cancelled. Over the next 12 days, Collip worked day and night to improve the ox-pancreas extract. A second dose was injected on 23 January, eliminating the glycosuria that was typical of diabetes without causing any obvious side-effects. The first American patient was Elizabeth Hughes, the daughter of U.S. Secretary of State Charles Evans Hughes. The first patient treated in the U.S. was future woodcut artist James D. Havens; John Ralston Williams imported insulin from Toronto to Rochester, New York, to treat Havens.
Banting and Best never worked well with Collip, regarding him as something of an interloper, and Collip left the project soon after. Over the spring of 1922, Best managed to improve his techniques to the point where large quantities of insulin could be extracted on demand, but the preparation remained impure. The drug firm Eli Lilly and Company had offered assistance not long after the first publications in 1921, and they took Lilly up on the offer in April. In November, Lilly's head chemist, George B. Walden discovered isoelectric precipitation and was able to produce large quantities of highly refined insulin. Shortly thereafter, insulin was offered for sale to the general public.
Patent
Toward the end of January 1922, tensions mounted between the four "co-discoverers" of insulin and Collip briefly threatened to separately patent his purification process. John G. FitzGerald, director of the non-commercial public health institution Connaught Laboratories, therefore stepped in as peacemaker. The resulting agreement of 25 January 1922 established two key conditions: 1) that the collaborators would sign a contract agreeing not to take out a patent with a commercial pharmaceutical firm during an initial working period with Connaught; and 2) that no changes in research policy would be allowed unless first discussed among FitzGerald and the four collaborators. It helped contain disagreement and tied the research to Connaught's public mandate.
Initially, Macleod and Banting were particularly reluctant to patent their process for insulin on grounds of medical ethics. However, concerns remained that a private third party would hijack and monopolize the research (as Eli Lilly and Company had hinted), and that safe distribution would be difficult to guarantee without capacity for quality control. To this end, Edward Calvin Kendall gave valuable advice. He had isolated thyroxin at the Mayo Clinic in 1914 and patented the process through an arrangement between himself, the brothers Mayo, and the University of Minnesota, transferring the patent to the public university. On 12 April, Banting, Best, Collip, Macleod, and FitzGerald wrote jointly to the president of the University of Toronto to propose a similar arrangement, with the aim of assigning a patent to the Board of Governors of the university. The assignment to the University of Toronto Board of Governors was completed on 15 January 1923, for the token payment of $1.00. The arrangement was congratulated in The World's Work in 1923 as "a step forward in medical ethics". It has also received much media attention in the 2010s regarding the issue of healthcare and drug affordability.
Following further concern regarding Eli Lilly's attempts to separately patent parts of the manufacturing process, Connaught's Assistant Director and Head of the Insulin Division Robert Defries established a patent pooling policy which would require producers to freely share any improvements to the manufacturing process without compromising affordability.
Structural analysis and synthesis
Purified animal-sourced insulin was initially the only type of insulin available for experiments and diabetics. John Jacob Abel was the first to produce the crystallised form in 1926. Evidence of the protein nature was first given by Michael Somogyi, Edward A. Doisy, and Philip A. Shaffer in 1924. It was fully proven when Hans Jensen and Earl A. Evans Jr. isolated the amino acids phenylalanine and proline in 1935.
The amino acid structure of insulin was first characterized in 1951 by Frederick Sanger, and the first synthetic insulin was produced simultaneously in the labs of Panayotis Katsoyannis at the University of Pittsburgh and Helmut Zahn at RWTH Aachen University in the mid-1960s. Synthetic crystalline bovine insulin was achieved by Chinese researchers in 1965. The complete 3-dimensional structure of insulin was determined by X-ray crystallography in Dorothy Hodgkin's laboratory in 1969.
Hans E. Weber discovered preproinsulin while working as a research fellow at the University of California Los Angeles in 1974. In 1973–1974, Weber learned the techniques of how to isolate, purify, and translate messenger RNA. To further investigate insulin, he obtained pancreatic tissues from a slaughterhouse in Los Angeles and then later from animal stock at UCLA. He isolated and purified total messenger RNA from pancreatic islet cells which was then translated in oocytes from Xenopus laevis and precipitated using anti-insulin antibodies. When total translated protein was run on an SDS-polyacrylamide gel electrophoresis and sucrose gradient, peaks corresponding to insulin and proinsulin were isolated. However, to the surprise of Weber a third peak was isolated corresponding to a molecule larger than proinsulin. After reproducing the experiment several times, he consistently noted this large peak prior to proinsulin that he determined must be a larger precursor molecule upstream of proinsulin. In May 1975, at the American Diabetes Association meeting in New York, Weber gave an oral presentation of his work where he was the first to name this precursor molecule "preproinsulin". Following this oral presentation, Weber was invited to dinner to discuss his paper and findings by Donald Steiner, a researcher who contributed to the characterization of proinsulin. A year later in April 1976, this molecule was further characterized and sequenced by Steiner, referencing the work and discovery of Hans Weber. Preproinsulin became an important molecule to study the process of transcription and translation.
The first genetically engineered (recombinant), synthetic human insulin was produced using E. coli in 1978 by Arthur Riggs and Keiichi Itakura at the Beckman Research Institute of the City of Hope in collaboration with Herbert Boyer at Genentech. Genentech, founded by Swanson and Boyer, and Eli Lilly and Company went on in 1982 to sell the first commercially available biosynthetic human insulin under the brand name Humulin. The vast majority of insulin used worldwide is biosynthetic recombinant human insulin or its analogues. Recently, another recombinant approach has been used by a pioneering group of Canadian researchers, using an easily grown safflower plant, for the production of much cheaper insulin.
Recombinant insulin is produced either in yeast (usually Saccharomyces cerevisiae) or E. coli. In yeast, insulin may be engineered as a single-chain protein with a KexII endoprotease (a yeast homolog of PCI/PCII) site that separates the insulin A chain from a C-terminally truncated insulin B chain. A chemically synthesized C-terminal tail containing the missing threonine is then grafted onto insulin by reverse proteolysis using the inexpensive protease trypsin; typically the lysine on the C-terminal tail is protected with a chemical protecting group to prevent proteolysis. The ease of modular synthesis and the relative safety of modifications in that region account for common insulin analogs with C-terminal modifications (e.g. lispro, aspart, glulisine). The Genentech synthesis and completely chemical syntheses such as that by Bruce Merrifield are not preferred because the efficiency of recombining the two insulin chains is low, primarily due to competition with the precipitation of the insulin B chain.
Nobel Prizes
In 1923, the Nobel Prize committee credited the practical extraction of insulin to a team at the University of Toronto and awarded the Nobel Prize in Physiology or Medicine for the discovery of insulin to two men: Frederick Banting and John Macleod. Banting, incensed that Best was not mentioned, shared his prize money with him, and Macleod immediately shared his with James Collip. The patent for insulin was sold to the University of Toronto for one dollar.
Two other Nobel Prizes have been awarded for work on insulin. British molecular biologist Frederick Sanger, who determined the primary structure of insulin in 1955, was awarded the 1958 Nobel Prize in Chemistry. Rosalyn Sussman Yalow received the 1977 Nobel Prize in Medicine for the development of the radioimmunoassay for insulin.
Several Nobel Prizes also have an indirect connection with insulin. George Minot, co-recipient of the 1934 Nobel Prize for the development of the first effective treatment for pernicious anemia, had diabetes. William Castle observed that the 1921 discovery of insulin, arriving in time to keep Minot alive, was therefore also responsible for the discovery of a cure for pernicious anemia. Dorothy Hodgkin was awarded the 1964 Nobel Prize in Chemistry for her X-ray crystallographic determinations of the structures of important biochemical substances; she used the same technique to decipher the complete molecular structure of insulin in 1969.
Controversy
The work published by Banting, Best, Collip and Macleod represented the preparation of purified insulin extract suitable for use on human patients. Although Paulescu discovered the principles of the treatment, his saline extract could not be used on humans; he was not mentioned in the 1923 Nobel Prize. Ian Murray was particularly active in working to correct "the historical wrong" against Nicolae Paulescu. Murray was a professor of physiology at the Anderson College of Medicine in Glasgow, Scotland, the head of the department of Metabolic Diseases at a leading Glasgow hospital, vice-president of the British Association of Diabetes, and a founding member of the International Diabetes Federation. Murray wrote:
In a private communication, Arne Tiselius, former head of the Nobel Institute, expressed his personal opinion that Paulescu was equally worthy of the award in 1923.
References
Further reading
Famous Canadian Physicians: Sir Frederick Banting at Library and Archives Canada
External links
University of Toronto Libraries Collection: Discovery and Early Development of Insulin, 1920–1925
CBC Digital Archives – Banting, Best, Macleod, Collip: Chasing a Cure for Diabetes
Animations of insulin's action in the body at AboutKidsHealth.ca (archived 9 March 2011)
Animal products
Genes on human chromosome 11
Hormones of glucose metabolism
Human hormones
Insulin receptor agonists
Insulin-like growth factor receptor agonists
Pancreatic hormones
Peptide hormones
Recombinant proteins
Tumor markers | Insulin | Chemistry,Biology | 11,279 |
4,439,998 | https://en.wikipedia.org/wiki/Uranium%20borohydride | Uranium borohydride is the inorganic compound with the empirical formula U(BH4)4. Two polymeric forms are known, as well as a monomeric derivative that exists in the gas phase. Because the polymers convert to the gaseous form at mild temperatures, uranium borohydride once attracted much attention. It is a green solid.
Structure
It is a homoleptic coordination complex of uranium(IV) with borohydride (also called tetrahydroborate) ligands. These anions can serve as bidentate (κ2-BH4−) bridges between two uranium atoms or as tridentate (κ3-BH4−) ligands on single uranium atoms. In the solid state, a polymeric form exists with a 14-coordinate structure comprising two tridentate terminal groups and four bidentate bridging groups. Gaseous U(BH4)4 features a monomeric 12-coordinate uranium with four κ3-BH4− ligands, which envelop the metal, conferring volatility.
Preparation
This compound was first prepared by treating uranium tetrafluoride with aluminium borohydride:
UF4 + 2 Al(BH4)3 → U(BH4)4 + 2 Al(BH4)F2
It may also be prepared by the solid-state reaction of uranium tetrachloride with lithium borohydride:
UCl4 + 4 LiBH4 → U(BH4)4 + 4 LiCl
Although solid U(BH4)4 is a polymer, it undergoes cracking, converting to the monomer.
The related methylborohydride complex U(BH3CH3)4 is monomeric as a solid and hence more volatile.
History
During the Manhattan Project, the need arose to find volatile compounds of uranium suitable for use in the diffusion separation of uranium isotopes. Uranium borohydride is, after uranium hexafluoride, the most volatile uranium compound known, with an appreciable vapor pressure at 60 °C. Uranium borohydride was discovered by Hermann Irving Schlesinger and Herbert C. Brown, who also discovered sodium borohydride.
Uranium hexafluoride is corrosive, which led to serious consideration of the borohydride. However, by the time the synthesis method was finalized, the problems related to uranium hexafluoride were solved. Borohydrides are nonideal ligands for isotope separations, since there are isotopes of boron that occur naturally in high abundance: 10B (20%) and 11B (80%), while fluorine-19 is the only isotope of fluorine that occurs in nature in more than trace quantities.
References
Uranium(IV) compounds
Borohydrides
Inorganic polymers
Coordination polymers | Uranium borohydride | Chemistry | 584 |
55,339,303 | https://en.wikipedia.org/wiki/The%20Rickchurian%20Mortydate | "The Rickchurian Mortydate" is the tenth and final episode of the third season of the American science fiction television series Rick and Morty. It follows the titular grandson and grandfather duo as they feud with the President of the United States. The episode, directed by Anthony Chun and written by series co-creator Dan Harmon, aired on Adult Swim on October 1, 2017.
Plot
The President calls on Rick and Morty to defeat a monster in the "Kennedy sex tunnels" underneath the White House, which they do with little effort. Annoyed that the President constantly calls on them without any gratitude, "like Ghostbusters", they go back home to play Minecraft, and the President quickly finds out. When he phones the pair and calls out their neglect of duty, the resulting argument becomes a battle of egos. Rick reiterates that they operate above the authority of the U.S. government, and the President declares that the government will no longer accept the pair's services to save the world. However, the two keep one-upping the President by negotiating a peace deal with a microscopic culture in the Amazon rainforest and a treaty ending the Israeli-Palestinian conflict. The feud culminates in a fight in the White House between Rick and the President's security, in which Rick tries to force the President to pose for a selfie with Morty, despite Morty saying he does not want one.
Meanwhile, Beth begins to fear she might be a clone made by Rick after the events of "The ABCs of Beth". When she asks Rick, he denies it, but he does not say whether he would tell her if she were a clone, and he mentions that if she were a clone who discovered her identity, he would have to kill her, causing her to panic. Beth reunites with Jerry to figure out the truth, and they decide to get back together. Shortly after, the entire family gets together to hide from Rick because of his conflict with the President, but he tracks them down. Rick eventually reassures Beth that he will not kill her and submits to Jerry once again being a family member.
Rick ends his conflict with the President by pretending to be Fly Fishing Rick, a Rick from a different reality, and calling a truce. The episode ends with the family happy to be together again, although Rick is disappointed about losing his dominant position. In a post-credits scene, Mr. Poopybutthole returns to apologize for not appearing in Season 3, but he has gotten married and has a son. He ends the scene by saying that it will be a long wait until season four.
Production
On September 20, 2017, the episode title was revealed to be "The Rickchurian Mortydate" by The Futon Critic. The writing and directorial credits of Dan Harmon and Anthony Chun, respectively, were announced upon the episode's airing. The title is a reference to The Manchurian Candidate, a 1962 political thriller film.
The episode stars Justin Roiland as Rick Sanchez and Morty Smith, Chris Parnell as Jerry Smith, Sarah Chalke as Beth Smith, and Spencer Grammer as Summer Smith. Also in the episode, Roiland voices the recurring character Mr. Poopybutthole, actor Keith David voices the President of the United States, a major character in the episode, and Tara Strong portrays the Presidentress of the Mega Gargantuans, an alien species.
Reception
Viewing figures
The episode was viewed by 2.60 million American viewers upon its airdate.
Critical reception
Joe Matar of Den of Geek said the season finale "didn’t make for a terribly funny episode" and that it "got a bit preoccupied in expositing itself. This was mostly a very, very solid season, but this finale was a mad scramble to tidy up Beth and Jerry’s divorce while Rick horsed around with the President."
Jesse Schedeen of IGN both criticized and praised the episode, saying "The Rickchurian Mortydate" "isn't the most dramatic or emotionally devastating episode ever, but it's still a fun, memorable way to wrap up the show's most eclectic season to date. Rick's violent rivalry with the president entertained from start to finish, while Beth's clone crisis gave the episode the dramatic edge it needed. Best of all, this episode gave Rick just the sort of comeuppance he needed after his sinister behavior in Season 3."
References
Fiction about size change
Rick and Morty episodes
2017 television episodes
Television episodes written by Dan Harmon | The Rickchurian Mortydate | Physics,Mathematics | 904 |
42,935 | https://en.wikipedia.org/wiki/Detection | In general, detection is the action of accessing information without specific cooperation from the sender.
In the history of radio communications, the term "detector" was first used for a device that detected the simple presence or absence of a radio signal, since all communications were in Morse code. The term is still in use today to describe a component that extracts a particular signal from all of the electromagnetic waves present. Detection is usually based on the frequency of the carrier signal, as in the familiar frequencies of radio broadcasting, but it may also involve filtering a faint signal from noise, as in radio astronomy, or reconstructing a hidden signal, as in steganography.
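The carrier-based detection described above can be illustrated with a short, self-contained sketch. This is a minimal illustration, not taken from any particular receiver design: it generates an amplitude-modulated carrier and then recovers the message with a rectify-and-smooth envelope detector (the software analogue of a diode detector). All signal parameters are arbitrary illustrative values.

```python
# Minimal envelope-detection sketch; fs, tone and carrier frequencies,
# and the smoothing factor are illustrative assumptions.
import numpy as np

fs = 100_000                                  # sample rate (Hz)
t = np.arange(0, 0.02, 1 / fs)                # 20 ms of signal
message = 0.5 * np.sin(2 * np.pi * 440 * t)   # 440 Hz audio tone
carrier = np.cos(2 * np.pi * 10_000 * t)      # 10 kHz carrier
am = (1.0 + message) * carrier                # AM waveform

# Diode-plus-RC-style detector: rectify, then one-pole low-pass filter.
rectified = np.abs(am)
alpha = 0.05                                  # smoothing factor (~800 Hz cutoff)
envelope = np.empty_like(rectified)
acc = 0.0
for i, x in enumerate(rectified):
    acc += alpha * (x - acc)                  # first-order IIR low-pass
    envelope[i] = acc

recovered = envelope - envelope.mean()        # remove DC; approximates the tone
```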
In optoelectronics, "detection" means converting a received optical input to an electrical output. For example, the light signal received through an optical fiber is converted to an electrical signal in a detector such as a photodiode.
In steganography, the attempt to detect hidden signals in suspected carrier material is referred to as steganalysis. Steganalysis differs from most other types of detection in that it can often only determine the probability that a hidden message exists; this is in contrast to the detection of signals which are simply encrypted, as the ciphertext can often be identified with certainty, even if it cannot be decoded.
In the military, detection refers to the special discipline of reconnaissance with the aim of recognizing the presence of an object in a location or environment.
Finally, the art of detection, also known as following clues, is the work of a detective in attempting to reconstruct a sequence of events by identifying the relevant information in a situation.
See also
Object detection
Signal detection theory
Communication
Wireless locating | Detection | Technology | 352 |
1,714,346 | https://en.wikipedia.org/wiki/Freiberg%20University%20of%20Mining%20and%20Technology | The Technische Universität Bergakademie Freiberg (abbreviation: TU Bergakademie Freiberg, TUBAF) is a public university of technology with 3,471 students in the city of Freiberg, Saxony, Germany. The university focuses on the exploration, mining & extraction, processing, and recycling of natural resources & scrap, as well as on developing new materials and researching renewable energies. It is highly specialized in these fields.
Today, it is the oldest university of mining and metallurgy in the world.
History
Pre-1945
The institution was established in 1765, during the Age of Enlightenment, by Prince Francis Xavier of Saxony based on plans by Friedrich Wilhelm von Oppel and Friedrich Anton von Heynitz. At the time, it was called the Kurfürstlich-Sächsische Bergakademie zu Freiberg (by 1806: Königlich-Sächsische Bergakademie zu Freiberg). Its main purpose was the education of highly skilled miners and scientists in fields connected to mining and metallurgy. Mining was needed as an industry to regenerate Saxony's economy after its defeat in the Seven Years' War.
Before the establishment of the Bergakademie (mining school), four similar institutions had been founded in other countries: Potosí, Bolivia (1757–1786); Kongsberg, Norway (1757–1814); Schemnitz, today's Slovakia (Banská Štiavnica, 1762–1919); and Prague (1762–1772). Since these no longer exist, Freiberg is the oldest still-operating university of mining and technology. After the École des Ponts et Chaussées, established in 1747, it is also the second-oldest institution of higher learning focused on science and technology (a university of technology).
The chemical elements indium (1863) and germanium (1886) were discovered by scientists of Freiberg University. The polymath Alexander von Humboldt studied mining at the Bergakademie from 1791 to 1792, as did the poet Novalis from 1797 to 1799.
In 1899, it was incorporated as a Technische Hochschule. In 1905, the Bergakademie gained the right to grant doctorates in engineering (Dr.-Ing.), and in 1939 in the natural sciences (Dr. rer. nat.). In 1940, two new faculties (divisions) were established: Natural Sciences and Mining & Metallurgy. In 1956, a further faculty for economics was added.
1945 to 1990
After World War II, the education of future engineers and scientists, as well as research, was quickly re-established in order to (re)build primary industry in the Soviet occupation zone/GDR. The campus and faculty were expanded rapidly, and the educational direction changed through the establishment of new courses. The student demographics also changed (the percentage of women increased), since access to college was directed by central authorities. Additionally, children of "workers & farmers", who traditionally had not pursued tertiary education, were supported by a college preparation institute (Arbeiter-und-Bauern-Fakultät (ABF) "Wilhelm Pieck").
Since 1990
In the aftermath of German reunification, the infrastructure and academic body were reorganized to fit the new political circumstances. After its incorporation into the West German system of higher education, the Bergakademie quickly established a prime position as "The University of Resources". It was the first East German university to join the German Research Foundation. In the process, the social sciences section was eliminated, while the faculty for economics was restructured and expanded to 15 professorships.
One of the emerging focus points in research was semiconductors, which led to corporations settling in and around Freiberg. These include Siltronic AG, Meyer Burger Technology AG, and JT Energy Systems, specializing in semiconductors, solar power, and lithium-ion batteries, respectively. Besides geo- and materials sciences, environmental science became a university strong point.
In March 1993, then Technische Hochschule Bergakademie Freiberg was renamed Technische Universität Bergakademie Freiberg, underlining its increased status and significance.
Today, TUBAF is a modern & environmentally focused university, internationally recognized as a "university of closed resource cycles".
The university's history is presented in the Historicum through numerous exhibits, paintings and photographs, and documents. The Forum for Mining History (Forum Montangeschichte) is responsible for digitizing and publishing historic essays and publications concerning Saxony's historical mining and metallurgical industry.
Historical figures and scientific achievements
A number of known figures studied and/or lectured at the Bergakademie:
Abraham Gottlob Werner (1749–1817) was a highly influential lecturer and scientist, who systematized minerals and rock formations. He is considered the founder of an early form of geology as a science, called 'geognosis'. Thus, he laid the foundation for mineralogy and resource deposit theory. During his tenure, he attracted a wide range of students and peers, among them Alexander von Humboldt, Franz Xaver von Baader, Leopold von Buch, Friedrich Mohs, and Robert Jameson.
Wilhelm August Lampadius (1772–1842) was a professor of chemistry and metallurgy. He installed the first gas light on the European continent and advanced the technology to an industrial scale. Also, Lampadius founded the world's first chemical research laboratory in a university in 1796/97.
The poet Novalis (1772–1801; Georg Philipp Friedrich von Hardenberg) studied in Freiberg from 1797 through 1799. He also created his pseudonym for his literary works during this time. Many topics and themes of his work came from the mining culture surrounding him.
The polymath Alexander von Humboldt enrolled on 14 June 1791 and went through a short but intense program, qualifying him in the natural sciences and metallurgy. He took a special interest in developing appliances, such as the "Licht-Erhalter". One of Humboldt's most famous contributions was his study of subterranean vegetation in the mines, published in 1793 as "Flora Fribergensis"; many of the plants described were discovered and characterized by him.
From 1848 to 1851, Gustav Anton Zeuner studied in Freiberg. He later laid the groundwork for thermodynamics as a field of study in engineering.
Karl Heinrich Adolf Bernhard Ledebur was one of the first to study processes in metallurgy and ironwork empirically with modern scientific tools and methods. During his tenure, he founded the university's iron laboratory.
In 1863, the chemical element indium was discovered by chemist Hieronymus Theodor Richter (1824–1898) and physicist Ferdinand Reich (1799–1882), naming it after its indigo-blue colored flame.
In 1886, chemistry professor Clemens Alexander Winkler (1838–1904) isolated the element germanium for the first time while analyzing the rather uncommon mineral argyrodite. This proved Dmitri Mendeleev's periodic table and his prediction of a so-called ekasilicon.
In the field of process engineering, Erich Rammler and Georg Bilkenroth were awarded the National Prize of the German Democratic Republic (1st class) for their work on lignite coke & coal gasification in 1951.
Profile
The university has defined core fields that create a unique profile in education and research:
Geo
The exploration, investigation, and resource-efficient use of the Earth system is the focal point of TUBAF's geosciences. The work is based on innovative and novel technologies, e.g. for finding resources, extracting them with minimal environmental damage, and processing them efficiently.
Materials
Innovative materials for today's problems and applications are being developed. This includes both the production and the recycling of these materials.
Energy
In this field, scientists develop new, green solutions to energy problems. Production, use, and storage of energy are researched in conjunction. Additionally, digitisation of the energy sector is another topic.
Environment
Environmental sciences focus on safety and conservation aspects, e.g. of drinking water, as well as on processes in the primary and energy industry.
Technology
Engineers work on future-oriented solutions, novel products, and optimization of already existing processes & methods. Their studies include applied research as well as foundational questions.
Economics
Economic topics arise from all of the fields mentioned above. Therefore, researchers in this field work on projects in pure economic disciplines and interdisciplinary projects alike.
Research
TUBAF describes itself as a modern research university, especially focused on current and future ecological and economic challenges. Interdisciplinary research is emphasized. Most investigated topics revolve around alternative methods in resource extraction, energy systems, composite materials, and recycling. The university is recognized worldwide for its expertise in geo- and materials science.
According to a 2022 study, TUBAF is in the top 10 of universities in Germany in third-party (private) funding per professor. A number of patents and inventions by TUBAF-based researchers are recognized each year.
With SAXEED, a founders network, start-ups are being supported. The program has helped several successful companies like NaPaGen GmbH, Just in Time-Food GmbH and Rockfeel GmbH.
Programs
The university offers programs taught in German, as well as international programs entirely taught in English. All in all, there are 75 programs. Among those are unique ones, such as Applied Natural Sciences, Industrial Archeology, Mine-Surveying, and Chemistry (Diplom), which are taught in German.
Admission to all programs from Bachelor through Ph.D. is performance-based and without tuition fees (as usual for consecutive studies at German public universities); students pay a registration fee of €94 per semester, of which €7 is dedicated solely to the Student Body (Council).
13 master's programs (as of winter semester 2022/23) are taught in English:
Advanced Materials Analysis (AMA)
Advanced Mineral Resource Development (AMRD)
Computational Materials Science (CMS)
EMerald master in Resources Engineering (EMerald)
Geomatics for Mineral Resource Management
Geoscience
Groundwater Management
International Business and Resources in Emerging Markets (IBRE)
Mathematics for Data and Resource Sciences
Mechanical and Process Engineering (MPE)
Metallic Materials Technology (MMT)
Sustainable Mining and Remediation Management (MoRe)
Sustainable and Innovative Natural Resource Management (SINReM)
Technology and Application of Inorganic Engineering Materials (TAIEM)
Freiberg University of Mining and Technology has been ranked among the best universities worldwide for mining engineering.
Though a public university, it has a relatively large private endowment. The university is home to one of the largest German university foundations.
Structure
TU Bergakademie Freiberg is led by a rectorate, legislative decisions are made by the senate or extended senate.
The rectorate consists of rector, chancellor, and two prorectors for Education and Research, respectively.
The university has 6 subdivisions called faculties:
Mathematics and Computer Science
Chemistry and Physics
Geosciences, Geoengineering and Mining
Mechanical, Process and Energy Engineering
Materials Science and Technology
Economics
Student body
In the winter semester 2022/23, 3,471 students were enrolled at TUBAF, 85% of them in MINT (STEM) programs, with a 30% share of women.
Freiberg is a highly international university. Among its c. 3500 students, 41% are from foreign countries. There are double degree agreements with universities in China, France, Ghana, Italy, Poland, Russia, Thailand, and others. About 30% of the doctoral degrees awarded by the university are given to foreign students.
Campus, institutes & facilities
Campus
Unlike other historical universities in Germany, TUBAF has a campus with most of its buildings and facilities in close proximity. The oldest buildings lie in the historic (medieval) city center, among these the
Main Building - administration, student office, and Faculty 1
Schlossplatzquartier - Faculty 6, international office, SIZ (student café)
Alte Mensa - former dining hall, now an event location and student-run bar
Werner-Bau - Faculty 3
The majority of the university's infrastructure can be found in the north of the city, including
Library "Gregorius Agricola"
Dining Hall "Neue Mensa" with cafeteria and a student-run bar
several buildings of Faculties 1 through 5
student housing (dorms)
The two main parts are connected by a so-called 'corridor' of recent buildings and greenery.
Additionally, a part of the university is located above and around the "Lehr- und Forschungsbergwerk Reiche Zeche", a historical mine, operated today as a teaching and research facility.
Other infrastructure includes the university sports centre, Lessing-Bau and the Scientific Diving Center.
Institutes and facilities
Through its specialization, TUBAF has created a number of institutions, centers, and facilities with state-of-the-art research equipment. Unique in Europe is the still-operational mine used for teaching as well as for underground exploration research. TUBAF is one of two German institutions where scientific divers are trained.
List of notable facilities
EIT RawMaterials – Regional Center Freiberg (RCF)
ERP-Kompetenzzentrum sächsischer Hochschulen
Forschungs- und Lehrbergwerk "Reiche Zeche"
Interdisziplinäres Ökologisches Zentrum (IÖZ)
Zentrales Reinraumlabor (ZRL)
Zentrum für effiziente Hochtemperatur-Stoffwandlung (ZeHS)
Biohydrometallurgical Center for Strategic Elements (BHMZ)
DBI Bergakademie
Freiberger Hochdruckforschungszentrum (FHP)
Mine Water Research Center (MWRC)
Scientific Diving Center Freiberg (SDC Freiberg)
Zentrum für Innovationskompetenz (ZIK) VIRTUHCON
Zentrum für Wasserforschung (ZeWaF)
Partners & cooperations
TU Bergakademie Freiberg has an extensive network of regional and national cooperation partners in science and industry.
These include, among others, affiliated institutes as independent research facilities that cooperate with the university and complement its teaching and research offerings. These include:
IBEXU Institut für Sicherheitstechnik GmbH Freiberg
Forschungsinstitut für Leder und Kunststoffbahnen (FILK) gGmbH Freiberg
Stahlzentrum Freiberg e. V.
Institut für Korrosionsschutz Dresden GmbH
UVR – FIA GmbH Verfahrensentwicklung-Umweltschutztechnik-Recycling Freiberg
DBI – Gastechnologisches Institut GmbH Freiberg
HAVER ENGINEERING GmbH – Ingenieurbüro für Aufbereitungstechnik, Meißen
DBI VIRTUHCON GmbH, Freiberg
PARFORCE Engineering & Consulting GmbH, Freiberg
In addition, there are cooperations and joint projects with non-university institutions.
In 2011, the university founded the joint Helmholtz Institute Freiberg for Resource Technology ("Helmholtz-Institut Freiberg für Ressourcentechnologie") with the Helmholtz-Zentrum Dresden-Rossendorf to develop technologies for raw material supply, utilization and environmentally friendly recycling.
TU Bergakademie Freiberg also has close cooperation in the field of electronic material production and material processing with the Fraunhofer Technology Center for High Performance Materials (THM) and the Fraunhofer Institute for Solar Energy Systems (ISE) in Freiburg. It also operates a joint department of the Fraunhofer Institute for Integrated Systems and Device Technology (IISB) in Erlangen.
The Institute of Geophysics at TU Bergakademie Freiberg operates the Berggießhübel Seismological Observatory.
TUBAF is also a co-initiator of the university-based "Internationalen Hochschulinstituts Zittau" (IHI), founded in 1993 and now a subdivision of TU Dresden, and the start-up network SAXEED.
In addition to direct cooperation with individual companies and institutions, participation in international networks and associations is an essential instrument for the transfer of ideas, knowledge and technology. The TU Bergakademie Freiberg is, among others, a member of:
Geokompetenzzentrum Freiberg e. V. (GKZ)
EIT RawMaterials
Silicon Saxony
EnergieRohstoff-Netzwerk (ERN)
Freiberger Interessengemeinschaft der Recycling- und Entsorgungsunternehmen e. V. (FIRE)
InnoRegio Mittelsachsen
Interdisziplinäres Kompetenzzentrum Flächenrecycling CiF e. V. Freiberg/Berlin/Aachen
World Energy Council (WEC)
Deutsch-Russischen Rohstoff-Forum (DRRF)*
German Resource Research Institute (GERRI)
Energy Saxony e. V.
biosaxony e. V.
Leichtbau-Allianz Sachsen e. V.
GlasCampus Torgau
AMZ Sachsen
4transfer Innovations- und Transferverbund
TransferAllianz e. V.
In total, the university cooperates with 274 partner institutions in 74 countries. Connections to non-European companies and research institutions exist, among others, with Bolivia, Chile, China, Mozambique, South Africa, Vietnam and Mongolia. In Mongolia and Kenya, for example, TU Bergakademie Freiberg is helping to establish the German Mongolian Institute for Resources and Technology (GMIT) in Ulan Bator and the Kenyan German Centre for Mining, Environmental Engineering and Resource Management (CEMEREM) at Taita Taveta University College in Voi. It is also active in research and teaching with a wide variety of projects at universities in Russia*, South America, Asia and Africa.
All in all, TUBAF currently has 184 active partnerships (including 76 ERASMUS agreements & 18 interdisciplinary university cooperations), 755 official contacts with other universities worldwide, and Joint-/Double-Degree-agreements with partner universities in China, France, Ghana, Italy, Poland, Thailand, Czech Republic, Hungary and Ukraine. Contracts with Russian universities have been suspended due to the Russian invasion of Ukraine.
Foundations and trusts
The history of foundations for the Freiberg Mining Academy dates back to 1702, when a scholarship fund was established by the Saxon Elector at the Freiberg Mining Authority. In the further course, the university repeatedly received grants, which were initially primarily used to support students, and later increasingly for research infrastructure. After 1990, the idea of foundations and trusts, which had been interrupted after the Second World War, was revived. The following important foundations were established:
Sparkassen-Stiftung (1998)
Stiftung Technische Universität Bergakademie Freiberg (2002)
Pohl-Ströher-Mineralienstiftung (2004)
Dr.-Erich-Krüger-Stiftung (2006)
Stiftung Mineralogische Sammlung Deutschland (2008)
Dr. Frank-Michael und Marianne Engel-Stiftungsfonds (2009)
Heinisch-Stiftung (2015)
Ursula und Prof. Dr. Wolf-Dieter Schneider Stiftung (2019)
Stiftung Christian Grosse Geschichtsbibliothek (2019)
From the Dr.-Erich-Krüger-Stiftung, TUBAF received several million euros, the largest endowment of a state university in Germany to date. The university uses these funds to equip research with large-scale equipment and to support doctoral students. On July 12, 2007, Peter Krüger, who had been appointed honorary senator of the Bergakademie shortly before, died in Munich. His wife Erika Krüger, who was made an honorary senator of the university in 2017, continues the foundation's work. Among other things, she made possible the establishment of the Graduate and Research Academy, the Freiberg Biohydrometallurgical Center and the Freiberg High Pressure Research Center. Erika Krüger also supports the university privately with great commitment and considerable financial resources, including "Deutschlandstipendium" scholarships for particularly committed students.
Collections
Since its foundation in 1765, TUBAF has had premises to house the models, equipment, specimens and instruments used in research and teaching. In addition to the library, where manuscripts, maps and mine plans were also kept, the geoscientific collections emerged from the so-called "Stufenkabinett". There was also a collection of models of innovative mining machines, which were produced in a separate workshop from 1840 and later housed in a separate model room. Over the last 250 years, a large number of new technical collections have been added. Today, they comprise more than one million scientific specimens, 15,000 scientific instruments and models, and about 1,000 works of art and cultural-historical objects.
The geoscientific collections of the TU Bergakademie Freiberg are among the ten oldest as well as most extensive geoscientific and mining collections in the world. They serve practical student education and training, complement research, and still embody enormous scientific potential today. About five percent of the total holdings are displayed in the show collections. These include a Mineralogical Collection, a Deposit Collection and a Petrological Collection in the Werner Building, a Paleontological and Stratigraphical Collection in the Humboldt Building and a Fuel Geological Collection near the "Reiche Zeche" mine.
Since October 2008, TU Bergakademie Freiberg has also exhibited the world's largest private mineral collection in Freudenstein Castle. The permanent exhibition terra mineralia is on permanent loan from the Swiss patron Erika Pohl-Ströher and celebrated its tenth anniversary in April 2019.
The minerals of German sites from the famous Pohl-Ströher Mineral Foundation, special mineral specimens from the Geosciences Collections of TUBAF, and minerals from the university foundation "Mineralogische Sammlung Deutschland" (Mineralogical Collection Germany), established in 2008, are on display in the Krügerhaus, which was renovated in 2012 by the Dr.-Erich-Krüger-Stiftung. The exhibition is open to the public.
In the Historicum, the university presents numerous exhibits, pictures and contemporary documents in a vivid way.
In the Forum Montangeschichte one can find since 2015 digitized and in full text freely available historical essays on Saxon mining and metallurgical history, including previously unpublished works, as well as current publications.
Notable alumni
Luo Gan, former Member of the Politburo Standing Committee of the Chinese Communist Party.
Mary Hegeler Carus – the first woman to legally enroll (in 1885).
Edward Renouf, chemistry professor, Johns Hopkins University.
Alexander von Humboldt, renowned naturalist, historian, and humanitarian
International University Rankings
The 2021 QS World University Rankings by subject rated TU Bergakademie Freiberg No. 17 for Mineral and Mining worldwide and No. 3 in Europe. The Center for World University Rankings (CWUR) ranked TU Freiberg 64th among German universities on research performance.
References
External links
Schools of mines
Educational institutions established in 1765
1765 establishments in the Holy Roman Empire
Freiberg
Mining in the Ore Mountains
Universities and colleges in Saxony
Technische Universitäten in Germany | Freiberg University of Mining and Technology | Engineering | 4,816 |
14,799,971 | https://en.wikipedia.org/wiki/KDM5D | Lysine-specific demethylase 5D is an enzyme that in humans is encoded by the KDM5D gene. KDM5D belongs to the alpha-ketoglutarate-dependent hydroxylases superfamily.
This gene encodes a protein containing zinc finger domains. A short peptide derived from this protein is a minor histocompatibility antigen which can lead to graft rejection of male donor cells in a female recipient.
References
Further reading
External links
Transcription factors
Human 2OG oxygenases
EC 1.14.11 | KDM5D | Chemistry,Biology | 113 |
47,572,997 | https://en.wikipedia.org/wiki/Induced%20thymic%20epithelial%20cell | An induced thymic epithelial cell (iTEC) is a cell that has been experimentally reprogrammed, or induced, to adopt the identity and function of a thymic epithelial cell.
References
Stem cells
Thymus
Organ transplantation | Induced thymic epithelial cell | Biology | 39 |
1,530,353 | https://en.wikipedia.org/wiki/Mutation%20rate | In genetics, the mutation rate is the frequency of new mutations in a single gene, nucleotide sequence, or organism over time. Mutation rates are not constant, and because there are many different types of mutation, rates are given for specific classes of mutation. Point mutations are a class of mutations that change a single base. Missense, nonsense, and synonymous mutations are three subtypes of point mutations. The rate of these substitutions can be further subdivided into a mutation spectrum, which describes the influence of the genetic context on the mutation rate.
There are several natural units of time for each of these rates, with rates being characterized either as mutations per base pair per cell division, per gene per generation, or per genome per generation. The mutation rate of an organism is an evolved characteristic and is strongly influenced by the genetics of each organism, in addition to strong influence from the environment. The upper and lower limits to which mutation rates can evolve are the subject of ongoing investigation. The mutation rate also varies across the genome.
An increased mutation rate in humans carries health risks, for example cancer and other hereditary diseases. Knowledge of mutation rates is therefore vital to understanding the future course of cancers and many hereditary diseases.
Background
Different genetic variants within a species are referred to as alleles, therefore a new mutation can create a new allele. In population genetics, each allele is characterized by a selection coefficient, which measures the expected change in an allele's frequency over time. The selection coefficient can either be negative, corresponding to an expected decrease, positive, corresponding to an expected increase, or zero, corresponding to no expected change. The distribution of fitness effects of new mutations is an important parameter in population genetics and has been the subject of extensive investigation. Although measurements of this distribution have been inconsistent in the past, it is now generally thought that the majority of mutations are mildly deleterious, that many have little effect on an organism's fitness, and that a few can be favorable.
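As a concrete illustration of the selection coefficient (a standard one-locus haploid model, not taken from this article): if an allele at frequency $p$ has fitness $1+s$ relative to $1$, its expected frequency change in one generation is

\[
\Delta p \;=\; \frac{s\,p\,(1-p)}{1+s\,p} \;\approx\; s\,p\,(1-p) \quad \text{for } |s| \ll 1,
\]

so the expected change is positive for $s>0$, negative for $s<0$, and zero for $s=0$, matching the three cases described above.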
Because of natural selection, unfavorable mutations will typically be eliminated from a population while favorable changes are generally kept for the next generation, and neutral changes accumulate at the rate at which they are created by mutation. This process happens through reproduction: in a particular generation, the 'best fit' survive with higher probability, passing their genes to their offspring. The sign of the change in this probability defines mutations as beneficial, neutral, or harmful to organisms.
Measurement
An organism's mutation rates can be measured by a number of techniques.
One way to measure the mutation rate is by the fluctuation test, also known as the Luria–Delbrück experiment. This experiment demonstrated that bacterial mutations arise in the absence of selection rather than being induced by selection.
This is very important to the study of mutation rates because it established experimentally that mutations can occur without selection playing any role; indeed, mutation and selection are completely distinct evolutionary forces. Different DNA sequences can have different propensities to mutation (see below), so mutations may not occur randomly across the genome.
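A minimal simulation sketch of the fluctuation test's logic (illustrative parameters, not Luria and Delbrück's actual protocol): if mutations arise spontaneously during growth, rare early "jackpot" mutations leave many descendants, so mutant counts across parallel cultures show a variance far exceeding the mean; if mutations were instead induced only at selection, counts would be approximately Poisson-distributed (variance roughly equal to the mean).

```python
# Illustrative fluctuation-test simulation; MU, DOUBLINGS and CULTURES
# are arbitrary assumed parameters.
import numpy as np

rng = np.random.default_rng(0)
MU = 1e-6            # assumed mutation rate per cell division
DOUBLINGS = 20       # culture grows from 1 cell to 2**20 cells
CULTURES = 500

def mutants_after_growth() -> int:
    """Mutant count if mutations arise spontaneously during growth."""
    mutants, n = 0, 1
    for _ in range(DOUBLINGS):
        new = rng.binomial(n - mutants, MU)  # non-mutant cells may mutate
        mutants = 2 * mutants + new          # mutant lineages breed true
        n *= 2
    return mutants

spontaneous = np.array([mutants_after_growth() for _ in range(CULTURES)])
# Induced-mutation hypothesis with the same mean: Poisson-distributed.
induced = rng.poisson(spontaneous.mean(), size=CULTURES)

print("spontaneous var/mean:", spontaneous.var() / spontaneous.mean())  # >> 1
print("induced     var/mean:", induced.var() / induced.mean())          # ~ 1
```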
The most commonly measured class of mutations are substitutions, because they are relatively easy to measure with standard analyses of DNA sequence data. However substitutions have a substantially different rate of mutation (10−8 to 10−9 per generation for most cellular organisms) than other classes of mutation, which are frequently much higher (~10−3 per generation for satellite DNA expansion/contraction).
Substitution rates
Many sites in an organism's genome may admit mutations with small fitness effects. These sites are typically called neutral sites. Theoretically mutations under no selection become fixed between organisms at precisely the mutation rate. Fixed synonymous mutations, i.e. synonymous substitutions, are changes to the sequence of a gene that do not change the protein produced by that gene. They are often used as estimates of that mutation rate, despite the fact that some synonymous mutations have fitness effects. As an example, mutation rates have been directly inferred from the whole genome sequences of experimentally evolved replicate lines of Escherichia coli B.
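The reasoning behind this classic result (standard neutral theory, sketched here for a diploid population of size $N$) is that $2N\mu$ new neutral mutations arise at a site per generation across the $2N$ gene copies, while each new neutral mutation fixes with probability $1/(2N)$, so the substitution rate is

\[
k \;=\; 2N\mu \times \frac{1}{2N} \;=\; \mu ,
\]

equal to the mutation rate and independent of population size.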
Mutation accumulation lines
A particularly labor-intensive way of characterizing the mutation rate is the mutation accumulation line.
Mutation accumulation lines have been used to characterize mutation rates with the Bateman-Mukai Method and direct sequencing of well-studied experimental organisms ranging from intestinal bacteria (E. coli), roundworms (C. elegans), yeast (S. cerevisiae), fruit flies (D. melanogaster), and small ephemeral plants (A. thaliana).
Variation in mutation rates
Mutation rates differ between species and even between different regions of the genome of a single species. Mutation rates can also differ even between genotypes of the same species; for example, bacteria have been observed to evolve hypermutability as they adapt to new selective conditions. These different rates of nucleotide substitution are measured in substitutions (fixed mutations) per base pair per generation. For example, mutations in intergenic, or non-coding, DNA tend to accumulate at a faster rate than mutations in DNA that is actively in use in the organism (gene expression). That is not necessarily due to a higher mutation rate, but to lower levels of purifying selection. A region which mutates at a predictable rate is a candidate for use as a molecular clock.
If the rate of neutral mutations in a sequence is assumed to be constant (clock-like), and if most differences between species are neutral rather than adaptive, then the number of differences between two different species can be used to estimate how long ago two species diverged (see molecular clock). In fact, the mutation rate of an organism may change in response to environmental stress. For example, UV light damages DNA, which may result in error prone attempts by the cell to perform DNA repair.
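Under these assumptions, the divergence time follows from a one-line calculation (the numbers below are illustrative, not from this article): with per-site neutral divergence $d$ between two species and mutation rate $\mu$ per site per year, both lineages accumulate changes, so

\[
t \;=\; \frac{d}{2\mu}, \qquad \text{e.g. } d = 0.02,\ \mu = 10^{-9}\ \text{yr}^{-1} \;\Rightarrow\; t = \frac{0.02}{2\times 10^{-9}} = 10^{7}\ \text{years}.
\]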
The human mutation rate is higher in the male germ line (sperm) than the female (egg cells), but estimates of the exact rate have varied by an order of magnitude or more. A human genome accumulates around 64 new mutations per generation, because each full generation involves a number of cell divisions to generate gametes. Human mitochondrial DNA has been estimated to have mutation rates of ~3×10−5 or ~2.7×10−5 per base per 20-year generation (depending on the method of estimation); these rates are considered to be significantly higher than rates of human genomic mutation at ~2.5×10−8 per base per generation. Using data available from whole-genome sequencing, the human genome mutation rate is similarly estimated to be ~1.1×10−8 per site per generation.
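The per-generation figure can be checked against the per-site rate (a back-of-the-envelope calculation, assuming a diploid genome of roughly $6.4\times 10^{9}$ sites):

\[
1.1\times 10^{-8}\ \tfrac{\text{mutations}}{\text{site}} \times 6.4\times 10^{9}\ \text{sites} \;\approx\; 70\ \text{new mutations per generation},
\]

the same order of magnitude as the ~64 quoted above.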
The rate for other forms of mutation also differs greatly from point mutations. An individual microsatellite locus often has a mutation rate on the order of 10−4, though this can differ greatly with length.
Some sequences of DNA may be more susceptible to mutation. For example, stretches of DNA in human sperm which lack methylation are more prone to mutation.
In general, the mutation rate in unicellular eukaryotes (and bacteria) is roughly 0.003 mutations per genome per cell generation. However, some species, especially the ciliate of the genus Paramecium have an unusually low mutation rate. For instance, Paramecium tetraurelia has a base-substitution mutation rate of ~2 × 10−11 per site per cell division. This is the lowest mutation rate observed in nature so far, being about 75× lower than in other eukaryotes with a similar genome size, and even 10× lower than in most prokaryotes. The low mutation rate in Paramecium has been explained by its transcriptionally silent germ-line nucleus, consistent with the hypothesis that replication fidelity is higher at lower gene expression levels.
The highest per base pair per generation mutation rates are found in viruses, which can have either RNA or DNA genomes. DNA viruses have mutation rates between 10−6 to 10−8 mutations per base per generation, and RNA viruses have mutation rates between 10−3 to 10−5 per base per generation.
Mutation spectrum
A mutation spectrum is a distribution of rates or frequencies for the mutations relevant in some context, based on the recognition that rates of occurrence are not all the same. In any context, the mutation spectrum reflects the details of mutagenesis and is affected by conditions such as the presence of chemical mutagens or genetic backgrounds with mutator alleles or damaged DNA repair systems. The most fundamental and expansive concept of a mutation spectrum is the distribution of rates for all individual mutations that might happen in a genome. From this full de novo spectrum, for instance, one may calculate the relative rate of mutation in coding vs non-coding regions. Typically the concept of a spectrum of mutation rates is simplified to cover broad classes such as transitions and transversions, i.e., different mutational conversions across the genome are aggregated into classes, and there is an aggregate rate for each class.
In many contexts, a mutation spectrum is defined as the observed frequencies of mutations identified by some selection criterion, e.g., the distribution of mutations associated clinically with a particular type of cancer, or the distribution of adaptive changes in a particular context such as antibiotic resistance.
Whereas the spectrum of de novo mutation rates reflects mutagenesis alone, this kind of spectrum may also reflect the effects of selection and ascertainment biases.
Evolution
The theory on the evolution of mutation rates identifies three principal forces involved: the generation of more deleterious mutations with higher mutation rates, the generation of more advantageous mutations with higher mutation rates, and the metabolic costs and reduced replication rates that are required to prevent mutations. Different conclusions are reached based on the relative importance attributed to each force. The optimal mutation rate of organisms may be determined by a trade-off between the costs of a high mutation rate, such as deleterious mutations, and the metabolic costs of maintaining systems to reduce the mutation rate, such as increased expression of DNA repair enzymes or, as reviewed by Bernstein et al., increased energy use for repair, coding for additional gene products and/or slower replication. Secondly, higher mutation rates increase the rate of beneficial mutations, and evolution may prevent a lowering of the mutation rate in order to maintain optimal rates of adaptation. As such, hypermutation enables some cells to rapidly adapt to changing conditions in order to prevent the entire population from becoming extinct. Finally, natural selection may fail to optimize the mutation rate because of the relatively minor benefits of lowering it, and thus the observed mutation rate may be the product of neutral processes.
Studies have shown that treating RNA viruses such as poliovirus with ribavirin produce results consistent with the idea that the viruses mutated too frequently to maintain the integrity of the information in their genomes. This is termed error catastrophe.
The characteristically high mutation rate of HIV (human immunodeficiency virus), 3×10−5 per base per generation, coupled with its short replication cycle, leads to high antigenic variability, allowing it to evade the immune system.
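Assuming a genome length of roughly $10^{4}$ bases for HIV (an outside figure, not stated in this article), this rate implies on the order of

\[
3\times 10^{-5}\ \tfrac{\text{mutations}}{\text{base}} \times 10^{4}\ \text{bases} \;\approx\; 0.3\ \text{mutations per genome per replication cycle},
\]

so a substantial fraction of new virions carries at least one new mutation.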
See also
Mutation
Critical mutation rate
Mutation frequency
Dysgenics
Allele frequency
Rate of evolution
Genetics
Cancer
References
External links
Mutation
Evolutionary biology
Temporal rates | Mutation rate | Physics,Biology | 2,322 |
18,879,982 | https://en.wikipedia.org/wiki/Bi-specific%20T-cell%20engager | Bi-specific T-cell engager (BiTE) is a class of artificial bispecific monoclonal antibodies that are being investigated for use as anti-cancer drugs. They direct a host's immune system, more specifically the cytotoxic activity of T cells, against cancer cells. BiTE is a registered trademark of Micromet AG (a fully owned subsidiary of Amgen Inc).
BiTE molecules are fusion proteins consisting of two single-chain variable fragments (scFvs) of different antibodies, or amino acid sequences from four different genes, on a single peptide chain of about 55 kilodaltons. One of the scFvs binds to T cells via the CD3 receptor, and the other to a tumor cell via a tumor specific molecule.
Mechanism of action
Like other bispecific antibodies, and unlike ordinary monoclonal antibodies, BiTEs form a link between T cells and tumor cells. This causes T cells to exert cytotoxic activity on tumor cells by producing proteins like perforin and granzymes, independently of the presence of MHC I or co-stimulatory molecules. These proteins enter tumor cells and initiate the cell's apoptosis.
This action mimics physiological processes observed during T cell attacks against tumor cells.
BiTEs in clinical assessment or with clinical approvals
Several BiTEs are currently in preclinical and clinical trials to assess their therapeutic efficacy and safety.
Blinatumomab
Blinatumomab links T cells with CD19 receptors found on the surface of B cells. The Food and Drug Administration (US) and the European Medicines Agency approved this therapy for adults with Philadelphia chromosome-negative relapsed or refractory acute lymphoblastic leukemia.
Glofitamab
It is a bispecific CD20-directed CD3 T-cell engager. It was approved for medical use in Canada in March 2023, in the United States in June 2023, and in the European Union in July 2023.
Mosunetuzumab
Bispecifically binds CD20 and CD3 to engage T-cells. Mosunetuzumab was approved for medical use in the European Union in June 2022.
Solitomab
Solitomab links T cells with the EpCAM antigen which is expressed by colon, gastric, prostate, ovarian, lung, and pancreatic cancers.
Talquetamab
Tarlatamab
Tebentafusp
After clinical trials, in January 2022, the US FDA approved tebentafusp (a BiTE targeting the gp100 peptide) for HLA-A*02:01-positive adult patients with unresectable or metastatic uveal melanoma.
Epcoritamab
Epcoritamab, sold under the brand name Epkinly, is used for the treatment of diffuse large B-cell lymphoma. Epcoritamab is a bispecific CD20-directed CD3 T-cell engager.
Epcoritamab was approved for medical use in the United States in May 2023, in the European Union in September 2023, and in Canada in December 2023.
Further research
Utilizing the same technology, melanoma (with MCSP-specific BiTEs) and acute myeloid leukemia (with CD33-specific BiTEs) can be targeted. Research in this area is active.
Another avenue for novel anti-cancer therapies is re-engineering some of the currently used conventional antibodies like trastuzumab (targeting HER2/neu), cetuximab and panitumumab (both targeting the EGF receptor), using the BiTE approach.
BiTEs against CD66e and EphA2 are being developed as well.
References
Further reading
Monoclonal antibodies
Immunology | Bi-specific T-cell engager | Biology | 788 |
1,690,295 | https://en.wikipedia.org/wiki/Resource%20%28project%20management%29 | In project management, resources are required to carry out the project tasks. These can be people, equipment, facilities, funding, or anything else capable of definition (usually other than labour) required for the completion of a project activity. The lack of a resource can therefore be a constraint on the completion of the project activity. Resources may be storable or not storable. Storable resources remain available unless depleted by usage, and may be replenished by project tasks that produce them. Nonstorable resources must be renewed for each time period, even if not used in previous periods.
Resource scheduling, availability, and optimisation are considered key to successful project management.
Allocation of limited resources is based on the priority given to each of the project activities. Their priorities are calculated using the critical path method and heuristic analysis.
For a case with a constraint on the available resources, the objective is to create the most efficient schedule possible - minimising project duration and maximising the use of the resources available.
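As a concrete illustration of the critical path method mentioned above, here is a minimal sketch with hypothetical task data (real resource-constrained schedulers layer resource availability and heuristics on top of this): a forward pass computes earliest start and finish times, a backward pass computes latest times, and activities with zero slack form the critical path.

```python
# Minimal CPM sketch; the task names, durations and dependencies below
# are hypothetical. TASKS maps name -> (duration, predecessors) and is
# assumed to be listed in topological order.
TASKS = {
    "design":  (3, []),
    "procure": (2, ["design"]),
    "build":   (4, ["design"]),
    "test":    (2, ["procure", "build"]),
}

# Forward pass: earliest start/finish for each activity.
early = {}
for name, (dur, preds) in TASKS.items():
    es = max((early[p][1] for p in preds), default=0)
    early[name] = (es, es + dur)

project_end = max(ef for _, ef in early.values())

# Backward pass: latest start/finish, walking in reverse order.
late = {}
for name in reversed(list(TASKS)):
    dur, _ = TASKS[name]
    succs = [s for s, (_, ps) in TASKS.items() if name in ps]
    lf = min((late[s][0] for s in succs), default=project_end)
    late[name] = (lf - dur, lf)

# Zero slack marks the critical path (here: design -> build -> test).
for name in TASKS:
    slack = late[name][0] - early[name][0]
    print(f"{name}: slack={slack}" + ("  <- critical" if slack == 0 else ""))
```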
See also
Project management
List of project management software
References
Schedule (project management) | Resource (project management) | Physics | 223 |
18,475,687 | https://en.wikipedia.org/wiki/Linagliptin | Linagliptin, sold under the brand name Tradjenta among others, is a medication used to treat type 2 diabetes (but not type 1) in conjunction with exercise and diet. It is generally less preferred than metformin and sulfonylureas as an initial treatment. It is taken by mouth.
Common side effects include inflammation of the nose and throat. Serious side effects may include angioedema, pancreatitis, and joint pain. Use in pregnancy and breastfeeding is not recommended. Linagliptin is a dipeptidyl peptidase-4 inhibitor that works by increasing the production of insulin and decreasing the production of glucagon by the pancreas.
Linagliptin was approved for medical use in the United States, Japan, the European Union, Canada, and Australia in 2011. In 2020, it was the 293rd most commonly prescribed medication in the United States, with more than 1 million prescriptions. In August 2021, linagliptin became available as a generic medicine in the US.
Medical uses
Linagliptin is indicated as an adjunct to diet and exercise to improve glycemic control in adults with type 2 diabetes.
Side effects
Linagliptin may cause severe joint pain.
Mechanism of action
Linagliptin belongs to a class of drugs called DPP-4 inhibitors.
Names
Linagliptin is the international nonproprietary name (INN). It is sold under the brand names Trajenta and Tradjenta.
See also
Empagliflozin/linagliptin
References
External links
Alkyne derivatives
Drugs developed by Boehringer Ingelheim
Dipeptidyl peptidase-4 inhibitors
Drugs developed by Eli Lilly and Company
Piperidines
Quinazolines
Xanthines
Wikipedia medicine articles ready to translate | Linagliptin | Chemistry | 375 |
11,598,515 | https://en.wikipedia.org/wiki/Lillie%27s%20trichrome | Lillie's trichrome is a combination of dyes used in histology.
It is similar to Masson's trichrome stain, but it uses Biebrich scarlet as the plasma stain. It was first published by Ralph D. Lillie in 1940. It is applied by submerging the fixed sample successively in three solutions: Weigert's iron hematoxylin working solution, Biebrich scarlet solution, and Fast Green FCF solution.
The resulting stains are black cell nuclei, brown cytoplasm, red muscle and myelinated fibers, blue collagen, and scarlet erythrocytes.
Applications
Trichrome stains are normally used to differentiate between collagen and muscle tissues. Some studies that benefit from its application include end stage liver disease (cirrhosis), myocardial infarction, muscular dystrophy, and tumor analysis.
References
External links
Lillie's trichrome at StainsFile.info
Histology
Staining | Lillie's trichrome | Chemistry,Biology | 204 |
14,862,049 | https://en.wikipedia.org/wiki/3C%20299 | 3C 299 is a radio galaxy/quasar located in the constellation Boötes.
References
External links
www.jb.man.ac.uk/atlas/ (J. P. Leahy)
3C 299
Boötes
299 | 3C 299 | Astronomy | 50 |
2,020,947 | https://en.wikipedia.org/wiki/Biological%20Innovation%20for%20Open%20Society | BiOS (Biological Open Source/Biological Innovation for Open Society) is an international initiative to foster innovation and freedom to operate in the biological sciences. BiOS was officially launched on 10 February 2005 by Cambia, an independent, international non-profit organization dedicated to democratizing innovation. Its intention is to initiate new norms and practices for creating tools for biological innovation, using binding covenants to protect and preserve their usefulness, while allowing diverse business models for the application of these tools.
As described by Richard Anthony Jefferson, CEO of Cambia, the organization's Deputy CEO, Dr Marie Connett, worked extensively with small companies, university offices of technology transfer, attorneys, and multinational corporations to create a platform for sharing productive and sustainable technology. The parties developed the BiOS Material Transfer Agreement (MTA) and the BiOS license as legal instruments to facilitate these goals.
Biological Open Source
Traditionally, the term 'open source' describes a paradigm for software development associated with a set of collaborative innovation practices, which ensure access to the end product's source materials - typically, source code. The BiOS Initiative has sought to extend this concept to the biological sciences, and agricultural biotechnology in particular. BiOS is founded on the concept of sharing scientific tools and platforms so that innovation can occur at the 'application layer.' Jefferson observes that, 'Freeing up the tools that make new discoveries possible will spur a new wave of innovation that has real value.' He notes further that, 'Open source is an enormously powerful tool for driving efficiency.'
Through BiOS instruments, licensees cannot appropriate the fundamental kernel of a technology and improvements exclusively for themselves. The base technology remains the property of whichever entity developed it, but improvements can be shared with others that support the development of a protected commons around the technology.
To maintain legal access to the technology, in other words, licensees must agree not to prevent others who have agreed to the same terms from using the technology and any improvements in the development of different products.
BiOS License
By making the BiOS license cost-free, Cambia has sought to create 'freedom to innovate' in the scientific community. In lieu of royalties and other restrictions often imposed by legal agreements, the BiOS licenses impose on the licensee conditions to encourage cooperation and development of the technology. To be granted full, unfettered commercial rights to listed technologies, licensees are required to comply with three conditions:
To share with all BiOS licensees any improvements in the core technologies as defined, for which they seek any Intellectual Property protection.
To agree to not assert over other BiOS licensees their own or third-party rights that might dominate the defined technologies.
To agree to share with the public any and all information about the biosafety of the defined technologies.
As with other legal instruments, definitions used in the BiOS licenses are important. The scope and core capabilities of the enabling technologies and platforms should be carefully defined to provide confidence in the development of viable business models surrounding the use of the BiOS license.
The adoption of the BiOS licenses has now extended to over 300 licensees worldwide.
Material Transfer Agreements (MTAs)
BiOS has also issued a series of Material Transfer Agreements (MTAs), a common form of bailment used to provide materials for life sciences research, such as bacterial strains, plant lines, cell cultures, or DNA. MTAs able to be adapted for biological materials are available on the BiOS site.
Open Source Biological Technologies
CambiaLabs engineered two ‘open source’ biological technologies, TransBacter and GUSPlus, which they released under the BiOS Initiative. The first, TransBacter, was designed to work around the intense patenting associated with the making of transgenic plants.
Cambia identified that the majority of patents claiming methods for plant transgenics make explicit reference to the bacterium Agrobacterium tumefaciens; therefore, the use of a bacterium outside the genus Agrobacterium would not be subject to existing patent claims. Cambia published its work on TransBacter, which uses bacteria from the genera Rhizobium, Sinorhizobium and Mesorhizobium in 2005 in Nature. TransBacter is available to all non-profit researchers and institutes upon signing a BiOS MTA. For-profit companies are asked to sign a BiOS license and to make a contribution to Cambia which is calculated on the company’s financial means.
An inventory of BiOS-licensed patents is available at the Cambia site.
See also
Cambia
GUS reporter system
Patentleft
Patent Lens
Richard Anthony Jefferson
References
External links
BiOS
BiOS Licences
Creative Commons
Open Source Initiative
Biotechnology organizations | Biological Innovation for Open Society | Engineering,Biology | 951 |
47,525,633 | https://en.wikipedia.org/wiki/Time%20in%20Lebanon | Time in Lebanon is given by Eastern European Time (EET) (UTC+02:00) or Eastern European Summer Time (EEST) (UTC+03:00) during the summer.
Postponed time change in 2023
On 23 March 2023, two days before the scheduled switch to Eastern European Summer Time (EEST), Lebanon's government postponed the change from 25 March to 20 April. (This came within days of a DST postponement also being announced in Palestine.) No official explanation was given, but local media suggested the change was made to avoid disruption during the month of Ramadan, during which some Muslims fast from sunrise till sunset. Due to the lateness of the announcement, smart devices with "automatic time" enabled changed the time on the originally scheduled date of 25 March, and some major media outlets, including MTV, LBCI and OTV, announced that they would not abide by the decision. Different religious communities in Lebanon observed the shift independently. As a result, some places or regions in Lebanon temporarily used different time zones, causing mass confusion. On 27 March, Lebanon's prime minister Najib Mikati announced that EEST would be used starting at midnight of 29 March.
References
Lebanon
Society of Lebanon
Geography of Lebanon | Time in Lebanon | Physics | 258 |
76,073,805 | https://en.wikipedia.org/wiki/ArkUI | ArkUI is a declarative user interface framework for building user interfaces on native HarmonyOS, OpenHarmony and Oniro applications, developed by Huawei for the ArkTS and Cangjie programming languages.
Overview
ArkUI 3.0 introduced declarative development in eTS (extended TypeScript) with HarmonyOS 3.0, followed by the main ArkTS programming language in HarmonyOS 3.1, contrasting with the imperative syntax used for Java development in HarmonyOS 1.0 and 2.0. ArkUI allows for 2D drawing as well as 3D drawing, animations, event handling, Service Card widgets, and data binding. ArkUI automatically synchronizes UI views with data.
ArkUI integrates with DevEco Studio IDE to provide for real-time previews during editing, alongside support for debugging and other development features.
ArkJS is designed for web development with a Vue 2-like syntax, providing a familiar environment for web developers using JS and CSS. ArkJS incorporates the HarmonyOS Markup Language (HML), which allows attributes prefixed with @ for MVVM architectural pattern.
History
During HDC 2021, on October 22, 2021, the HarmonyOS 3.0 developer preview introduced ArkUI 3.0 for the eTS and JS programming languages with ArkCompiler, in contrast to ArkUI 1.0 and 2.0, which relied on imperative development with Java in earlier versions of HarmonyOS.
During HDC 2022, with HarmonyOS 3.1 in November 2022, ArkUI evolved into fully declarative development, featuring declarative UI capabilities, improved layout ability, component capability improvements and more. In April 2023, the HarmonyOS 3.1 Beta 1 build included ArkUI declarative 2D and 3D drawing capabilities. The upgrade also improved layout, component, and app state management capabilities.
During HDC 2023, in August 2023, Huawei announced HarmonyOS 4.0 improvements to ArkUI with ArkTS, alongside native HarmonyOS NEXT software development using the Ark Engine with ArkGraphics 2D and ArkGraphics 3D. The company also announced a cross-platform extension of ArkUI called ArkUI-X, which allows developers to run applications across Android, iOS and HarmonyOS from one project using the DevEco Studio IDE and Visual Studio Code plugins. On January 18, 2024, during the HarmonyOS Ecology Conference, Huawei revealed the HarmonyOS NEXT software stack, which included the ArkUI/ArkUI-X programming framework with the Ark Compiler/BiSheng Compiler/Ark Runtime compiler and runtime, for both ArkTS and the incoming Cangjie programming language.
ArkUI-X
ArkUI-X is an open-source UI software development kit extending ArkUI for building cross-platform applications, additionally targeting Android and iOS. Web platform support with ArkJS was released on December 8, 2023. ArkUI-X consists of both a UI language and a rendering engine.
Features
Components
System components are built-in components within the ArkUI framework, categorized into container components and basic components. For example, Row and Column are container components that can hold other components, while Text and Button are basic components.
Examples
The following is an example of a simple Hello World program. It is standard practice in ArkUI to separate the application struct and views into different structs, with the main view named Index.
// Index.ets
import router from '@ohos.router';

@Entry
@Component
struct Index {
  @State message: string = 'Hello World'

  build() {
    Row() {
      Column() {
        Text(this.message)
          .fontSize(50)
          .fontWeight(FontWeight.Bold)
        // Add a button to respond to user clicks.
        Button() {
          Text('Next')
            .fontSize(30)
            .fontWeight(FontWeight.Bold)
        }
        .type(ButtonType.Capsule)
        .margin({
          top: 20
        })
        .backgroundColor('#0D9FFB')
        .width('40%')
        .height('5%')
        // Bind the onClick event to the Next button so that clicking the button redirects the user to the second page.
        .onClick(() => {
          router.pushUrl({ url: 'pages/Second' })
        })
      }
      .width('100%')
    }
    .height('100%')
  }
}

The @ohos.router routing library implements page transitions; the target pages must be declared in the main_pages.json file before being invoked.
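As a minimal sketch of that declaration (illustrative only, assuming the two pages from the example above), the route table in resources/base/profile/main_pages.json simply lists the page paths:

{
  "src": [
    "pages/Index",
    "pages/Second"
  ]
}

With such an entry in place, router.pushUrl({ url: 'pages/Second' }) can resolve the target page at runtime.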
Reception
Taobao claims that the ArkUI version of its app achieves checkout page performance 1.5 times faster than the Android version.
See also
SwiftUI
Flutter
Xamarin
React Native
Qt (software)
Jetpack Compose
References
External links
ArkUI at HarmonyOS Developer and Huawei Developer
ArkUI Example
2021 software
Gesture recognition
HarmonyOS
Proprietary software
Huawei products
Mobile software development
Software development
Programming tools
Software frameworks | ArkUI | Technology,Engineering | 1,053 |
7,976,609 | https://en.wikipedia.org/wiki/Sphygmograph | The sphygmograph was a mechanical device used to measure blood pressure in the mid-19th century. It was developed in 1854 by German physiologist Karl von Vierordt (1818–1884). It is considered the first external, non-intrusive device used to estimate blood pressure.
The device was a system of levers hooked to a scale-pan in which weights were placed to determine the amount of external pressure needed to stop blood flow in the radial artery. Although the instrument was cumbersome and its measurements imprecise, the basic concept of Vierordt's sphygmograph eventually led to the blood pressure cuff used today.
In 1863, Étienne-Jules Marey (1830–1904) improved the device by making it portable. He also included a specialized instrument, placed above the radial artery, that was able to magnify pulse waves and record them on paper with an attached pen.
In 1872, Frederick Akbar Mahomed published a description of a modified sphygmograph. This modified version made the sphygmograph quantitative, so that it was able to measure arterial blood pressure.
In 1880, Samuel von Basch (1837–1905) invented the sphygmomanometer, which was then improved by Scipione Riva-Rocci (1863–1937) in the 1890s. In 1901 Harvey Williams Cushing improved it further, and Heinrich von Recklinghausen (1867–1942) used a wider cuff, and so it became the first accurate and practical instrument for measuring blood pressure.
References
External links
R.E. Dudgeon M.D. The sphygmograph : its history and use as an aid to diagnosis in ordinary practice (1882). The Medical Heritage Library.
Drawing of Vierordt's Sphygmograph.
Medical equipment
Blood pressure
Physiological instruments | Sphygmograph | Technology,Engineering,Biology | 381 |
853,141 | https://en.wikipedia.org/wiki/Motzkin%20number | In mathematics, the $n$th Motzkin number is the number of different ways of drawing non-intersecting chords between $n$ points on a circle (not necessarily touching every point by a chord). The Motzkin numbers are named after Theodore Motzkin and have diverse applications in geometry, combinatorics and number theory.
The Motzkin numbers $M_n$ for $n = 0, 1, 2, \dots$ form the sequence:
1, 1, 2, 4, 9, 21, 51, 127, 323, 835, ...
Examples
The following figure shows the 9 ways to draw non-intersecting chords between 4 points on a circle ($M_4 = 9$):
The following figure shows the 21 ways to draw non-intersecting chords between 5 points on a circle ($M_5 = 21$):
Properties
The Motzkin numbers satisfy the recurrence relations
$$M_n = M_{n-1} + \sum_{k=0}^{n-2} M_k M_{n-2-k} = \frac{2n+1}{n+2} M_{n-1} + \frac{3n-3}{n+2} M_{n-2}.$$
The Motzkin numbers can be expressed in terms of binomial coefficients and Catalan numbers:
$$M_n = \sum_{k=0}^{\lfloor n/2 \rfloor} \binom{n}{2k} C_k,$$
and inversely,
$$C_{n+1} = \sum_{k=0}^{n} \binom{n}{k} M_k.$$
This gives, by binomial inversion,
$$M_n = \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} C_{k+1}.$$
The generating function $M(x) = \sum_{n=0}^{\infty} M_n x^n$ of the Motzkin numbers satisfies
$$x^2 M(x)^2 + (x - 1) M(x) + 1 = 0,$$
and is explicitly expressed as
$$M(x) = \frac{1 - x - \sqrt{1 - 2x - 3x^2}}{2x^2}.$$
An integral representation of Motzkin numbers is given by
$$M_n = \frac{2}{\pi} \int_0^{\pi} \sin^2(\theta) \, (2\cos(\theta) + 1)^n \, d\theta.$$
They have the asymptotic behaviour
$$M_n \sim \frac{1}{2\sqrt{\pi}} \left(\frac{3}{n}\right)^{3/2} 3^n, \qquad n \to \infty.$$
A Motzkin prime is a Motzkin number that is prime. Four such primes are known:
2, 127, 15511, 953467954114363
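As an illustrative check (not part of the original article), the recurrence above translates directly into a short program; the helper name motzkin below is hypothetical:

// Compute M(0..limit) with the recurrence
// M(n) = M(n-1) + sum_{k=0}^{n-2} M(k) * M(n-2-k), where M(0) = M(1) = 1.
function motzkin(limit: number): number[] {
  const m: number[] = [1, 1];
  for (let n = 2; n <= limit; n++) {
    let value = m[n - 1];
    for (let k = 0; k <= n - 2; k++) {
      value += m[k] * m[n - 2 - k];
    }
    m.push(value);
  }
  return m;
}

console.log(motzkin(9)); // [1, 1, 2, 4, 9, 21, 51, 127, 323, 835] (matches the sequence above)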
Combinatorial interpretations
The Motzkin number for $n$ is also the number of positive integer sequences of length $n-1$ in which the opening and ending elements are either 1 or 2, and the difference between any two consecutive elements is −1, 0 or 1. Equivalently, the Motzkin number for $n$ is the number of positive integer sequences of length $n+1$ in which the opening and ending elements are 1, and the difference between any two consecutive elements is −1, 0 or 1.
Also, the Motzkin number for $n$ gives the number of routes on the upper right quadrant of a grid from coordinate (0, 0) to coordinate ($n$, 0) in $n$ steps if one is allowed to move only to the right (up, down or straight) at each step but forbidden from dipping below the $y = 0$ axis.
For example, the following figure shows the 9 valid Motzkin paths from (0, 0) to (4, 0):
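A direct enumeration of such paths (again an illustrative sketch; countPaths is a hypothetical helper) confirms the count for small cases:

// Count Motzkin paths of length n: steps are up (+1), level (0) or down (-1);
// the path starts and ends at height 0 and never dips below the axis.
function countPaths(n: number, height: number = 0): number {
  if (height < 0) return 0;                 // dipped below the axis
  if (n === 0) return height === 0 ? 1 : 0; // must end back on the axis
  return countPaths(n - 1, height + 1)      // up step
       + countPaths(n - 1, height)          // level step
       + countPaths(n - 1, height - 1);     // down step
}

console.log(countPaths(4)); // 9
console.log(countPaths(5)); // 21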
There are at least fourteen different manifestations of Motzkin numbers in different branches of mathematics, as enumerated by Donaghey and Shapiro (1977) in their survey of Motzkin numbers.
Guibert, Pergola and Pinzani (2001) showed that vexillary involutions are enumerated by Motzkin numbers.
See also
Telephone numbers, which represent the number of ways of drawing chords if intersections are allowed
Delannoy number
Narayana number
Schröder number
References
External links
Integer sequences
Enumerative combinatorics
Eponymous numbers in mathematics | Motzkin number | Mathematics | 539 |
160,688 | https://en.wikipedia.org/wiki/Premium%20Bonds | Premium Bonds is a lottery bond scheme organised by the United Kingdom government since 1956. At present it is managed by the government's National Savings and Investments agency.
The principle behind Premium Bonds is that rather than the stake being gambled, as in a usual lottery, it is the interest on the bonds that is distributed by a lottery. The bonds are entered in a monthly prize draw and the government promises to buy them back, on request, for their original price.
The government pays interest into the bond fund (4.15% per annum in December 2024 but decreasing to 4% in January 2025) from which a monthly lottery distributes tax-free prizes to bondholders whose numbers are selected randomly. The machine that generates the numbers is called ERNIE, an acronym for "Electronic Random Number Indicator Equipment". Prizes range from £25 to £1,000,000 and (since December 2024) the odds of a £1 bond winning a prize in a given month are 22,000 to 1.
Investors can buy bonds at any time but they must be held for a whole calendar month before they qualify for a prize. As an example, a bond purchased mid-May must then be held throughout June before being eligible for the draw in July (and onwards). Bonds purchased by reinvestment of prizes are immediately eligible for the following month's draw.
Numbers are entered in the draw each month, with an equal chance of winning, until the bond is cashed. As of 2015, each person may own bonds up to £50,000. Since 1 February 2019, the minimum purchase amount for Premium Bonds has been £25. There are currently over 128.7 billion eligible Premium Bonds, each having a value of £1.
When introduced to the wider public in 1957, the only other similar game available in the UK was the football pools, with the National Lottery not coming into existence until 1994. Although many avenues of lotteries and other forms of gambling are now available to British adults, Premium Bonds are held by more than 24 million people, equivalent to more than 1 in 3 of the UK population.
History
The term "premium bond" has been used in the English language since at least the late 18th century, to mean a bond that earns no interest but is eligible for entry into a lottery.
The modern iteration of Premium Bonds were introduced by Harold Macmillan, as Chancellor of the Exchequer, in his Budget of 17 April 1956, to control inflation and encourage people to save. On 1 November 1956, in front of the Royal Exchange in the City of London, the Lord Mayor of London, Alderman Sir Cuthbert Ackroyd, bought the first bond from the Postmaster General, Dr Charles Hill, for £1. Councillor William Crook, the mayor of Lytham St Anne's, bought the second. The Premium Bonds office was in St Annes-on-Sea, Lancashire, until it moved to Blackpool in 1978.
Winning
Winners of the jackpot are told on the first working day of the month, although the actual date of the draw varies. The online prize finder is updated by the third or fourth working day of the month. Winners of the top £1m prize are told in person of their win by "Agent Million", an NS&I employee, usually on the day before the first working day of the month. However, in-person visits were suspended, starting in May 2020, during the COVID-19 pandemic in the United Kingdom.
Bond holders can check whether they have won any prizes on the National Savings & Investment Premium Bond Prize Checker website, or the smartphone app, which provides lists of winning bond numbers for the past six months. Older winning numbers (more than 18 months old) can also be checked in the London Gazette Premium Bonds Unclaimed Prizes Supplement.
Odds of winning
In December 2008, NS&I reduced the interest rate (and therefore the odds of winning) due to the drop in the Bank of England base rate during the Great Recession, leading to criticism from members of Parliament, financial experts and holders of bonds; many claimed Premium Bonds were now "worthless", and somebody with £30,000 invested and "average luck" would win only 10 prizes a year compared to 15 the previous year. Investors with smaller, although significant, amounts would possibly win nothing.
From 1 January 2009 the odds of winning a prize for each £1 of bond was 36,000 to 1. In October 2009, the odds returned to 24,000 to 1 with the prize fund interest rate increase. The odds reached 26,000 to 1 by October 2013 and then reverted to 24,500 to 1 in November 2017.
As of December 2024, the odds of winning are 1 in 22,000, so the expected number of prizes for the maximum £50,000 worth of bonds is 27 per year.
Prize fund distribution
The prize fund is equal to one month's interest on all bonds eligible for the draw. The annual interest is set by NS&I; it was 1.40%, later reducing to 1.00%. This was increased to 2.2%, then increased again to 3%, and stands at 4% from January 2025. The following table lists the distribution of prizes on offer in the January 2025 draw.
Economic analysis
While the mean return is 4% as of January 2025, the median return is lower. For an investor with the maximum £50,000 invested, the median return is 3.45% (£1,725). For investors with lower amounts invested, the median return is lower. The typical investor with £1,250 or less invested will receive nothing in a year.
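The gap between the mean and the median can be illustrated with a back-of-the-envelope model (a minimal sketch assuming each £1 bond independently wins some prize with probability 1 in 22,000 in each monthly draw, ignoring prize values; probNoPrizeInYear is a hypothetical helper):

// Probability that a holding of N one-pound bonds wins nothing in a year.
function probNoPrizeInYear(bonds: number, odds: number = 22000): number {
  const draws = bonds * 12;              // one entry per bond per month
  return Math.pow(1 - 1 / odds, draws);  // every entry loses
}

console.log(probNoPrizeInYear(1250));  // ≈ 0.51: a £1,250 holder more likely than not wins nothing
console.log(probNoPrizeInYear(50000)); // ≈ 1.4e-12: a maximum holder almost certainly wins something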
Premium Bonds are tax free, so are more attractive to higher rate taxpayers.
ERNIE
ERNIE - an acronym for "Electronic Random Number Indicator Equipment" - is the name for a series of hardware random number generators developed for this application. There have been five models of ERNIE to date. All of them have generated true random numbers derived from random statistical fluctuations in a variety of physical processes.
The first ERNIE was built at the Post Office Research Station by a team led by Sidney Broadhurst. The designers were Tommy Flowers and Harry Fensom and it derives from Colossus, one of the world's first digital computers. It was introduced in 1957, with the first draw on 1 June, and generated bond numbers from the signal noise created by neon gas discharge tubes. ERNIE 1 is in the collections of the Science Museum in London and was on display between 2008 and 2015.
ERNIE 2 replaced the first ERNIE in 1972.
ERNIE 3 in 1988 was the size of a personal computer; at the end of its life it took five and a half hours to complete its monthly draw.
In August 2004, ERNIE 4 was brought into service in anticipation of an increase in prizes each month from September 2004. Developed by LogicaCMG, it was 500 times faster than the original and generated a million numbers an hour; these were checked against a list of valid bonds. By comparison, the original ERNIE generated 2,000 numbers an hour and was the size of a van.
ERNIE 4 used thermal noise in transistors as its source of randomness to generate true random numbers. ERNIE's output was independently tested each month by the Government Actuary's Department, the draw being valid only if it was certified to be statistically consistent with randomness. At the end of its life it was moved to Bletchley Park's National Museum of Computing.
ERNIE 5, the latest model, was brought into service in March 2019, and is a quantum random number generator built by ID Quantique. It uses quantum technology to produce random numbers through light, replacing the former 'thermal noise' method. Running at speeds 21,000 times faster than the first ERNIE, it can produce 3 million winners in just 12 minutes each month.
In popular culture
ERNIE, anthropomorphised in early advertising, receives Valentine cards, Christmas cards and letters from the public. It is the subject of the song "E.R.N.I.E." by Madness, from the 1980 album Absolutely. It is also referenced by Jethro Tull in their album Thick as a Brick.
In other countries
Premium Bonds under various names exist or have existed in various countries. Similar programmes to UK Premium Bonds include:
In the Republic of Ireland, Prize Bonds also originated in early 1957.
In Sweden, "Premieobligationer" usually run for five years and are traded on Nasdaq OMX Stockholm. The unit (one Bond) is generally 1000 SEK or 5000 SEK. Holders of 10 or 50 consecutive bonds starting at 1 + N * 10 or 50 are guaranteed one win per year. Outstanding bonds were around 28.9 billion SEK.
In Denmark, "Premieobligationer" usually ran for five or 10 years with a fixed prize list printed on the physical bonds. They were physical bearer bonds and most series were extended one or more times by another 5 or 10 years. The last series have now ended and must be redeemed for their principal cash within 10 years of the final ending dates. The bonds were generally identified by their colour, for instance the blue premium bonds were issued in 1948, and were redeemed in 1998 (10 years + 4 10-year extension). The first 200 DKK of each prize was tax free, the rest taxed at only 15% (compared to 30% or more for ordinary income).
In New Zealand, "Bonus Bonds" were established by the NZ Government in 1970 and sold to ANZ Bank in 1990. In August 2020 it was announced that the scheme would close due to low interest rates reducing the prize pool. At the time of the announcement there were 1.2m bondholders with NZD $3.2 billion invested.
Unrelated concepts
In 2023, American economist Paul Krugman used the name "premium bonds" for an unrelated type of bond that he proposed to avoid a default due to the United States debt ceiling.
Academic studies
In 2008, two financial economists, Lobe and Hoelzl, analysed the main driving factors for the immense marketing success of Premium Bonds. One in three Britons invest in Premium Bonds. The thrill of gambling is significantly boosted by enhancing the skewness of the prize distribution. However, using data collected over the past fifty years, they found that the bond bears relatively low risk compared to many other investments.
Aaron Brown discusses in a 2006 book Premium Bonds in comparison with equity-linked, commodity-linked and other "added risk" bonds. His conclusion is that it makes little difference, either to a retail investor or from a theoretical finance perspective, whether the added risk comes from a random number generator or from fluctuations in financial markets.
See also
Prize-linked savings accounts are savings accounts which use a similar system to grant interest
References
External links
National Savings & Investments website
Are Premium Bonds worth it? – BBC News, 2006
Q&A: Premium Bonds – The Guardian, 2006
Companies based in Blackpool
Companies based in Glasgow
Borough of Fylde
Government bonds issued by the United Kingdom
1956 introductions
Personal finance
Public finance of the United Kingdom
Lotteries in the United Kingdom
History of computing in the United Kingdom
Tax-advantaged savings plans in the United Kingdom | Premium Bonds | Technology | 2,277 |
861,338 | https://en.wikipedia.org/wiki/Teuthology | Teuthology (from Greek τεῦθος, "cuttlefish, squid", and -λογία, -logia) is the study of cephalopods, which are members of the class Cephalopoda in the phylum Mollusca. Some common examples of cephalopods are octopus, squid, and cuttlefish. Teuthology is a large area of study that covers cephalopod life cycles, reproduction, evolution, anatomy, and taxonomy.
Teuthology is a specific branch of malacology, the study of molluscs. A teuthologist is a scientist who studies teuthology.
Research Highlights
2023
The publication of an English translation of Albin O. Ebersbach's thesis, with its detailed descriptions of cirrate octopods, expanded access to important taxonomic identifying information in teuthology.
The third paper in the series led by Tristian Joseph Verhoeff revisiting cirrate octopods was published.
2022
Several papers describing new species of cephalopods were published this year. Two of the papers were the beginning of the series led by Tristian Joseph Verhoeff describing new cirrate octopods discovered around Australia and New Zealand. The third paper describes two new Sepiolina species also discovered in Australian waters.
Organizations
The Cephalopod International Advisory Council (CIAC) is a group founded by teuthologists to discuss advancements and growth of cephalopod research.
See also
References
Malacology
Marine biology
Subfields of zoology | Teuthology | Biology | 326 |
52,441,236 | https://en.wikipedia.org/wiki/Dimethandrolone%20dodecylcarbonate | Dimethandrolone dodecylcarbonate (developmental code name CDB-4730), or dimethandrolone dodecanoylcarbonate, also known as 7α,11β-dimethyl-19-nortestosterone 17β-dodecylcarbonate, is a synthetic and orally active anabolic–androgenic steroid (AAS) and a derivative of nandrolone (19-nortestosterone) which was developed by the Contraceptive Development Branch (CDB) of the National Institute of Child Health and Human Development (NICHD) and has not been marketed at this time. It is an androgen ester – specifically, the C17β dodecylcarbonate ester of dimethandrolone (7α,11β-dimethyl-19-nortestosterone) – and acts as a prodrug of dimethandrolone in the body.
See also
List of androgen esters
References
Abandoned drugs
Androgen esters
Anabolic–androgenic steroids
Contraception for males
Dodecylcarbonate esters
Estranes
Enones
Progestogen esters
Progestogens | Dimethandrolone dodecylcarbonate | Chemistry | 242 |
341,265 | https://en.wikipedia.org/wiki/Jungle | A jungle is land covered with dense forest and tangled vegetation, usually in tropical climates. Application of the term has varied greatly during the past century.
Etymology
The word jungle originates from the Sanskrit word jaṅgala, meaning rough and arid. It came into the English language in the 18th century via the Hindustani word for forest (jangal). Jāṅgala has also been variously transcribed in English as jangal, jangla, jungal, and juṅgala.
It has been suggested that an Anglo-Indian interpretation led to its connotation as a dense "tangled thicket". The term is prevalent in many languages of the Indian subcontinent, and the Iranian Plateau, where it is commonly used to refer to the plant growth replacing primeval forest or to the unkempt tropical vegetation that takes over abandoned areas.
Wildlife
Because jungles occur on all inhabited landmasses and may incorporate numerous vegetation and land types in different climatic zones, the wildlife of jungles cannot be straightforwardly defined.
Varying usage
As dense and tangled vegetation
One of the most common meanings of jungle is land overgrown with tangled vegetation at ground level, especially in the tropics. Typically such vegetation is sufficiently dense to hinder movement by humans, requiring that travellers cut their way through. This definition draws a distinction between rainforest and jungle, since the understorey of rainforests is typically open of vegetation due to a lack of sunlight, and hence relatively easy to traverse. Jungles may exist within, or at the borders of, tropical forests in areas where the woodland has been opened through natural disturbance such as hurricanes, or through human activity such as logging. The successional vegetation that springs up following such disturbance is dense and tangled and is a "typical" jungle. Jungle also typically forms along rainforest margins such as stream banks, once again due to the greater available light at ground level.
Monsoon forests and mangroves are commonly referred to as jungles of this type. Having a more open canopy than rainforests, monsoon forests typically have dense understoreys with numerous lianas and shrubs making movement difficult, while the prop roots and low canopies of mangroves produce similar difficulties.
As moist forest
Because European explorers initially travelled through tropical forests largely by river, the dense tangled vegetation lining the stream banks gave a misleading impression that such jungle conditions existed throughout the entire forest. As a result, it was wrongly assumed that the entire forest was impenetrable jungle. This in turn appears to have given rise to the second popular usage of jungle as virtually any humid tropical forest. Jungle in this context is particularly associated with tropical rain forest, but may extend to cloud forest, temperate rainforest, and mangroves with no reference to the vegetation structure or the ease of travel.
The terms "tropical forest" and "rainforest" have largely replaced "jungle" as the descriptor of humid tropical forests, a linguistic transition that has occurred since the 1970s. "Rainforest" itself did not appear in English dictionaries prior to the 1970s. The word "jungle" accounted for over 80% of the terms used to refer to tropical forests in print media prior to the 1970s; since then it has been steadily replaced by "rainforest", although "jungle" still remains in common use when referring to tropical rainforests.
As metaphor
As a metaphor, jungle often refers to situations that are unruly or lawless, or where the only law is perceived to be "survival of the fittest". This reflects the view of "city people" that forests are such places. Upton Sinclair gave the title The Jungle (1906) to his famous book about the life of workers at the Chicago Stockyards, portraying the workers as being mercilessly exploited with no legal or other lawful recourse.
The term "The Law of the Jungle" is also used in a similar context, drawn from Rudyard Kipling's The Jungle Book (1894)—though in the society of jungle animals portrayed in that book and obviously meant as a metaphor for human society, that phrase referred to an intricate code of laws which Kipling describes in detail, and not at all to a lawless chaos.
The word "jungle" carries connotations of untamed and uncontrollable nature and isolation from civilisation, along with the emotions that evokes: threat, confusion, powerlessness, disorientation and immobilisation. The change from "jungle" to "rainforest" as the preferred term for describing tropical forests has been a response to an increasing perception of these forests as fragile and spiritual places, a viewpoint not in keeping with the darker connotations of "jungle".
Cultural scholars, especially post-colonial critics, often analyse the jungle within the concept of hierarchical domination and the demand that western cultures often place on other cultures to conform to their standards of civilisation. For example: Edward Said notes that the Tarzan depicted by Johnny Weissmuller was a resident of the jungle representing the savage, untamed and wild, yet still a white master of it; and in his essay "An Image of Africa" about Heart of Darkness, Nigerian novelist and theorist Chinua Achebe notes how the jungle and Africa become the source of temptation for white European characters like Marlowe and Kurtz.
Former Israeli Prime Minister Ehud Barak compared Israel to "a villa in the jungle", a comparison which had been often quoted in Israeli political debates. Barak's critics on the left side of Israeli politics strongly criticised the comparison.
See also
Monsoon forest
Arid Forest Research Institute (AFRI)
Rainforest
Wilderness
Grove (nature)
Amazon rainforest
References
External links
BBC - Science and Nature: Jungle
"Biomes of the World" by Dennis Paulson
Forests
Metaphors
Landscape | Jungle | Biology | 1,158 |
24,321,909 | https://en.wikipedia.org/wiki/OpenLR | OpenLR is a royalty-free open standard for "procedures and formats for the encoding, transmission, and decoding of local data irrespective of the map" developed by TomTom.
The format allows locations localised on one map to be found on another map to which the data have been transferred.
OpenLR requires that the coordinates are specified in the WGS 84 format and that route links are given in metres. Also, all routes need to be assigned to a "functional road class".
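As an illustrative sketch only (the field names below are hypothetical; the white paper defines the normative encoding), a line location reference can be thought of as an ordered list of location reference points carrying roughly these attributes:

// Hypothetical shape of one location reference point in a line location.
interface LocationReferencePoint {
  longitude: number;            // WGS 84 coordinate, degrees
  latitude: number;             // WGS 84 coordinate, degrees
  functionalRoadClass: number;  // importance of the road, e.g. 0 (highest) to 7
  formOfWay: string;            // road type, e.g. 'MOTORWAY' or 'SINGLE_CARRIAGEWAY'
  bearing: number;              // direction of the line at this point, in degrees
  distanceToNext: number;       // metres along the route to the next point
}

// A line location reference is an ordered sequence of such points.
type LineLocationReference = LocationReferencePoint[];

A decoder then matches each point to a candidate line on its own map using the coordinate, road class and bearing, and checks that the path lengths in metres between points are consistent.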
The specification is described in a white paper licensed under a Creative Commons license. Additionally, TomTom has published an open-source library for the format under the Apache license.
See also
Traffic Message Channel
GPS
Point of Interest
References
External links
OpenLR - Open, Compact and Royalty-free Dynamic Location Referencing
Open formats
Geographic data and information
GIS vector file formats | OpenLR | Technology | 173 |
23,798 | https://en.wikipedia.org/wiki/Poincar%C3%A9%20conjecture | In the mathematical field of geometric topology, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space.
Originally conjectured by Henri Poincaré in 1904, the theorem concerns spaces that locally look like ordinary three-dimensional space but which are finite in extent. Poincaré hypothesized that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. Attempts to resolve the conjecture drove much progress in the field of geometric topology during the 20th century.
The eventual proof built upon Richard S. Hamilton's program of using the Ricci flow to solve the problem. By developing a number of new techniques and results in the theory of Ricci flow, Grigori Perelman was able to modify and complete Hamilton's program. In papers posted to the arXiv repository in 2002 and 2003, Perelman presented his work proving the Poincaré conjecture (and the more powerful geometrization conjecture of William Thurston). Over the next several years, several mathematicians studied his papers and produced detailed formulations of his work.
Hamilton and Perelman's work on the conjecture is widely recognized as a milestone of mathematical research. Hamilton was recognized with the Shaw Prize and the Leroy P. Steele Prize for Seminal Contribution to Research. The journal Science marked Perelman's proof of the Poincaré conjecture as the scientific Breakthrough of the Year in 2006. The Clay Mathematics Institute, having included the Poincaré conjecture in their well-known Millennium Prize Problem list, offered Perelman their prize of US$1 million for the conjecture's resolution. He declined the award, saying that Hamilton's contribution had been equal to his own.
Overview
The Poincaré conjecture was a mathematical problem in the field of geometric topology. In terms of the vocabulary of that field, it says the following:
Poincaré conjecture. Every three-dimensional topological manifold which is closed, connected, and has trivial fundamental group is homeomorphic to the three-dimensional sphere.
Familiar shapes, such as the surface of a ball (which is known in mathematics as the two-dimensional sphere) or of a torus, are two-dimensional. The surface of a ball has trivial fundamental group, meaning that any loop drawn on the surface can be continuously deformed to a single point. By contrast, the surface of a torus has nontrivial fundamental group, as there are loops on the surface which cannot be so deformed. Both are topological manifolds which are closed (meaning that they have no boundary and take up a finite region of space) and connected (meaning that they consist of a single piece). Two closed manifolds are said to be homeomorphic when it is possible for the points of one to be reallocated to the other in a continuous way. Because the (non)triviality of the fundamental group is known to be invariant under homeomorphism, it follows that the two-dimensional sphere and torus are not homeomorphic.
The two-dimensional analogue of the Poincaré conjecture says that any two-dimensional topological manifold which is closed and connected but non-homeomorphic to the two-dimensional sphere must possess a loop which cannot be continuously contracted to a point. (This is illustrated by the example of the torus, as above.) This analogue is known to be true via the classification of closed and connected two-dimensional topological manifolds, which was understood in various forms since the 1860s. In higher dimensions, the closed and connected topological manifolds do not have a straightforward classification, precluding an easy resolution of the Poincaré conjecture.
History
Poincaré's question
In the 1800s, Bernhard Riemann and Enrico Betti initiated the study of topological invariants of manifolds. They introduced the Betti numbers, which associate to any manifold a list of nonnegative integers. Riemann had shown that a closed connected two-dimensional manifold is fully characterized by its Betti numbers. As part of his 1895 paper Analysis Situs (announced in 1892), Poincaré showed that Riemann's result does not extend to higher dimensions. To do this he introduced the fundamental group as a novel topological invariant, and was able to exhibit examples of three-dimensional manifolds which have the same Betti numbers but distinct fundamental groups. He posed the question of whether the fundamental group is sufficient to topologically characterize a manifold (of given dimension), although he made no attempt to pursue the answer, saying only that it would "demand lengthy and difficult study".
The primary purpose of Poincaré's paper was the interpretation of the Betti numbers in terms of his newly-introduced homology groups, along with the Poincaré duality theorem on the symmetry of Betti numbers. Following criticism of the completeness of his arguments, he released a number of subsequent "supplements" to enhance and correct his work. The closing remark of his second supplement, published in 1900, said:
In order to avoid making this work too prolonged, I confine myself to stating the following theorem, the proof of which will require further developments:
Each polyhedron which has all its Betti numbers equal to 1 and all its tables orientable is simply connected, i.e., homeomorphic to a hypersphere.
(In a modern language, taking note of the fact that Poincaré is using the terminology of simple-connectedness in an unusual way, this says that a closed connected oriented manifold with the homology of a sphere must be homeomorphic to a sphere.) This modified his negative generalization of Riemann's work in two ways. Firstly, he was now making use of the full homology groups and not only the Betti numbers. Secondly, he narrowed the scope of the problem from asking if an arbitrary manifold is characterized by topological invariants to asking whether the sphere can be so characterized.
However, after publication he found his announced theorem to be incorrect. In his fifth and final supplement, published in 1904, he proved this with the counterexample of the Poincaré homology sphere, which is a closed connected three-dimensional manifold which has the homology of the sphere but whose fundamental group has 120 elements. This example made it clear that homology is not powerful enough to characterize the topology of a manifold. In the closing remarks of the fifth supplement, Poincaré modified his erroneous theorem to use the fundamental group instead of homology:
One question remains to be dealt with: is it possible for the fundamental group of V to reduce to the identity without V being simply connected? [...] However, this question would carry us too far away.
In this remark, as in the closing remark of the second supplement, Poincaré used the term "simply connected" in a way which is at odds with modern usage, as well as his own 1895 definition of the term. (According to modern usage, Poincaré's question is a tautology, asking if it is possible for a manifold to be simply connected without being simply connected.) However, as can be inferred from context, Poincaré was asking whether the triviality of the fundamental group uniquely characterizes the sphere.
Throughout the work of Riemann, Betti, and Poincaré, the topological notions in question are not defined or used in a way that would be recognized as precise from a modern perspective. Even the key notion of a "manifold" was not used in a consistent way in Poincaré's own work, and there was frequent confusion between the notion of a topological manifold, a PL manifold, and a smooth manifold. For this reason, it is not possible to read Poincaré's questions unambiguously. It is only through the formalization and vocabulary of topology as developed by later mathematicians that Poincaré's closing question has been understood as the "Poincaré conjecture" as stated in the preceding section.
However, despite its usual phrasing in the form of a conjecture, proposing that all manifolds of a certain type are homeomorphic to the sphere, Poincaré only posed an open-ended question, without venturing to conjecture one way or the other. Moreover, there is no evidence as to which way he believed his question would be answered.
Solutions
In the 1930s, J. H. C. Whitehead claimed a proof but then retracted it. In the process, he discovered some examples of simply-connected (indeed contractible, i.e. homotopically equivalent to a point) non-compact 3-manifolds not homeomorphic to $\mathbb{R}^3$, the prototype of which is now called the Whitehead manifold.
In the 1950s and 1960s, other mathematicians attempted proofs of the conjecture only to discover that they contained flaws. Influential mathematicians such as Georges de Rham, R. H. Bing, Wolfgang Haken, Edwin E. Moise, and Christos Papakyriakopoulos attempted to prove the conjecture. In 1958, R. H. Bing proved a weak version of the Poincaré conjecture: if every simple closed curve of a compact 3-manifold is contained in a 3-ball, then the manifold is homeomorphic to the 3-sphere. Bing also described some of the pitfalls in trying to prove the Poincaré conjecture.
Włodzimierz Jakobsche showed in 1978 that, if the Bing–Borsuk conjecture is true in dimension 3, then the Poincaré conjecture must also be true.
Over time, the conjecture gained the reputation of being particularly tricky to tackle. John Milnor commented that sometimes the errors in false proofs can be "rather subtle and difficult to detect". Work on the conjecture improved understanding of 3-manifolds. Experts in the field were often reluctant to announce proofs and tended to view any such announcement with skepticism. The 1980s and 1990s witnessed some well-publicized fallacious proofs (which were not actually published in peer-reviewed form).
An exposition of attempts to prove this conjecture can be found in the non-technical book Poincaré's Prize by George Szpiro.
Dimensions
The classification of closed surfaces gives an affirmative answer to the analogous question in two dimensions. For dimensions greater than three, one can pose the Generalized Poincaré conjecture: is a homotopy n-sphere homeomorphic to the n-sphere? A stronger assumption than simply-connectedness is necessary; in dimensions four and higher there are simply-connected, closed manifolds which are not homotopy equivalent to an n-sphere.
Historically, while the conjecture in dimension three seemed plausible, the generalized conjecture was thought to be false. In 1961, Stephen Smale shocked mathematicians by proving the Generalized Poincaré conjecture for dimensions greater than four and extended his techniques to prove the fundamental h-cobordism theorem. In 1982, Michael Freedman proved the Poincaré conjecture in four dimensions. Freedman's work left open the possibility that there is a smooth four-manifold homeomorphic to the four-sphere which is not diffeomorphic to the four-sphere. This so-called smooth Poincaré conjecture, in dimension four, remains open and is thought to be very difficult. Milnor's exotic spheres show that the smooth Poincaré conjecture is false in dimension seven, for example.
These earlier successes in higher dimensions left the case of three dimensions in limbo. The Poincaré conjecture was essentially true in both dimension four and all higher dimensions for substantially different reasons. In dimension three, the conjecture had an uncertain reputation until the geometrization conjecture put it into a framework governing all 3-manifolds. In John Morgan's assessment, it was only after Thurston's work on hyperbolic 3-manifolds and the geometrization conjecture that a consensus developed among the experts that the Poincaré conjecture was true.
Hamilton's program and solution
Hamilton's program was started in his 1982 paper in which he introduced the Ricci flow on a manifold and showed how to use it to prove some special cases of the Poincaré conjecture. In the following years, he extended this work but was unable to prove the conjecture. The actual solution was not found until Grigori Perelman published his papers.
In late 2002 and 2003, Perelman posted three papers on arXiv. In these papers, he sketched a proof of the Poincaré conjecture and a more general conjecture, Thurston's geometrization conjecture, completing the Ricci flow program outlined earlier by Richard S. Hamilton.
From May to July 2006, several groups presented papers that filled in the details of Perelman's proof of the Poincaré conjecture, as follows:
Bruce Kleiner and John W. Lott posted a paper on arXiv in May 2006 which filled in the details of Perelman's proof of the geometrization conjecture, following partial versions which had been publicly available since 2003. Their manuscript was published in the journal "Geometry and Topology" in 2008. A small number of corrections were made in 2011 and 2013; for instance, the first version of their published paper made use of an incorrect version of Hamilton's compactness theorem for Ricci flow.
Huai-Dong Cao and Xi-Ping Zhu published a paper in the June 2006 issue of the Asian Journal of Mathematics with an exposition of the complete proof of the Poincaré and geometrization conjectures. The opening paragraph of their paper stated that the proof should be considered as "the crowning achievement of the Hamilton–Perelman theory of Ricci flow".
Some observers interpreted Cao and Zhu as taking credit for Perelman's work. They later posted a revised version, with new wording, on arXiv. In addition, a page of their exposition was essentially identical to a page in one of Kleiner and Lott's early publicly available drafts; this was also amended in the revised version, together with an apology by the journal's editorial board.
John Morgan and Gang Tian posted a paper on arXiv in July 2006 which gave a detailed proof of just the Poincaré Conjecture (which is somewhat easier than the full geometrization conjecture) and expanded this to a book.
All three groups found that the gaps in Perelman's papers were minor and could be filled in using his own techniques.
On August 22, 2006, the ICM awarded Perelman the Fields Medal for his work on the Ricci flow, but Perelman refused the medal. John Morgan spoke at the ICM on the Poincaré conjecture on August 24, 2006, declaring that "in 2003, Perelman solved the Poincaré Conjecture".
In December 2006, the journal Science honored the proof of Poincaré conjecture as the Breakthrough of the Year and featured it on its cover.
Ricci flow with surgery
Hamilton's program for proving the Poincaré conjecture involves first putting a Riemannian metric on the unknown simply connected closed 3-manifold. The basic idea is to try to "improve" this metric; for example, if the metric can be improved enough so that it has constant positive curvature, then according to classical results in Riemannian geometry, it must be the 3-sphere. Hamilton prescribed the "Ricci flow equations" for improving the metric:
$$\partial_t g_{ij} = -2 R_{ij},$$
where $g$ is the metric and $R$ its Ricci curvature, and one hopes that, as the time $t$ increases, the manifold becomes easier to understand. Ricci flow expands the negative curvature part of the manifold and contracts the positive curvature part.
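As a worked special case (an illustration added here, not part of Hamilton's original exposition), the round sphere shrinks homothetically under this flow. Writing the round metric of radius $r(t)$ as
$$g(t) = r(t)^2 \, g_{S^n}, \qquad \operatorname{Ric}(g(t)) = \frac{n-1}{r(t)^2} \, g(t),$$
the flow reduces to the ordinary differential equation $\frac{d}{dt} r(t)^2 = -2(n-1)$, so that
$$r(t)^2 = r_0^2 - 2(n-1)\,t,$$
and the metric collapses to a round point at the finite time $T = r_0^2 / (2(n-1))$, the simplest example of a finite-time singularity of the flow.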
In some cases, Hamilton was able to show that this works; for example, his original breakthrough was to show that if the Riemannian manifold has positive Ricci curvature everywhere, then the above procedure can only be followed for a bounded interval of parameter values $t \in [0, T)$ with $T < \infty$, and more significantly, that there are numbers $c_t$ such that as $t \nearrow T$, the rescaled Riemannian metrics $c_t g(t)$ smoothly converge to one of constant positive curvature. According to classical Riemannian geometry, the only simply-connected compact manifold which can support a Riemannian metric of constant positive curvature is the sphere. So, in effect, Hamilton showed a special case of the Poincaré conjecture: if a compact simply-connected 3-manifold supports a Riemannian metric of positive Ricci curvature, then it must be diffeomorphic to the 3-sphere.
If, instead, one only has an arbitrary Riemannian metric, the Ricci flow equations must lead to more complicated singularities. Perelman's major achievement was to show that, from a certain perspective, any singularities that appear in finite time can only look like shrinking spheres or cylinders. With a quantitative understanding of this phenomenon, he cuts the manifold along the singularities, splitting the manifold into several pieces, and then continues with the Ricci flow on each of these pieces. This procedure is known as Ricci flow with surgery.
Perelman provided a separate argument based on curve shortening flow to show that, on a simply-connected compact 3-manifold, any solution of the Ricci flow with surgery becomes extinct in finite time. An alternative argument, based on the min-max theory of minimal surfaces and geometric measure theory, was provided by Tobias Colding and William Minicozzi. Hence, in the simply-connected context, the above finite-time phenomena of Ricci flow with surgery is all that is relevant. In fact, this is even true if the fundamental group is a free product of finite groups and cyclic groups.
This condition on the fundamental group turns out to be necessary and sufficient for finite time extinction. It is equivalent to saying that the prime decomposition of the manifold has no acyclic components and turns out to be equivalent to the condition that all geometric pieces of the manifold have geometries based on the two Thurston geometries $S^2 \times \mathbb{R}$ and $S^3$. In the context that one makes no assumption about the fundamental group whatsoever, Perelman made a further technical study of the limit of the manifold for infinitely large times, and in so doing, proved Thurston's geometrization conjecture: at large times, the manifold has a thick-thin decomposition, whose thick piece has a hyperbolic structure, and whose thin piece is a graph manifold. Due to Perelman's and Colding and Minicozzi's results, however, these further results are unnecessary in order to prove the Poincaré conjecture.
Solution
On November 11, 2002, Russian mathematician Grigori Perelman posted the first of a series of three eprints on arXiv outlining a solution of the Poincaré conjecture. Perelman's proof uses a modified version of a Ricci flow program developed by Richard S. Hamilton. In August 2006, Perelman was awarded, but declined, the Fields Medal (worth $15,000 CAD) for his work on the Ricci flow. On March 18, 2010, the Clay Mathematics Institute awarded Perelman the $1 million Millennium Prize in recognition of his proof. Perelman rejected that prize as well.
Perelman proved the conjecture by deforming the manifold using the Ricci flow (which behaves similarly to the heat equation that describes the diffusion of heat through an object). The Ricci flow usually deforms the manifold towards a rounder shape, except for some cases where it stretches the manifold apart from itself towards what are known as singularities. Perelman and Hamilton then chop the manifold at the singularities (a process called "surgery"), causing the separate pieces to form into ball-like shapes. Major steps in the proof involve showing how manifolds behave when they are deformed by the Ricci flow, examining what sort of singularities develop, determining whether this surgery process can be completed, and establishing that the surgery need not be repeated infinitely many times.
The first step is to deform the manifold using the Ricci flow. The Ricci flow was defined by Richard S. Hamilton as a way to deform manifolds. The formula for the Ricci flow is an imitation of the heat equation, which describes the way heat flows in a solid. Like the heat flow, Ricci flow tends towards uniform behavior. Unlike the heat flow, the Ricci flow could run into singularities and stop functioning. A singularity in a manifold is a place where it is not differentiable: like a corner or a cusp or a pinching. The Ricci flow was only defined for smooth differentiable manifolds. Hamilton used the Ricci flow to prove that some compact manifolds were diffeomorphic to spheres, and he hoped to apply it to prove the Poincaré conjecture. He needed to understand the singularities.
Hamilton created a list of possible singularities that could form, but he was concerned that some singularities might lead to difficulties. He wanted to cut the manifold at the singularities and paste in caps, and then run the Ricci flow again, so he needed to understand the singularities and show that certain kinds of singularities do not occur. Perelman discovered that the singularities were all very simple: consider that a cylinder is formed by 'stretching' a circle along a line in another dimension; repeating that process with spheres instead of circles essentially gives the form of the singularities. Perelman proved this using something called the "Reduced Volume", which is closely related to an eigenvalue of a certain elliptic equation.
Sometimes, an otherwise complicated operation reduces to multiplication by a scalar (a number). Such numbers are called eigenvalues of that operation. Eigenvalues are closely related to vibration frequencies and are used in analyzing a famous problem: can you hear the shape of a drum? Essentially, an eigenvalue is like a note being played by the manifold. Perelman proved this note goes up as the manifold is deformed by the Ricci flow. This helped him eliminate some of the more troublesome singularities that had concerned Hamilton, particularly the cigar soliton solution, which looked like a strand sticking out of a manifold with nothing on the other side. In essence, Perelman showed that all the strands that form can be cut and capped and none stick out on one side only.
Completing the proof, Perelman takes any compact, simply connected, three-dimensional manifold without boundary and starts to run the Ricci flow. This deforms the manifold into round pieces with strands running between them. He cuts the strands and continues deforming the manifold until, eventually, he is left with a collection of round three-dimensional spheres. Then, he rebuilds the original manifold by connecting the spheres together with three-dimensional cylinders, morphs them into a round shape, and sees that, despite all the initial confusion, the manifold was, in fact, homeomorphic to a sphere.
One immediate question posed was how one could be sure that infinitely many cuts are not necessary. This was raised due to the cutting potentially progressing forever. Perelman proved this cannot happen by using minimal surfaces on the manifold. A minimal surface is one on which any local deformation increases area; a familiar example is a soap film spanning a bent loop of wire. Hamilton had shown that the area of a minimal surface decreases as the manifold undergoes Ricci flow. Perelman verified what happened to the area of the minimal surface when the manifold was sliced. He proved that, eventually, the area is so small that any cut after the area is that small can only be chopping off three-dimensional spheres and not more complicated pieces. This is described as a battle with a Hydra by Sormani in Szpiro's book cited below. This last part of the proof appeared in Perelman's third and final paper on the subject.
See also
Manifold Destiny
References
Further reading
External links
"The Poincaré Conjecture" – BBC Radio 4 programme In Our Time, 2 November 2006. Contributors June Barrow-Green, Lecturer in the History of Mathematics at the Open University, Ian Stewart, Professor of Mathematics at the University of Warwick, Marcus du Sautoy, Professor of Mathematics at the University of Oxford, and presenter Melvyn Bragg.
Geometric topology
3-manifolds
Theorems in topology
Millennium Prize Problems
Conjecture
Conjectures that have been proved
1904 introductions | Poincaré conjecture | Mathematics | 4,869 |
16,313,813 | https://en.wikipedia.org/wiki/Eyesore | An eyesore is something that is largely considered to look unpleasant or ugly. Its technical usage is as an alternative perspective to the notion of landmark. Common examples include dilapidated buildings, graffiti, litter, polluted areas, and excessive commercial signage such as billboards. Some eyesores may be a matter of opinion such as controversial modern architecture (see also spite house), transmission towers or wind turbines. Natural eyesores include feces, mud and weeds.
Effect on property values
In the US, the National Association of Realtors says an eyesore can shave about 10 percent off the value of a nearby listing.
Remediation
Clean-up programmes to improve or remove eyesores are often started by local bodies or even national governments. These are frequently called Operation Eyesore. High-profile international events such as the Olympic Games usually trigger such activity.
Others contend that it is best to address these problems while they are small, since signs of neglect encourage anti-social behaviour such as vandalism and fly-tipping. This strategy is known as fixing broken windows.
Controversy
Whether some constructions are eyesores is a matter of opinion which may change over time. Landmarks are often called eyesores.
Examples of divided opinion
Eiffel Tower – Upon its construction, Parisians wanted it torn down as an eyesore. In modern times it is one of the world's top landmarks.
Golden Gate Bridge – Controversial ahead of its construction, with The Wasp writing that it "would prove an eye-sore to those now living ... certainly mar if not utterly destroy the natural charm of the harbor famed throughout the world." It is now considered a notable landmark.
Millennium Dome – Voted the ugliest building in the world in a poll by the business magazine Forbes of "15 architects, all of whom were American apart from one who was British and one who was Canadian".
Federation Square – Despite being hailed a landmark by many, it has equally been rejected by many notable Australians as an eyesore.
Wind farms – Thought to be the worst eyesore by readers of Country Life but liked by others.
Boston City Hall – Has been called "The World's Ugliest Building".
One Rincon Hill – Situated just south of San Francisco's Financial District, this high-rise condominium surrounded by shorter buildings has generated some mixed reviews.
Lloyd's Building – Situated in the City of London, this building was described as an oil refinery when it opened in 1986 because most of its facilities, stairways and air conditioning are on the outside. Some people still describe it this way, although the building has become more popular and better liked in recent years.
Tour Montparnasse – A lone skyscraper in the Montparnasse area of Paris, France. Its appearance mars the Paris urban landscape, and construction of skyscrapers was banned in the city centre two years after its completion. A 2008 poll of editors on Virtualtourist voted the building the second ugliest building in the world. It is sometimes said that the view from the top is the most beautiful in Paris, because it is the only place from which the tower itself cannot be seen.
Brisbane Transit Centre and Riverside Expressway – Both have been called eyesores and planning debacles by University of Queensland Associate Professor of Architecture Peter Skinner.
Tricorn Centre in Portsmouth – Built in 1964, it was initially highly respected. It was described as a "mildewed lump of elephant droppings" by Prince Charles, and was subsequently demolished.
Structures that have been described as eyesores
Spencer Street Power Station – An asbestos ridden landmark regarded by many as Melbourne's biggest eyesore. It was demolished in 2008.
Cahill Expressway in Sydney – Regarded by many as a major planning mistake.
Sydney Harbour Control Tower – Constructed in 1974 and demolished in 2016.
Riverside Plaza in Minneapolis, Minnesota
Embarcadero Freeway – Along The Embarcadero in San Francisco, this double-decker elevated freeway blocked The Embarcadero's view and shadowed the boulevard under it. When it was demolished in 1991, the long-abandoned Ferry Building and the boulevard under the freeway were restored.
Petrobras Headquarters in Rio de Janeiro, Brazil – An example of concrete brutalism applied to an office building.
The Hole In The Road in Sheffield, England – Filled-in during 1994.
City-Center in Helsinki – Colloquially known as Makkaratalo (Sausage House) because of the concrete sausage-like railing circling the third floor parking lot.
Northampton Power Station, England – Left derelict since 1975, it was demolished circa 2015 to make way for the University of Northampton.
House of Soviets, Kaliningrad, Russia – "The ugliest building on Russian soil".
School of Architecture, Royal Institute of Technology, Stockholm, Sweden – Won an opinion poll for Stockholm's ugliest building, by broad majority. Damaged by a fire in 2011.
Spire of Dublin in Dublin, Republic of Ireland
American Dream Meadowlands – Politicians and the public alike have criticized the building's appearance, calling it "The ugliest building in New Jersey".
Waldschlösschen Bridge in Dresden, Germany – The Dresden Elbe Valley lost the UNESCO World Heritage Site status because of this bridge.
Barclays Center in Brooklyn, New York – Widely regarded as a jarring and aesthetically unappealing addition to the local landscape.
Cebu City Hall – Considered an eyesore by many during the early to mid 2000s, until it was renovated in 2007; it is now considered one of the best city halls in the Philippines.
Majesty Building in Altamonte Springs, Florida – Locally known as the I-4 Eyesore, a building that has been under construction since 2001.
Torre de Manila – A high-rise development by DMCI Homes that dwarfs the Rizal Monument.
Viking Wind Farm – Under construction in the Tingwall Valley in Central Shetland.
See also
Aesthetics
Brownfield land
Local ordinances
NIMBY
Redevelopment
Spite fence
Town planning
Ugliness
Urban blight
Visual pollution
References
External links
Aesthetics
Urban planning
Pollution | Eyesore | Engineering | 1,218 |
54,382,705 | https://en.wikipedia.org/wiki/1%2C4-Diazacycloheptane | 1,4-Diazacycloheptane is an organic compound with the formula (CH2)5(NH)2. This cyclic diamine is a colorless oily liquid that is soluble in polar solvents. It is studied as a chelating ligand. The N-H centers can be replaced with many other groups.
It has known use in piperazine pharmaceuticals, for example:
Fasudil
Bunazosin
Homochlorcyclizine
Homopipramol
Related compounds
1,5-Diazacyclooctane
References
Diamines
Chelating agents | 1,4-Diazacycloheptane | Chemistry | 122 |
68,604,844 | https://en.wikipedia.org/wiki/Mars%20and%20the%20Mind%20of%20Man | Mars and the Mind of Man is a non-fiction book chronicling a public symposium at the California Institute of Technology on November 12, 1971. The panel consisted of five luminaries of science, literature, and journalism: Ray Bradbury; Arthur C. Clarke; Bruce C. Murray; Carl Sagan and Walter Sullivan. These five are the authors of this book. The symposium occurred shortly before the Mariner 9 space probe entered orbit around Mars. The book was published in 1973 by Harper and Row of New York.
About the book
The book is a record of the November 1971 discussion undertaken by the five distinguished panel members mentioned above. The conversation marked Mariner 9's arrival at Mars as a remarkable milestone: Mariner 9 was to be the first Earth spacecraft inserted into orbit around another planet. As noted, "...Caltech Planetary Science professor Bruce Murray summoned [the] formidable panel of thinkers to discuss the implications of this historic event." The discussion's moderator was Walter Sullivan, the New York Times science editor. Varied perspectives were offered on the Mariner 9 mission; the red planet itself; the interrelationship of humans and the Cosmos; prioritizing the exploration of space; and contemplating civilization's future. Also included in the book are the first photos sent to Earth by the Mariner 9 space probe and "...a selection of 'afterthoughts' by the panelists, looking back on the historic achievement."
Bradbury's poem
On several minutes of archived footage released by NASA, Bradbury is shown engaging in witty banter with other panel members at the November 1971 panel discussion. The film segment was issued in 2012 to honor a newly named site on the red planet, "Bradbury Landing". The released footage also shows Bradbury reading his poem "If Only We Had Taller Been" (the poem begins at 2:20). At the time, this was "...one of several unpublished poems he shared at the event." Before reading the poem, Bradbury is recorded saying: "I don't know what in the hell I'm doing here. I'm the least scientific of all the people up on the platform here today... I was hoping, during the last few days, as we got closer to Mars and the dust cleared, that we'd see a lot of Martians standing there with huge signs saying, 'Bradbury was right.'"
References
External links
Exploration of the Planets. A short 1971 NASA film. US National Archives. YouTube.
American non-fiction books
1973 non-fiction books
Astronomy books
Popular physics books
Popular science books
California Institute of Technology
NASA space probes
Harper & Row books
Works by Carl Sagan | Mars and the Mind of Man | Astronomy | 566 |
37,493,716 | https://en.wikipedia.org/wiki/C1orf109 | Chromosome 1 open reading frame 109 is a protein in humans that is encoded by the C1orf109 gene.
Clinical significance
This gene may play a role in cancer cell proliferation.
References
External links
Uncharacterized proteins | C1orf109 | Biology | 46 |
14,014,315 | https://en.wikipedia.org/wiki/49%20Cassiopeiae | 49 Cassiopeiae is a binary star system in the northern circumpolar constellation of Cassiopeia. It is visible to the naked eye as a faint, yellow-hued point of light with an apparent visual magnitude of 5.22. The system is located about 412 light years away from the Sun, based on parallax. The pair had an angular separation along a position angle of 244°, as of 2008, with the brighter component being of magnitude 5.32 and its faint companion having magnitude 12.30.
The primary, designated component A, is an aging giant star with a stellar classification of G8III. It is 302 million years old with 3.3 times the mass of the Sun. With the supply of hydrogen at its core exhausted, the star has now expanded to 17 times the Sun's radius. It is a red clump giant on the horizontal branch, which indicates it is generating energy through the fusion of helium at its core. The star radiates 140 times the luminosity of the Sun from its swollen photosphere. Its faint secondary companion, component B, is of an unknown spectral type. It has a temperature similar to the primary, but a luminosity much lower than the Sun's.
References
G-type giants
Horizontal-branch stars
Double stars
Cassiopeia (constellation)
BD+75 0086
Cassiopeiae, 49
012339
009763
0592 | 49 Cassiopeiae | Astronomy | 304 |
638,998 | https://en.wikipedia.org/wiki/Congaree%20National%20Park | Congaree National Park is a national park of the United States in central South Carolina, 18 miles southeast of the state capital, Columbia. The park preserves the largest tract of old growth bottomland hardwood forest left in the United States. The lush trees growing in its floodplain forest are some of the tallest in the eastern United States, forming one of the highest temperate deciduous forest canopies remaining in the world. The Congaree River flows through the park. Much of the park is designated as a wilderness area.
The park received its official designation in 2003 as the culmination of a grassroots campaign that began in 1969. With 145,929 visitors in 2018, it ranks as the United States' 10th-least visited national park, just behind Nevada's Great Basin National Park.
Park history
Pre-park
Resource extraction on the Congaree River centered on cypress logging from 1898 when the Santee River Cypress Logging Company began to operate in the area of what is now the park. Owned by Francis Beidler and Benjamin F. Ferguson of Chicago, the company operated until 1914; subsequently, Beidler and his heirs retained ownership of the area. In the 1950s Harry R. E. Hampton was a member of the Cedar Creek Hunt Club and co-editor of The State. Hampton joined with Peter Manigault at the Charleston The Post and Courier to advocate preservation of the Congaree floodplain. Hampton formed the Beidler Forest Preservation Association in 1961. As a result of this advocacy a 1963 study by the National Park Service reported favorably on the establishment of a national monument.
Monument establishment
No progress was made in the 1960s. Renewed logging by the Beidlers in 1969 prompted the 1972 formation of the Congaree Swamp National Preserve Association (CSNPA). The CSNPA joined forces with the Sierra Club and other conservation organizations to promote federal legislation to preserve the tract. South Carolina Senators Strom Thurmond and Ernest F. Hollings introduced legislation in 1975 for the establishment of a national preserve. On October 18, 1976, legislation was passed to create Congaree Swamp National Monument. An expansion plan introduced by Hollings and Thurmond in 1988 enlarged the monument's boundaries.
Conversion to a national park
Over two-thirds of the national monument was designated a wilderness area on October 24, 1988, and it became an Important Bird Area on July 26, 2001. Congress redesignated the monument Congaree National Park on November 10, 2003, dropping the misleading "swamp" from the name, and simultaneously expanded its authorized boundary. As of December 31, 2011, most of the park's land was in federal ownership.
Environment
The park preserves a significant part of the Middle Atlantic coastal forests ecoregion. Although it is frequently referred to as a swamp, it is largely bottomland subject to periodic inundation by floodwaters.
It has been designated an old growth forest and part of the Old Growth Forest Network. The park also has one of the largest concentrations of champion trees in the world, with the tallest known examples of 15 species. Champion trees include a 361-point loblolly pine, a 384-point sweetgum, a 465-point cherrybark oak, a 354-point American elm, a 356-point swamp chestnut oak, a 371-point overcup oak, and a 219-point common persimmon.
Large animals possibly seen in the park include bobcats, deer, feral pigs, feral dogs, coyotes, armadillos, turkeys, and otters. Its waters are home to amphibians, turtles, snakes, and many types of fish, including bowfin, alligator gar, and catfish.
Amenities and attractions
In addition to being a designated wilderness area, a UNESCO biosphere reserve, an important bird area and a national natural landmark, Congaree National Park features primitive campsites and offers hiking, canoeing, kayaking, and bird watching. The park is also a popular spot for watching firefly displays on summer evenings. Primitive and backcountry camping are available. Some of the hiking trails include the Bluff Trail (0.7 mi), Weston Lake Loop Trail (4.6 mi), Oakridge Trail (7.5 mi), and King Snake Trail (11.1 mi) where hikers may spot deer, raccoon, opossum, and even bobcat tracks. The National Park Service rangers have current trail conditions which can be found in the Harry Hampton Visitor Center. Along with hiking trails, the park also has a marked canoe trail on Cedar Creek.
Most visitors to the park walk along the Boardwalk Loop, an elevated walkway through the swampy environment that protects delicate fungi and plant life at ground level. Congaree boasts both the tallest and the largest (42 cubic meters) loblolly pines (Pinus taeda) alive today, as well as several cypress trees well over 500 years old.
The Harry Hampton Visitor Center features exhibits about the natural history of the park, and the efforts to protect the swamp.
Monthly volunteer-led hikes are offered on some of the longer trails to give visitors an opportunity to get off the boardwalk and up close to nature.
Climate
According to the Köppen climate classification system, Congaree National Park has a humid subtropical climate (Cfa).
Geology
The park resides entirely within the Congaree River Floodplain Complex, with flood deposits of sand, silt, and clay. Muck and peat are the products of vegetation decay. The meander of the river has produced distinctive oxbow lakes. North of the park are the NE–SW regional trending Augusta Fault and the Terrace Complex, consisting of Pliocene fluvial terraces. South of the park are the Southern Bluffs, which have been eroding since the Late Pleistocene. West of the park are the Fall Line and the Piedmont.
Documentary
In 2008, South Carolina Educational Television (SCETV) produced a documentary on the history of the Congaree National Park titled Roots in the River: The Story of Congaree National Park. The documentary featured interviews with people involved in the movement that eventually led to the area's U.S. National Monument status, and observed the role the park plays in the surrounding community of the Lower Richland County area of South Carolina. The program first aired on the SCETV network in September 2009.
See also
List of national parks of the United States
List of National Natural Landmarks in South Carolina
References
Notes
Sources
The National Parks: Index 2001–2003. Washington: U.S. Department of the Interior.
https://web.archive.org/web/20110722135216/http://www.scetv.org/index.php/press/release/etv_to_broadcast_new_carolina_stories_documentary_roots_in_the_river
External links
Official site: Congaree National Park
Friends of Congaree Swamp
Wilderness.net page on the park
Panoramic photo of the exhibits in the Harry Hampton Visitor Center
Old-growth forests
Museums in Richland County, South Carolina
Natural history museums in South Carolina
Protected areas established in 2003
Ramsar sites in the United States
2003 establishments in South Carolina
Santee River
Wetlands of South Carolina
Landforms of Richland County, South Carolina
National Natural Landmarks in South Carolina | Congaree National Park | Biology | 1,484 |
4,024,093 | https://en.wikipedia.org/wiki/Thermal%20efficiency | In thermodynamics, the thermal efficiency ($\eta_{\text{th}}$) is a dimensionless performance measure of a device that uses thermal energy, such as an internal combustion engine, steam turbine, steam engine, boiler, furnace, refrigerator, or air conditioner.
For a heat engine, thermal efficiency is the ratio of the net work output to the heat input; in the case of a heat pump, thermal efficiency (known as the coefficient of performance or COP) is the ratio of net heat output (for heating), or the net heat removed (for cooling) to the energy input (external work). The efficiency of a heat engine is fractional as the output is always less than the input while the COP of a heat pump is more than 1. These values are further restricted by the Carnot theorem.
Overview
In general, energy conversion efficiency is the ratio between the useful output of a device and the input, in energy terms. For thermal efficiency, the input, $Q_{\text{in}}$, to the device is heat, or the heat-content of a fuel that is consumed. The desired output is mechanical work, $W_{\text{out}}$, or heat, $Q_{\text{out}}$, or possibly both. Because the input heat normally has a real financial cost, a memorable, generic definition of thermal efficiency is
$$\eta_{\text{th}} \equiv \frac{\text{benefit}}{\text{cost}} = \frac{\text{desired output energy}}{\text{input heat}}$$
From the first law of thermodynamics, the energy output cannot exceed the input, and by the second law of thermodynamics it cannot be equal in a non-ideal process, so
$$0 \le \eta_{\text{th}} < 1$$
When expressed as a percentage, the thermal efficiency must be between 0% and 100%. Efficiency must be less than 100% because there are inefficiencies such as friction and heat loss that convert the energy into alternative forms. For example, a typical gasoline automobile engine operates at around 25% efficiency, and a large coal-fuelled electrical generating plant peaks at about 46%. However, advances in Formula 1 motorsport regulations have pushed teams to develop highly efficient power units which peak around 45–50% thermal efficiency. The largest diesel engine in the world peaks at 51.7%. In a combined cycle plant, thermal efficiencies approach 60%. Such a real-world value may be used as a figure of merit for the device.
For engines where a fuel is burned, there are two types of thermal efficiency: indicated thermal efficiency and brake thermal efficiency. This form of efficiency is only appropriate when comparing similar types or similar devices.
For other systems, the specifics of the calculations of efficiency vary, but the non-dimensional input is still the same:
Efficiency = Output energy / input energy.
Heat engines
Heat engines transform thermal energy, or heat, $Q_{\text{in}}$, into mechanical energy, or work, $W_{\text{net}}$. They cannot do this task perfectly, so some of the input heat energy is not converted into work, but is dissipated as waste heat $Q_{\text{out}} < 0$ into the surroundings:
$$W_{\text{net}} = Q_{\text{in}} + Q_{\text{out}} = Q_{\text{in}} - |Q_{\text{out}}|$$
The thermal efficiency of a heat engine is the percentage of heat energy that is transformed into work. Thermal efficiency is defined as
$$\eta_{\text{th}} \equiv \frac{W_{\text{net}}}{Q_{\text{in}}} = 1 - \frac{|Q_{\text{out}}|}{Q_{\text{in}}}$$
The efficiency of even the best heat engines is low; usually below 50% and often far below. So the energy lost to the environment by heat engines is a major waste of energy resources. Since a large fraction of the fuels produced worldwide go to powering heat engines, perhaps up to half of the useful energy produced worldwide is wasted in engine inefficiency, although modern cogeneration, combined cycle and energy recycling schemes are beginning to use this heat for other purposes. This inefficiency can be attributed to three causes. First, there is an overall theoretical limit to the efficiency of any heat engine due to temperature, called the Carnot efficiency. Second, specific types of engines have lower limits on their efficiency due to the inherent irreversibility of the engine cycle they use. Third, the nonideal behavior of real engines, such as mechanical friction and losses in the combustion process, causes further efficiency losses.
Carnot efficiency
The second law of thermodynamics puts a fundamental limit on the thermal efficiency of all heat engines. Even an ideal, frictionless engine cannot convert anywhere near 100% of its input heat into work. The limiting factors are the temperature at which the heat enters the engine, $T_H$, and the temperature of the environment into which the engine exhausts its waste heat, $T_C$, measured in an absolute scale, such as the Kelvin or Rankine scale. From Carnot's theorem, for any engine working between these two temperatures:
$$\eta_{\text{th}} \le 1 - \frac{T_C}{T_H}$$
This limiting value is called the Carnot cycle efficiency because it is the efficiency of an unattainable, ideal, reversible engine cycle called the Carnot cycle. No device converting heat into mechanical energy, regardless of its construction, can exceed this efficiency.
Examples of $T_H$ are the temperature of hot steam entering the turbine of a steam power plant, or the temperature at which the fuel burns in an internal combustion engine. $T_C$ is usually the ambient temperature where the engine is located, or the temperature of a lake or river into which the waste heat is discharged. For example, if an automobile engine burns gasoline at a temperature of about 1,089 K (816 °C) and the ambient temperature is 294 K (21 °C), then its maximum possible efficiency is:
$$\eta_{\text{th}} \le 1 - \frac{294}{1089} \approx 73\%$$
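As a quick arithmetic check, here is a minimal Python sketch of the Carnot bound; the 1,089 K and 294 K figures are the illustrative values from the example above, not measurements of any particular engine:

```python
def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    """Maximum possible (Carnot) efficiency of a heat engine operating
    between absolute temperatures t_hot and t_cold, in kelvins."""
    if not 0 < t_cold < t_hot:
        raise ValueError("temperatures must satisfy 0 < t_cold < t_hot")
    return 1.0 - t_cold / t_hot

# Gasoline combustion at ~1089 K, ambient at ~294 K:
print(f"{carnot_efficiency(1089.0, 294.0):.0%}")  # prints: 73%
```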
It can be seen that since $T_C$ is fixed by the environment, the only way for a designer to increase the Carnot efficiency of an engine is to increase $T_H$, the temperature at which the heat is added to the engine. The efficiency of ordinary heat engines also generally increases with operating temperature, and advanced structural materials that allow engines to operate at higher temperatures is an active area of research.
Due to the other causes detailed below, practical engines have efficiencies far below the Carnot limit. For example, the average automobile engine is less than 35% efficient.
Carnot's theorem applies to thermodynamic cycles, where thermal energy is converted to mechanical work. Devices that convert a fuel's chemical energy directly into electrical work, such as fuel cells, can exceed the Carnot efficiency.
Engine cycle efficiency
The Carnot cycle is reversible and thus represents the upper limit on efficiency of an engine cycle. Practical engine cycles are irreversible and thus have inherently lower efficiency than the Carnot efficiency when operated between the same temperatures and . One of the factors determining efficiency is how heat is added to the working fluid in the cycle, and how it is removed. The Carnot cycle achieves maximum efficiency because all the heat is added to the working fluid at the maximum temperature , and removed at the minimum temperature . In contrast, in an internal combustion engine, the temperature of the fuel-air mixture in the cylinder is nowhere near its peak temperature as the fuel starts to burn, and only reaches the peak temperature as all the fuel is consumed, so the average temperature at which heat is added is lower, reducing efficiency.
An important parameter in the efficiency of combustion engines is the specific heat ratio of the air-fuel mixture, γ. This varies somewhat with the fuel, but is generally close to the air value of 1.4. This standard value is usually used in the engine cycle equations below, and when this approximation is made the cycle is called an air-standard cycle.
Otto cycle: automobiles The Otto cycle is the name for the cycle used in spark-ignition internal combustion engines such as gasoline and hydrogen fuelled automobile engines. Its theoretical efficiency depends on the compression ratio r of the engine and the specific heat ratio γ of the gas in the combustion chamber:
$$\eta_{\text{th}} = 1 - \frac{1}{r^{\gamma-1}}$$
Thus, the efficiency increases with the compression ratio. However the compression ratio of Otto cycle engines is limited by the need to prevent the uncontrolled combustion known as knocking. Modern engines have compression ratios in the range 8 to 11, resulting in ideal cycle efficiencies of 56% to 61%.
Diesel cycle: trucks and trains In the Diesel cycle used in diesel truck and train engines, the fuel is ignited by compression in the cylinder. The efficiency of the Diesel cycle depends on r and γ like the Otto cycle, and also on the cutoff ratio, $r_c$, which is the ratio of the cylinder volume at the beginning and end of the combustion process:
$$\eta_{\text{th}} = 1 - \frac{1}{r^{\gamma-1}}\left(\frac{r_c^{\gamma} - 1}{\gamma\,(r_c - 1)}\right)$$
The Diesel cycle is less efficient than the Otto cycle when using the same compression ratio. However, practical Diesel engines are 30%–35% more efficient than gasoline engines. This is because, since the fuel is not introduced to the combustion chamber until it is required for ignition, the compression ratio is not limited by the need to avoid knocking, so higher ratios are used than in spark ignition engines.
Rankine cycle: steam power plants The Rankine cycle is the cycle used in steam turbine power plants. The overwhelming majority of the world's electric power is produced with this cycle. Since the cycle's working fluid, water, changes from liquid to vapor and back during the cycle, their efficiencies depend on the thermodynamic properties of water. The thermal efficiency of modern steam turbine plants with reheat cycles can reach 47%, and in combined cycle plants, in which a steam turbine is powered by exhaust heat from a gas turbine, it can approach 60%.
Brayton cycle: gas turbines and jet engines The Brayton cycle is the cycle used in gas turbines and jet engines. It consists of a compressor that increases pressure of the incoming air, then fuel is continuously added to the flow and burned, and the hot exhaust gasses are expanded in a turbine. The efficiency depends largely on the ratio of the pressure inside the combustion chamber $p_2$ to the pressure outside $p_1$:
$$\eta_{\text{th}} = 1 - \left(\frac{p_2}{p_1}\right)^{\frac{1-\gamma}{\gamma}}$$
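The ideal-cycle formulas above lend themselves to a short numerical comparison. A minimal Python sketch follows, assuming the air-standard value γ = 1.4; the compression ratios, cutoff ratio, and pressure ratio are illustrative choices, not data from any particular engine:

```python
GAMMA = 1.4  # specific heat ratio of air (air-standard assumption)

def otto_efficiency(r: float, g: float = GAMMA) -> float:
    """Ideal Otto cycle: eta = 1 - 1 / r**(g - 1)."""
    return 1.0 - r ** (1.0 - g)

def diesel_efficiency(r: float, rc: float, g: float = GAMMA) -> float:
    """Ideal Diesel cycle with compression ratio r and cutoff ratio rc."""
    return 1.0 - r ** (1.0 - g) * (rc ** g - 1.0) / (g * (rc - 1.0))

def brayton_efficiency(pressure_ratio: float, g: float = GAMMA) -> float:
    """Ideal Brayton cycle with pressure ratio p2 / p1."""
    return 1.0 - pressure_ratio ** ((1.0 - g) / g)

print(f"Otto, r=8:           {otto_efficiency(8):.1%}")   # ~56.5%
print(f"Otto, r=11:          {otto_efficiency(11):.1%}")  # ~61.7%
print(f"Diesel, r=18, rc=2:  {diesel_efficiency(18, 2):.1%}")
print(f"Brayton, p2/p1=10:   {brayton_efficiency(10):.1%}")
```

The two Otto results reproduce the 56%–61% range quoted above for compression ratios of 8 to 11.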
Other inefficiencies
One should not confuse thermal efficiency with other efficiencies that are used when discussing engines. The above efficiency formulas are based on simple idealized mathematical models of engines, with no friction and working fluids that obey simplified thermodynamic models. Real engines have many departures from ideal behavior that waste energy, reducing actual efficiencies below the theoretical values given above. Examples are:
friction of moving parts
inefficient combustion
heat loss from the combustion chamber
departure of the working fluid from the thermodynamic properties of an ideal gas
aerodynamic drag of air moving through the engine
energy used by auxiliary equipment like oil and water pumps.
inefficient compressors and turbines
imperfect valve timing
These factors may be accounted when analyzing thermodynamic cycles, however discussion of how to do so is outside the scope of this article.
Energy conversion
For a device that converts energy from another form into thermal energy (such as an electric heater, boiler, or furnace), the thermal efficiency is
$$\eta_{\text{th}} = \frac{Q_{\text{out}}}{Q_{\text{in}}}$$
where the quantities are heat-equivalent values.
So, for a boiler that produces 210 kW (or 700,000 BTU/h) output for each 300 kW (or 1,000,000 BTU/h) heat-equivalent input, its thermal efficiency is 210/300 = 0.70, or 70%. This means that 30% of the energy is lost to the environment.
An electric resistance heater has a thermal efficiency close to 100%. When comparing heating units, such as a highly efficient electric resistance heater to an 80% efficient natural gas-fuelled furnace, an economic analysis is needed to determine the most cost-effective choice.
Effects of fuel heating value
The heating value of a fuel is the amount of heat released during an exothermic reaction (e.g., combustion) and is a characteristic of each substance. It is measured in units of energy per unit of the substance, usually mass, such as: kJ/kg, J/mol.
The heating value for fuels is expressed as the HHV, LHV, or GHV to distinguish treatment of the heat of phase changes:
Higher heating value (HHV) is determined by bringing all the products of combustion back to the original pre-combustion temperature, and in particular condensing any vapor produced. This is the same as the thermodynamic heat of combustion.
Lower heating value (LHV) (or net calorific value) is determined by subtracting the heat of vaporization of the water vapor from the higher heating value. The energy required to vaporize the water therefore is not realized as heat.
Gross heating value accounts for water in the exhaust leaving as vapor, and includes liquid water in the fuel prior to combustion. This value is important for fuels like wood or coal, which will usually contain some amount of water prior to burning.
Which definition of heating value is being used significantly affects any quoted efficiency. Not stating whether an efficiency is HHV or LHV renders such numbers very misleading.
Heat pumps and refrigerators
Heat pumps, refrigerators and air conditioners use work to move heat from a colder to a warmer place, so their function is the opposite of a heat engine. The work energy ($W_{\text{in}}$) that is applied to them is converted into heat, and the sum of this energy and the heat energy that is taken up from the cold reservoir ($Q_C$) is equal to the magnitude of the total heat energy given off to the hot reservoir ($|Q_H|$):
$$|Q_H| = Q_C + W_{\text{in}}$$
Their efficiency is measured by a coefficient of performance (COP). Heat pumps are measured by the efficiency with which they give off heat to the hot reservoir, $\text{COP}_{\text{heating}}$; refrigerators and air conditioners by the efficiency with which they take up heat from the cold space, $\text{COP}_{\text{cooling}}$:
$$\text{COP}_{\text{heating}} = \frac{|Q_H|}{W_{\text{in}}}, \qquad \text{COP}_{\text{cooling}} = \frac{Q_C}{W_{\text{in}}}$$
The reason the term "coefficient of performance" is used instead of "efficiency" is that, since these devices are moving heat, not creating it, the amount of heat they move can be greater than the input work, so the COP can be greater than 1 (100%). Therefore, heat pumps can be a more efficient way of heating than simply converting the input work into heat, as in an electric heater or furnace.
Since they are heat engines, these devices are also limited by Carnot's theorem. The limiting value of the Carnot 'efficiency' for these processes, with the equality theoretically achievable only with an ideal 'reversible' cycle, is:
$$\text{COP}_{\text{heating}} \le \frac{T_H}{T_H - T_C}, \qquad \text{COP}_{\text{cooling}} \le \frac{T_C}{T_H - T_C}$$
The same device used between the same temperatures is more efficient when considered as a heat pump than when considered as a refrigerator, since
$$\text{COP}_{\text{heating}} = \text{COP}_{\text{cooling}} + 1$$
This is because when heating, the work used to run the device is converted to heat and adds to the desired effect, whereas if the desired effect is cooling the heat resulting from the input work is just an unwanted by-product. Sometimes, the term efficiency is used for the ratio of the achieved COP to the Carnot COP, which can not exceed 100%.
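A minimal Python sketch of these Carnot limits, using illustrative temperatures of 293 K (indoors) and 273 K (outdoors); it also confirms that the heating COP exceeds the cooling COP by exactly one:

```python
def carnot_cop_heating(t_hot: float, t_cold: float) -> float:
    """Upper bound on COP for a heat pump delivering heat at t_hot (kelvins)."""
    return t_hot / (t_hot - t_cold)

def carnot_cop_cooling(t_hot: float, t_cold: float) -> float:
    """Upper bound on COP for a refrigerator removing heat at t_cold (kelvins)."""
    return t_cold / (t_hot - t_cold)

t_hot, t_cold = 293.0, 273.0
heating = carnot_cop_heating(t_hot, t_cold)  # 14.65
cooling = carnot_cop_cooling(t_hot, t_cold)  # 13.65
assert abs(heating - (cooling + 1.0)) < 1e-9  # COP_heating = COP_cooling + 1
print(f"COP_heating <= {heating:.2f}, COP_cooling <= {cooling:.2f}")
```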
Energy efficiency
The 'thermal efficiency' is sometimes called the energy efficiency. In the United States, in everyday usage the SEER is the more common measure of energy efficiency for cooling devices, as well as for heat pumps when in their heating mode. For energy-conversion heating devices their peak steady-state thermal efficiency is often stated, e.g., 'this furnace is 90% efficient', but a more detailed measure of seasonal energy effectiveness is the annual fuel use efficiency (AFUE).
Heat exchangers
The role of a heat exchanger is to transfer heat between two mediums, so the performance of the heat exchanger is closely related to energy or thermal efficiency. A counter flow heat exchanger is the most efficient type of heat exchanger in transferring heat energy from one circuit to the other. However, for a more complete picture of heat exchanger efficiency, exergetic considerations must be taken into account. The thermal efficiency of an internal combustion engine is typically higher than that of an external combustion engine.
See also
Kalina cycle
Electrical efficiency
Mechanical efficiency
Heat engine
Federal roofing tax credit for energy efficiency (US)
Lower heating value
Cost of electricity by source
Higher heating value
Energy conversion efficiency
References
Thermodynamic properties
Heating, ventilation, and air conditioning
Energy conversion
Engineering thermodynamics | Thermal efficiency | Physics,Chemistry,Mathematics,Engineering | 3,185 |
1,273,765 | https://en.wikipedia.org/wiki/Cerberin | Cerberin is a type of cardiac glycoside, found in the seeds of the dicotyledonous angiosperm genus Cerbera; including the suicide tree (Cerbera odollam) and the sea mango (Cerbera manghas). As a cardiac glycoside, cerberin disrupts the function of the heart by blocking its sodium and potassium ATPase. Cerberin can be used as a treatment for heart failure and arrhythmia.
Overconsumption of cerberin results in poisoning. Symptoms include nausea, vomiting, and bradycardia, often leading to death. Cerberin containing plants such as Cerbera odollam have historically been used for suicide and homicide in their growth regions due to their high toxicity.
Structure and properties
Structure
Cerberin, like all cardiac glycosides, has as its core a steroid-type set of four carbocycles (all-carbon rings). In cerberin, this steroid core is connected, first, to a separate oxygen-containing lactone ring (shown here, upper right of box), and second, to a sugar substituent (shown in infobox structure, left of image).
There are two types of cardiac glycosides depending on the characteristics of the lactone moiety. Cerberin, with its five-membered ring, belongs to the cardenolide class; cardenolides are 23-carbon steroids with methyl groups at positions 10 and 13 of the steroid ring system, and the appended five-membered butenolide-type of lactone at C-17.
Many types of sugars can be attached to cardiac glycosides; in the case of cerberin, it is an O-acetylated derivative of α-L-thevetose, which is itself a derivative of L-glucose (6-deoxy-3-O-methyl-α-L-glucopyranose). The cardenolide substructure to which the sugar is attached has also been independently characterised, and can be referred to as digitoxigenin (see image), hence, cerberin is, synonymously, (L-2′-O-acetylthevetosyl)digitoxigenin. As well, the non-acetylated structure was independently discovered and named neriifolin, and so cerberin is, synonymously, 2′-acetylneriifolin.
Physical properties
Cerberin is slightly soluble in chloroform and methanol. It is white to pale yellow in color.
Toxicity
The literature on cerberin toxicity, per se, remains sparse; unless otherwise specifically indicated, the following is general information regarding cardiac glycoside toxicity, with an emphasis on information from cardenolides (i.e., steroid natural products bearing the same digitoxigenin substructure).
A historic, reported lethal dose of cerberin in dog is 1.8 mg/kg, and in cat 3.1 mg/kg; that is, it is very low. However, toxicity in humans is variable. One study showed that with treatment, humans could survive dosages of 1/2 kernel 94% of the time, 1 kernel 92% of the time, 2 kernels 71% of the time, and 4 kernels 67% of the time. Deaths were also variable when cerberin was used for trials by ordeal.
Symptoms
Those who ingest cerberin experience, within an hour, a variety of gastrointestinal and cardiac symptoms, particularly nausea, vomiting, abdominal pain, and bradycardia. Forensic sources indicate presentations for cardiac toxin poisonings that additionally include burning sensations in the mouth, diarrhea, headache, dilated pupils, irregular beating of the heart, and drowsiness; coma and death most often eventually follow. There is no clear, reported correlation between the dose and mortality (see above); death often occurs after 3–6 hours.
Poisonings
There is significant evidence from Cerbera with regard to lethal poisonings. Individual cases of poisoning from Cerbera are documented including direct and indirect, and intentional and unintentional ingestion. Cases of human fatalities from consumption of crab where the crustacean had earlier consumed plants producing cerberin or related cardenolides are known.
Aside from accidental poisoning, cerberin has also been used for suicide and homicide. For example, a 2004 study found that Cerbera odollam was responsible for an average of one suicide death per week between 1989 and 1999 in Kerala, India. It was also the cause of 50% of plant poisoning cases and 10% of all poisonings in that region.
Cerberin is ideal for use as a poison because it is fast acting, its flavor is easy to mask when added to food, and it is relatively undetectable because there is only one analytical test to determine its presence in tissues after death.
Mechanism of action
There is very little formal, modern published information on the mechanism of action of cerberin.
Cerberin, as a cardiac glycoside, binds to and inhibits the cellular Na+/K+-ATPase by binding to the alpha-subunit of the enzyme, which is the catalytic moiety. There are also beta- and FXYD-subunits. These two subunits influence the affinity of cerberin for the Na+/K+-ATPase. The expression of the beta- and FXYD-subunits is tissue-specific. Because of this, cerberin will have different effects in different tissues. When cerberin binds to the Na+/K+-ATPase, the conformation of the enzyme changes. This leads to the activation of signal transduction pathways in the cell. A detailed description of the effects of cerberin in the cell is given below.
Na+/K+-ATPase pump
Na+/K+-ATPase is an ion transport system of sodium and potassium ions and requires energy. It is often used in many types of cellular systems. Sodium ions move out of the cell and potassium ions enter the cell (3:2) with the aid of this pump. During the transport of these ions, the enzyme undergoes several changes in conformation. Including a phosphorylation and dephosphorylation step.
The transport of Na+ and K+ is important for cell survival. Cardiac glycosides, such as cerberin, alter the transport of these ions against their gradient. Cerberin is able to bind to the extracellular part of the Na+/K+-ATPase pump and can block the dephosphorylation step. This inhibition makes it impossible to transport sodium and potassium across the membrane, raising the intracellular concentration of Na+.
Na+/Ca2+-exchanger
Accumulation of intracellular sodium ions cause an increase of intracellular calcium. This is because the calcium-sodium exchange pump’s activity decreases. The calcium-sodium exchange pump exchanges Ca2+ and Na+ without the use of energy. This exchanger is essential for maintaining sodium and calcium homeostasis. The exact mechanism by which this exchanger works is unclear. It is known that calcium and sodium can move in either direction across the membrane of muscle cells. It is also known that three sodium ions are exchanged for each calcium and that an increase in intracellular sodium concentration through this exchange mechanism leads to an increase in intracellular calcium concentration. As intracellular sodium increases, the concentration gradient driving sodium into the cell across the exchanger is reduced. As a result, the activity of the exchanger is reduced, which decreases the movement of calcium out of the cell.
Thus by inhibiting the Na+/K+-ATPase, cardiac glycosides cause intracellular sodium concentration to increase. This leads to an accumulation of intracellular calcium via the Na+/Ca2+-exchange system with the following effects:
In the heart, increased intracellular calcium causes more calcium to be released, thereby making more calcium available to bind to troponin-C, which increases contractility (inotropy).
Inhibition of the Na+/K+-ATPase in vascular smooth muscle causes depolarization, which causes smooth muscle contraction.
The conformational change of the Na+/K+-ATPase plays a role not only in the contraction of muscles, but also in cell growth, cell motility and apoptosis. Due to the binding of cerberin, specific second messengers can be activated. After a cascade of cellular interactions, nuclear transcription factors bind to the DNA and new enzymes are made. These enzymes can, for example, play a role in cell proliferation.
Metabolism
Very little is known about the metabolism of cerberin. The related digoxin, another cardiac glycoside, is largely excreted unchanged by the kidneys (60–80%), with the remainder mostly metabolised by the liver. The half-life of digoxin is 36–48 hours in people with normal renal function and up to 6 days in people with compromised renal function. This makes renal function an important factor in digoxin toxicity, and perhaps in cerberin toxicity as well.
Efficacy
There is very little formal, modern published information on the pharmacological actions of cerberin. One primary source reports that its ingestion results in electrocardiogram (ECG) changes, such as various types of bradycardia (e.g., sinus bradycardia), AV dissociation, and junctional rhythms; second-degree sinoatrial block and nodal rhythm are also described.
In the case of digitalis administration, ST depression or T wave inversion may occur without indicating toxicity; however, PR interval prolongation does indicate toxicity.
Therapeutic uses
There are no clearly established therapeutic uses of the title compound, cerberin. Digitalis compounds, related cardiac glycosides, function through the inhibition of the Na+/K+-ATPase-pump, and have been widely used for in the treatment of chronic heart failure and arrhythmias; although newer and more efficacious treatments for heart failure are available, digitalis compounds are still used. Some cardiac glycosides have been shown to have antiproliferative and apoptotic effects, and are therefore of interest as potential agents in cancer chemotherapy; there is a single report to date of possible antiproliferative activity of cerberin.
Further reading
References
Cardiac glycosides
Plant toxins | Cerberin | Chemistry | 2,207 |
37,402,637 | https://en.wikipedia.org/wiki/APRS%20Calling | APRS Calling is a manual procedure for calling stations on the Automatic Packet Reporting System (APRS) to initiate communications on another frequency, or possibly by other means. It is inspired by Digital Selective Calling, a part of the Global Maritime Distress Safety System. It also builds on existing digital procedures inherited from morse code and radioteletype operation. ITU Q codes are used in conjunction with APRS text messages to implement APRS calling. APRS calling is intended to complement monitoring voice calling frequencies.
Procedure
The calling station sends a QSX signal to the station or group they wish to reach using an APRS text message. The QSX should include the necessary information for contact, possibly including frequency and mode.
Once the called station or stations are ready to communicate on the specified channel, they answer using a QSX text message on APRS.
The stations shift communications to the arranged channel.
Example
All APRS transmissions include the call sign programmed into the APRS unit. The text message doesn't necessarily require the station identification commonly seen in voice, CW or RTTY exchanges, so long as the programmed call sign is valid.
This example shows N6BRK announcing a net on 147.480 MHz to the NALCO APRS group. When ready to communicate on the coordinated frequency, KJ6VVJ responds with an acknowledgment to the QSX. If the operating mode isn't obvious from the context of the frequency, the initiating station's QSX should specify what it is.
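A plausible rendering of this exchange is sketched below; the call signs, group, and frequency are those named above, while the exact message wording is an illustrative assumption rather than a prescribed format:

```
N6BRK  -> NALCO  : QSX 147.480 FM   (announcing the net frequency and mode)
KJ6VVJ -> N6BRK  : QSX 147.480      (acknowledgment; ready on that frequency)
```

The stations would then move to 147.480 MHz to conduct the net.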
See also
ACP-131 - Combined Communications-Electronics Board Communications Instructions / Operating Signals
Amateur radio
References
American Radio Relay League. Field Service Form FSD-218. American Radio Relay League, 2004.
Bass, Richard K. GMDSS A study guide for the Global Maritime Distress Safety System. Tele-Technology, 2007.
Brehaut, Denise. GMDSS A User's Handbook. Bloomsbury Publishing Plc, 2009.
Combined Communication Electronics Board (CCEB). Communications Instructions ACP 131 (F) Operating Signals. Combined Communications-Electronics Board, 2006.
External links
Amateur Radio Universal Text Messaging/Contact Initiative
Automatic Voice Relay System
ARRL Field Service Form FSD-218 (1/04)
Digital amateur radio
Brevity codes
Packet radio | APRS Calling | Technology | 464 |
60,071,977 | https://en.wikipedia.org/wiki/Suillus%20kaibabensis | Suillus kaibabensis is a species of fungus in the family Boletaceae. The species was first described scientifically by American mycologist Harry D. Thiers.
Description
Suillus kaibabensis is a hardy yellow member of the genus Suillus. While yellow is the primary color, the fruiting body can also include brown and white tones in varying degrees. This mushroom has a stem around 2–4 cm long and 1–2 cm thick. The cap is broad and convex to flat, typical of many species of boletes. The stipe is bare, spotted with olive brown, and can be reddish-brown at the base. The pores are yellowish-brown and can turn a salmon color with age.
Taxonomy
This species was first described by Harry D. Thiers in 1978. It bears a close resemblance to Suillus granulatus. The species looks much like other Suillus and is rather difficult to tell apart except by its close association with ponderosa pine. The Suillus Filter is an easy identification aid for the genus.
Ecology
Suillus kaibabensis grows in the Four Corners region of Arizona, New Mexico, Utah, and Colorado. This species associates exclusively with ponderosa pine; it is mycorrhizal and requires these trees to survive. It produces fruiting bodies during the wetter season of late July to September.
See also
List of Boletus species
List of North American boletes
References
kaibabensis
Fungi of North America
Fungus species | Suillus kaibabensis | Biology | 310 |
25,312,638 | https://en.wikipedia.org/wiki/Hair%20whorl%20%28horse%29 | A hair whorl is a patch of hair growing in the opposite direction of the rest of the hair. Hair whorls can occur on animals with hairy coats, and are often found on horses and cows. Locations where whorls are found in equines include the stomach, face, stifle and hocks. Hair whorls in horses are also known as crowns, swirls, trichoglyphs, or cowlicks.
Hair whorls are sometimes classified according to the direction of hair growth (e.g. clockwise or counterclockwise), shape, or other physical characteristics.
Anecdotal evidence suggests a correlation between the location, number, or type of whorls and behaviour or temperament in horses and other species. There is some research suggesting that the direction of hair whorls may correlate with a horse's preference for the right or left lead and other directionality.
History
The theories that hair whorls could describe various physical and personality characteristics in horses have been around for thousands of years.
There are references of hair whorls in the works of the Indian sage Salihotra.
Bedouin horsemen used whorls to determine the value of horses for sale. One Arabian horse has been recorded with 40 whorls on his body, although the average horse has around six. Bedouins looked for whorls between the horse's ears as a sign of swiftness, and if there were any on either side of the neck, they were known as the 'finger of the Prophet'.
One whorl-related mark is the "Prophet's Thumbprint", a harmless birthmark in the form of an indentation, usually found on the side of a horse's neck, which comes with its own legend.
The Prophet Mohammed was wandering the desert with his herd of horses for many days, and as they approached an oasis he sent them forth to drink. But as the thirsty horses approached the water, he called them back. Only five of his mares stopped and returned to him, and to thank them for their loyalty he blessed them by pressing his thumbprint into their necks.
It’s believed that a horse with such a mark will be outstanding, being a descendant of one of these brood mares that the Prophet Mohammed particularly treasured.
Other Bedouin beliefs include:
A whorl on the chest meant prosperity.
A whorl on the girth was a sign of good fortune, and an increase in flocks
A whorl on the flank was known as a 'spur whorl' and if curved up meant safety in battle; if inclined downwards it meant prosperity. The Byerley Turk, a founding sire of the Thoroughbred breed, was said to have spur whorls and was never hurt in battle.
The Whorl of the Sultan was located on the windpipe, and meant love and prosperity.
Whorls above the eyes meant the master was to die of a head injury
The whorl of the coffin was located close to the withers. If sloping downwards towards the shoulder it meant the rider would die in the saddle, probably in battle or from a gunshot.
Classification
There are several types of whorls on horses:
Simple: hairs draw into a single point from all directions
Tufted: hairs converges and piles up into a tuft
Linear: hair growing in opposite directions meet along the same line vertically
Crested: hair growing in opposite directions meet to form a crest
Feathered: hair meets along a line but at an angle to form a feathered pattern
Relation to behaviour
Several studies have reported a statistical relationship between the location, number, or type of whorls and behaviour or temperament in horses.
One study of 219 working horses found a relation between the direction of facial hair whorls and motor laterality; right-lateralised horses had significantly more clockwise facial hair whorls and left-lateralised horses had significantly more counter-clockwise facial hair whorls.
Konik horses with a single whorl located above their eyes were rated as more difficult to handle, whereas horses with a single whorl located below or right between their eyes were easier to handle. Horses whose whorls were elongated or doubled acted the most cautiously when approaching an unfamiliar object: they looked longer and were slower to approach than the single-whorled horses.
Lundy ponies with 'left' whorls score highly on calmness, placidness, enthusiasm and friendliness, whereas those with 'right' whorls score highly on wariness, associated flightiness and unfriendliness. Ponies with two facial whorls are rated as significantly more 'enthusiastic' and less 'wary' than those with one or three facial whorls.
Whorls on Thoroughbred horses may be physical indicators of a predisposition to perform repetitive abnormal behaviours, i.e. stereotypies.
Notes
References
Identification of domesticated animals
Ethology
Horse coat colors
Spirals | Hair whorl (horse) | Biology | 964 |
1,968,245 | https://en.wikipedia.org/wiki/Paul%20Kunz | Paul Kunz (December 20, 1942 – September 12, 2018) was an American particle physicist and software developer, who initiated the deployment of the first web server outside of Europe. After a meeting in September 1991 with Tim Berners-Lee of CERN, he returned to the Stanford Linear Accelerator Center (SLAC) with word of the World Wide Web. By Thursday, December 12, 1991, there was an active Web server installed and operational at SLAC, establishing the first Web server in the US, the SPIRES HEP, connected to the SPIRES High Energy Physics database, thanks to the efforts of Kunz, Louise Addis, and Terry Hung.
He was also the originator of the free/open source GNUstep implementation of the NeXTSTEP framework, and was behind the idea for objcX (Objective-C for the X Window System). He was the chief developer of HippoDraw, a statistical analysis application primarily intended for the analysis and presentation of particle physics and astrophysics data at SLAC.
External links
"GNUstep: Who's Who Developers"
"Early World Wide Web at SLAC"
References
1942 births
2018 deaths
American particle physicists | Paul Kunz | Technology | 247 |
19,657,849 | https://en.wikipedia.org/wiki/Blossom%20%28functional%29 | In numerical analysis, a blossom is a functional that can be applied to any polynomial, but is mostly used for Bézier and spline curves and surfaces.
The blossom of a polynomial ƒ, often denoted $\mathcal{B}[f](u_1, \dots, u_d)$, is completely characterised by the three properties:
It is a symmetric function of its arguments:
$$\mathcal{B}[f](u_1, \dots, u_d) = \mathcal{B}[f]\big(u_{\pi(1)}, \dots, u_{\pi(d)}\big)$$
(where π is any permutation of its arguments).
It is affine in each of its arguments:
$$\mathcal{B}[f](\alpha u + \beta v,\, u_2, \dots, u_d) = \alpha\,\mathcal{B}[f](u,\, u_2, \dots, u_d) + \beta\,\mathcal{B}[f](v,\, u_2, \dots, u_d) \quad \text{whenever } \alpha + \beta = 1$$
It satisfies the diagonal property:
$$\mathcal{B}[f](u, \dots, u) = f(u)$$
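For example, for the quadratic $f(t) = t^2$ these three properties force the blossom to be
$$\mathcal{B}[f](u_1, u_2) = u_1 u_2,$$
which is symmetric, affine in each argument, and reduces to $f(u) = u^2$ on the diagonal $u_1 = u_2 = u$.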
References
Numerical analysis | Blossom (functional) | Mathematics | 93 |
12,077,900 | https://en.wikipedia.org/wiki/UK%20Threat%20Levels | The United Kingdom Terror Threat Levels, often referred to as UK Threat Levels, are the alert states that have been in use since 1 August 2006 by the British government to warn of forms of terrorist activity. In September 2010 the threat levels for Northern Ireland-related terrorism were also made available. In July 2019 changes were made to the terrorism threat level system, to reflect the threat posed by all forms of terrorism, irrespective of ideology. There is now a single national threat level describing the threat to the UK, which includes Islamist, Northern Ireland, left-wing and right-wing terrorism. Before 2006, a colour-based alert scheme known as BIKINI state was used. The response indicates how government departments and agencies and their staffs should react to each threat level.
Categories of threat
Since 23 July 2019, the Home Office has reported two different categories of terrorist threat:
National Threat Level.
Northern Ireland-related Threat Level to Northern Ireland
Previously, since 24 September 2010, the Home Office has reported three different categories of terrorist threat:
Threat from international terrorism.
Terrorism threat related to Northern Ireland in Northern Ireland itself.
Terrorism threat related to Northern Ireland in Great Britain (i.e. excluding Northern Ireland).
A fourth category of terrorist threat is also assessed but is not disclosed, relating to threats to sectors of the UK's critical national infrastructure such as the London Underground, National Rail network and power stations.
The Joint Terrorism Analysis Centre (JTAC) is responsible for setting the threat level from international terrorism and the Security Service (MI5) is responsible for setting both threat levels related to Northern Ireland. The threat level informs decisions on protective security measures taken by public bodies, the police and the transport sector.
Threat levels
Threat Levels are decided using the following information:
Available intelligence. It is rare that specific threat information is available and can be relied upon. More often, judgements about the threat will be based on a wide range of information, which is often fragmentary, including the level and nature of current terrorist activity, comparison with events in other countries and previous attacks. Intelligence is only ever likely to reveal part of the picture.
Terrorist capability. An examination of what is known about the capabilities of the terrorists in question and the method they may use based on previous attacks or from intelligence. This would also analyse the potential scale of the attack.
Terrorist intentions. Using intelligence and publicly available information to examine the overall aims of the terrorists and the ways they may achieve them including what sort of targets they would consider attacking.
Timescale. The threat level expresses the likelihood of an attack in the near term. Past incidents have shown that some attacks take years to plan, while others are put together more quickly. In the absence of specific intelligence, a judgement must be made about how close an attack might be to fruition. Threat levels do not have any set expiry date, but are regularly reviewed to ensure that they remain current.
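As an illustrative sketch only (not drawn from this article's text), the five published levels of the national scale, Low, Moderate, Substantial, Severe and Critical, can be modelled as an ordered enumeration; the comments paraphrase MI5's published wordings, and the helper function is hypothetical:

    from enum import IntEnum

    class ThreatLevel(IntEnum):
        """UK terrorism threat levels, ordered lowest to highest."""
        LOW = 1          # an attack is highly unlikely
        MODERATE = 2     # an attack is possible, but not likely
        SUBSTANTIAL = 3  # an attack is likely
        SEVERE = 4       # an attack is highly likely
        CRITICAL = 5     # an attack is highly likely in the near future

    def heightened_security_required(level: ThreatLevel) -> bool:
        # Hypothetical helper: treat SEVERE and above as triggering
        # additional protective-security measures.
        return level >= ThreatLevel.SEVERE

    print(heightened_security_required(ThreatLevel.SUBSTANTIAL))  # False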
History
Threat levels were originally produced by MI5's Counter-Terrorism Analysis Centre for internal use within the British government. Assessments known as Security Service Threat Reports or Security Service Reports were issued to assess the level of threat to British interests in a given country or region. They had six levels: Imminent, High, Significant, Moderate, Low and Negligible. Following terrorist attacks in Indonesia in 2002, the system was criticised by the Intelligence and Security Committee of Parliament (ISC) as insufficiently clear and needing to be of greater use to "customer departments".
The 7 July 2005 London bombings prompted the government to update the threat level system following a recommendation from the ISC that it should deliver "a greater transparency of the threat level and alert systems as a whole, and in particular [it is recommended] that more thought is given to what is put in the public domain about the level of threat and required level of alert." The system was accordingly simplified and made easier to understand. Since 2006, MI5 and the Home Office have published international terrorism threat levels for the entire UK on their websites, and since 2010 they have also published threat levels for Northern Ireland, with separate threat levels for Northern Ireland and the rest of the UK.
2019 'New Reporting Format'
In July 2019 changes were made to the terrorism threat level system creating a 'New Format' of threat levels, to reflect the threat posed by all forms of terrorism, irrespective of ideology. There is now a single national threat level describing the threat to the UK, which includes Islamist, Northern Ireland, left-wing and right-wing terrorism.
Changes to threat levels
The following table records changes to the threat levels from July 2019 to the present:
Old-format historical threat levels
Since 2006, information about the national threat level has been available on the MI5 and Home Office websites. In September 2010 the threat levels for Northern Ireland-related terrorism were also made available. The following table records changes to the threat levels from August 2006 to July 2019, before the 'New Format' was put into place:
References
External links
Current Threat Level, Home Office
2006 introductions
Alert measurement systems
Threat Levels
Emergency management in the United Kingdom | UK Threat Levels | Technology | 1,039 |
34,344,124 | https://en.wikipedia.org/wiki/Terminator%202%3A%20Judgment%20Day | Terminator 2: Judgment Day is a 1991 American science fiction action film directed by James Cameron, who co-wrote the script with William Wisher. Starring Arnold Schwarzenegger, Linda Hamilton, and Robert Patrick, it is the sequel to The Terminator (1984) and is the second installment in the Terminator franchise. In the film, the malevolent artificial intelligence Skynet sends a Terminator—a highly advanced killing machine—back in time to 1995 to kill the future leader of the human resistance John Connor when he is a child. The resistance sends back a less advanced, reprogrammed Terminator to protect Connor and ensure the future of humanity.
The Terminator was considered a significant success, enhancing Schwarzenegger's and Cameron's careers, but work on a sequel stalled because of animosity between the pair and Hemdale Film Corporation, which partially owned the film's rights. In 1990, Schwarzenegger and Cameron persuaded Carolco Pictures to purchase the rights from The Terminator producer Gale Anne Hurd and Hemdale, which was financially struggling. A release date was set for the following year, leaving Cameron and Wisher seven weeks to write the script. Principal photography lasted from October 1990 to March 1991, taking place in and around Los Angeles on an estimated $94–102million budget, making it the most expensive film made at the time. The advanced visual effects by Industrial Light & Magic (ILM), which include the first use of a computer-generated main character in a blockbuster film, resulted in a schedule overrun. Theatrical prints were not delivered to theaters until the night before the picture's release on July 3, 1991.
Terminator 2 was a critical and commercial success, grossing $519–520.9million at the box office to become the highest-grossing film of 1991 worldwide and the third-highest-grossing film of its time. The film won several accolades, including Saturn, BAFTA, and Academy awards. Terminator 2 merchandise includes video games, comic books, novels, and T2-3D: Battle Across Time, a live-action attraction.
Terminator 2 is considered one of the best science fiction, action, and sequel films ever made. It is also seen as a major influence on visual effects in films, helping usher in the transition from practical effects to reliance on computer-generated imagery. The United States Library of Congress selected it for preservation in the National Film Registry in 2023. Although Cameron intended for Terminator 2 to be the end of the franchise, it was followed by a series of sequels, including Terminator 3: Rise of the Machines (2003), Terminator Salvation (2009), Terminator Genisys (2015), and Terminator: Dark Fate (2019), as well as a 2008 television series.
Plot
In 2029, Earth has been ravaged by the war between the malevolent artificial intelligence Skynet and the human resistance. Skynet sends the T-1000—an advanced, shape-shifting prototype Terminator made of virtually indestructible liquid metal—back in time to kill resistance leader John Connor when he is a child. To protect John, the resistance sends back a reprogrammed T-800 Terminator, a less advanced metal endoskeleton covered in living tissue.
In 1995 Los Angeles, John's mother Sarah is incarcerated in Pescadero State Hospital for her violent efforts to prevent "Judgment Day"—the prophesied events of August 29, 1997, when Skynet will gain sentience and, in response to its creators' attempts to deactivate it, incite a nuclear holocaust. John, living with foster parents, also considers Sarah delusional and resents her efforts to prepare him for his future role. The T-1000 locates John in a shopping mall, but the T-800 intervenes, coming to John's aid and enabling his escape. John calls to warn his foster parents, but the T-800 deduces that the T-1000 has already killed them. Realizing the T-800 is programmed to obey him, John forbids it to kill people and orders it to save Sarah from the T-1000.
The T-800 and John intercept Sarah as she is making an escape attempt, but Sarah flees in horror upon seeing that the T-800 looks identical to the Terminator sent to kill her in 1984. John and the T-800 persuade her to join them, and they escape the pursuing T-1000. Although distrustful of the T-800, Sarah uses its knowledge of the future to learn that a revolutionary microprocessor, being developed by Cyberdyne engineer Miles Dyson, will be crucial to Skynet's creation. Over the course of their journey, Sarah sees the T-800 serving as a friend and father figure to John, who teaches it catchphrases and hand signs while encouraging it to become more human-like.
Sarah plans to escape to Mexico with John, but a nightmare about Judgment Day convinces her to kill Dyson. She attacks Dyson in his home but realizes she cannot bring herself to kill a person and relents. John arrives and reconciles with Sarah while the T-800 convinces Dyson of the future consequences of his work. Dyson reveals that his research has been reverse engineered from the CPU and severed arm of the 1984 Terminator. Believing that his work must be destroyed, Dyson helps Sarah, John, and the T-800 break into Cyberdyne, retrieve the CPU and the arm, and set explosives to destroy the lab. The police assault the building and fatally shoot Dyson, but he detonates the explosives as he dies. The T-1000 pursues the surviving trio, cornering them in a steel mill.
Sarah and John split up to escape while the T-1000 mangles the T-800 and briefly deactivates it by destroying its power source. The T-1000 assumes Sarah's appearance and voice to lure out John, but Sarah intervenes and, along with the reactivated T-800, pushes it into a vat of molten steel, where it disintegrates. John also throws the 1984 Terminator's arm and CPU into the vat. The T-800 explains that it must also be destroyed to prevent it from serving as a foundation for Skynet. Despite John's tearful protests, the T-800 persuades him that its destruction is the only way to protect their future. Sarah, having come to respect the T-800, shakes its hand and lowers it into the vat. The T-800 gives John a thumbs-up as it is incinerated. As Sarah drives down a highway with John, she reflects on her renewed hope for an unknown future, musing that if the T-800 could learn the value of life, so can humanity.
Cast
Arnold Schwarzenegger as the Terminator: a reprogrammed Model 101 Series 800 "T-800" Terminator that is composed of human tissue over a metal endoskeleton
Linda Hamilton as Sarah Connor: a self-trained soldier who is dedicated to preventing the rise of Skynet
Edward Furlong as John Connor: Sarah's son who is destined to lead the human resistance in opposition to Skynet
Robert Patrick as T-1000: an advanced, shape-shifting prototype Terminator composed of liquid metal
Earl Boen as Dr. Peter Silberman: Sarah's doctor at Pescadero State Hospital
Joe Morton as Miles Bennett Dyson: director of special projects at Cyberdyne Systems Corporation
The film's cast also includes Jenette Goldstein and Xander Berkeley as John's foster parents Janelle and Todd Voight, Cástulo Guerra as Sarah's friend Enrique Salceda, S. Epatha Merkerson and DeVaughn Nixon as Dyson's wife Tarissa and son Danny, and Danny Cooksey as John's friend Tim. Hamilton's twin sister Leslie Hamilton Gearren appears as the T-1000 impersonating Sarah when Hamilton is also on-screen. Twins Don and Dan Stanton portray a guard at Pescadero State Hospital and the T-1000 imitating him.
Other cast members include Ken Gibbel as an abusive orderly; Robert Winley, Ron Young, Charles Robert Brown, and Pete Schrum as men who confront the T-800 in a biker bar; Abdul Salaam El Razzac as Gibbons, a Cyberdyne guard; and Dean Norris as the SWAT team leader. Michael Edwards portrays the John Connor of 2029, and Hamilton's infant son Dalton Abbott portrays John in a dream sequence. Co-writer William Wisher cameos as a man photographing the T-800 in the mall, and Michael Biehn reprises his role as resistance soldier Kyle Reese in scenes that were removed from the theatrical release.
Production
Development
The Terminator had been a surprise hit, earning $78.4million against its $6.4million budget, confirming Schwarzenegger's status as a lead actor and establishing James Cameron as a mainstream director. Schwarzenegger expressed interest in a sequel, saying, "I always felt we should continue the story... I told [Cameron] that right after we finished the first film". Cameron said Schwarzenegger had always been more enthusiastic about a sequel than he was, because Cameron considered the original a complete story.
Discussions to make a sequel stalled until 1989, in part owing to Cameron's work on other films such as Aliens (1986) and The Abyss (1989), but also because of a dispute with rights holder Hemdale Film Corporation. Hemdale co-founder John Daly, against Cameron's wishes, had attempted to alter the ending of The Terminator, nearly resulting in a physical confrontation. A sequel could not be made without Hemdale's approval as Cameron had surrendered 50% of his rights to the company to get The Terminator made. Cameron had also sold half of the remaining stake to his ex-wife Gale Anne Hurd, producer and co-writer on the first film, for $1 following their 1989 divorce. By 1990, Hemdale was being sued by Cameron, Schwarzenegger, Hurd, and special-effects artist Stan Winston for unpaid profits from The Terminator.
Schwarzenegger, aware that Hemdale was experiencing financial difficulties, convinced Carolco Pictures to purchase the film rights to The Terminator, having worked with the independent film studio on the big-budget science fiction film Total Recall (1990). Owner Mario Kassar described the rights acquisition as the most difficult deal Carolco ever conducted. He agreed to a $10million price for Hemdale's share (a sum he believed had been fabricated to ward him off) and paid Hurd $5million for hers. Prior to development, the total cost of the acquisition rose to $17million after factoring in incidental costs.
Kassar told Cameron that in order to recoup his investment, the film would proceed with or without him, and offered Cameron $6million to be involved and write the script. The film would become a collaboration between several production studios: Carolco, Le Studio Canal+, Cameron's Lightstorm Entertainment, and Hurd's Pacific Western Productions. The studio also had an existing U.S. distribution deal with TriStar Pictures, which stipulated that the film be ready for release by May 27, 1991, Memorial Day.
Writing
With a scheduled release date, Cameron had six to seven weeks to write the sequel. He approached his frequent collaborator and The Terminator co-writer William Wisher in March 1990. They spent two weeks developing a film treatment based on Cameron's vision of a relationship between John Connor and the T-800, a concept Wisher initially took to be a joke. Their treatment diverged from the "science fiction slasher" theme of the original, focusing on the unconventional family bond formed between Sarah, John, and the T-800. Cameron said this relationship is "the heart of the movie", comparing it to the Tin Man receiving a heart in The Wizard of Oz (1939).
Cameron's concept featured Skynet and the resistance each sending a T-800—both played by Schwarzenegger—into the past, one to kill John and the other to protect him. Wisher believed a fight between two identical Terminators would be boring. The pair briefly considered a larger "Super-Terminator", but found it uninteresting and adopted an early idea Cameron had for The Terminator—a liquid-metal Terminator resembling an average-sized human in contrast to Schwarzenegger's large frame. The first half of their concept concluded with the destruction of Skynet's T-800, forcing it to use the T-1000, its ultimate weapon. Although he once considered removing the T-1000 altogether, Cameron solidified it as the only antagonist. Cameron and Wisher had the T-1000 take on the appearance of a police officer, allowing it to operate with less suspicion. Wisher found it challenging to depict the T-800 as "good" without making it non-threatening at the same time. The pair decided to give it the ability to learn and develop emotions, becoming more human over time. They kept the T-800's dialogue brief, relying on the audience to infer a lot of meaning through "small bites". Its catchphrase, "Hasta la vista, baby", was something Wisher and Cameron said after their telephone calls.
Wisher developed the first half of the treatment at Cameron's home over the course of four weeks, while Cameron worked on the latter half. Many pages were removed, including a "convoluted" subplot about Dyson, and a massacre of a camp of survivalists helping Sarah. Cameron, who did not consider the budget while writing, had to cut some elaborate scenes, including a nine-minute opening that showed a time-travel machine being used in 2029. Wisher and Cameron also frequently conferred with special-effects studio Industrial Light & Magic (ILM) to determine which ideas were achievable.
Cameron and Wisher analyzed the first film to help envision each character's development and evolution. Cameron believed Sarah's knowledge of the future would isolate her, forcing her to associate with survivalists and become a self-sufficient commando. She was written to have become an emotionally cold and distant character comparable to a Terminator, especially when deciding to go after Dyson. Instead of the story beginning with Sarah, John is placed with a foster family to increase tension. John's character was inspired by the 1985 Sting song "Russians", with Cameron recalling, "I remember sitting there once, high on E... I was struck by [the lyrics] 'I hope the Russians love their children too'. And I thought... The idea of a nuclear war is just so antithetical to life itself'. That's where [John] came from". They spent three days refining the script before flying to Cannes, where Terminator 2 was announced in early May 1990. Schwarzenegger initially struggled with portions of the script, once asking "What is 'polyalloy'?" He also expressed concern about his character's non-lethal depiction, which conflicted with his action-hero persona and portrayal of the character in The Terminator. Cameron explained he wanted to defy audience expectations. Schwarzenegger requested: "Just make me cool".
Casting
Schwarzenegger became interested in reprising his role after finding the character more complex and sympathetic than in the previous film. To accurately portray a fearless and emotionless machine, he trained extensively with stunt coordinator Joel Kramer to remain unaffected by fire and explosions around him. Schwarzenegger earned $12–$15million for his involvement. Carolco had been blamed for driving up exorbitant actor salaries, having paid Schwarzenegger around $11million for Total Recall; it justified the expense by pointing to the value of its leads' wide appeal in markets outside the U.S. To lessen the immediate financial burden, Carolco paid most of Schwarzenegger's salary with a financed $12.75million Gulfstream III jet.
Cameron refused to re-cast Hamilton's role but developed plans to work around her absence if she chose not to return. Negotiations were protracted but concluded promptly after Cameron informed Carolco the script could not be finished until he knew if Hamilton would be involved. Hamilton received roughly $1million, which she described as "quite a bit more" than her earnings for The Terminator, but expressed disappointment at the pay disparity between her and Schwarzenegger. Hamilton requested that Sarah exhibit a "crazy" demeanor, explaining that after years of living with the impending doom of humanity, she believed Sarah would have transformed into an untamed entity, a warrior combined with a psychologically unstable woman. She continued: "[The T-800] is a better human than I am, and I'm a better Terminator than he is". Cameron considered giving the character a facial scar but determined that applying it daily would be difficult. Hamilton undertook extensive preparation for her role, working with a personal trainer for three hours a day, six days a week, and maintaining a strict low-fat diet to cut her body weight. She also received judo and military training from former Israeli commando Uziel Gal. Between training, filming, and spending time with her infant son Dalton, Hamilton averaged only four hours of sleep per day. She described her experience as "sheer hell" but enjoyed showing off her new physique. Hamilton's twin sister, Leslie, was also cast in scenes where two versions of Sarah appear on-screen simultaneously.
Patrick, who was living in his car, was one of several actors in their late 20s considered for the T-1000 role. Cameron wanted a lithe actor resembling a newly recruited police officer to contrast with Schwarzenegger. According to Cameron, "If the [T-800] series is a kind of human Panzer tank, then the [T-1000] series had to be a Porsche". Casting director Mali Finn believed Patrick had the "intense presence" they wanted. Patrick auditioned by acting like an emotionless hunter and later participated in a screen test to judge the way lighting worked with his skin and eyes. For his character, he drew inspiration from Schwarzenegger's performance in The Terminator and observed hunting creatures—reptiles, insects, cats, and sharks. Patrick's facial expressions were based on those of an eagle, keeping his head tilted down to imply constant forward movement. He also employed a mixture of military posture with martial arts to express a fluid motion that differed from the T-800's rigid movements. The role demanded that Patrick be lean and fast, requiring peak physical shape. He learned to sprint without displaying heavy breathing and exhaustion, and received specialized training from Gal. Weapons master Harry Lu taught Patrick to operate and reload weapons, such as the T-1000's Beretta 92FS, without looking and eventually without blinking. Singer Billy Idol was originally cast for the role before seriously injuring his leg in a motorcycle crash. In a 2021 retrospective, Cameron said Idol had an interesting aesthetic but in hindsight, he probably would not have cast him. Singer Blackie Lawless of the rock band W.A.S.P. was also considered but deemed too tall.
Furlong, among hundreds of other prospects, secured the role of young John Connor at his last audition. Cameron believed that early candidates for the role were either overexposed in other media or came from advertisement backgrounds, which trained them to be happy and perky. Furlong had no acting experience and was discovered by Finn at the Boys & Girls Club in Pasadena. Cameron described Furlong as having a "surliness, an intelligence, just a question of pulling it out". He was required to take acting lessons, learn Spanish, and be able to ride a motorcycle and repair guns.
Joe Morton believed his casting as Miles Dyson had to do with Cameron wanting a minority character to be integral to the changing of the world. Morton avoided interacting with the cast so that their on-screen relationships would seem believably distant. The role of Dyson was reduced after the preferred casting choice, Denzel Washington, declined it because the role mainly required him to act scared.
Filming
The planned three months of pre-production was reduced to meet the release schedule, leaving Cameron without the time he wanted to prepare all aspects before filming began. Over a week, he spent several hours each day choreographing vehicle scenes with toy cars and trucks, filming the results, and printing the footage for storyboard artists. There was no time to properly test practical effects before filming, so if effects did not work, the filmmakers had to work around them. Principal photography began on October 8–9, 1990, with a $60million budget. Scenes were filmed out of sequence to prioritize those requiring extensive visual effects. Schwarzenegger found this difficult because he was meant to convey subtle signs of the T-800's progressive humanity and was unsure what was fitting for each scene. Cinematographer Adam Greenberg, who also worked on The Terminator, described the greater scope of the sequel as the most daunting prospect. Where he had been able to shout instructions to his crew on the original film, he used one of 187 walkie-talkies to conduct efforts over an expansive area.
The production was arduous, in part because of Cameron, who was known for his short temper and uncompromising "dictatorial" manner. The crew made T-shirts bearing the slogan "You can't scare me—I work for Jim Cameron". Schwarzenegger described him as a supportive but "demanding taskmaster" with a "fanaticism for physical and visual detail". Even so, by the 101st day of filming, Schwarzenegger and Hamilton were frustrated by the high number of takes Cameron performed, spending five days just on close-ups of Hamilton in the Dyson home. To stay on schedule, Cameron worked through Christmas and persuaded Schwarzenegger to cancel a visit to American troops in Saudi Arabia with U.S. President George H. W. Bush to film his scenes.
The production was filmed in many locations in and around Los Angeles. The now-destroyed Corral bar in Sylmar is where the T-800 confronts a group of bikers. Location manager Jim Morris chose the Corral because it was raised above ground, allowing the scene to take place over different levels. The 1991 police beating of Rodney King took place at the same location a week after filming, and was recorded on the same videotape a spectator had used to film the biker bar scenes. On one occasion, a woman who was oblivious to the ongoing filming walked into the bar. When she asked Schwarzenegger, who was wearing only a pair of shorts, what was going on, he replied: "It's male stripper night". Executives suggested cutting the scene to save money but Cameron and Schwarzenegger refused.
The T-1000's arrival in 1995 was filmed at the Sixth Street Viaduct, and John's hacking of an ATM was filmed at a bank in Van Nuys. His foster parents' residence is in the Canoga Park neighborhood, deliberately chosen for its generic appearance. The Terminators' confrontation with John takes place inside the Santa Monica Place mall, although exterior shots were captured at the Northridge Fashion Center because there was less traffic. In the subsequent scene, Patrick's training allowed him to outrun John on his dirtbike, so the bike's maximum speed was increased. The T-1000 continues its pursuit using a truck, in a scene filmed at the Bull Creek spillway. Other locations include the Lake View Terrace hospital, which stood in for Pescadero State Hospital, and the Petersen Automotive Museum, which was used as its garage. In a 2012 interview, Hamilton said she suffered permanent partial hearing loss after not wearing earplugs during the hospital elevator scene, in which the T-800 fires a gun, as well as shell shock from months of exposure to violence, loud noise, and gunfire. Elysian Park serves as the site of Sarah's apocalyptic dream, and scenes at the Dyson home were captured at a private property in Malibu. The Cyberdyne Building's destruction was filmed at an abandoned office in San Jose that was scheduled for demolition. To bring a heightened sense of authenticity, real members of the Los Angeles Police Department's SWAT division were featured in the scene, although Cameron embellished their tactics to be visually interesting. In a spontaneous decision during Morton's death scene, Cameron opted to detonate nearby glass to examine its visual impact.
The final highway chase was filmed along the Terminal Island Freeway near Long Beach, a stretch of which was closed to traffic every night for two weeks. Scenes set during the future war of 2029 were filmed in the rubble of an abandoned steel mill in Oxnard, California, in a space that was enhanced with burned bicycles and cars from a 1989 fire at the Universal Studios Lot. Terminator 2's ending was filmed in the closed Kaiser Steel mill in Fontana, which Greenberg made appear operational mainly through lighting techniques. Despite appearing to be actively smelting steel, the mill was frigid and dangerous because of the moving machinery and high catwalks. The T-800's thumbs-up during its death was added during filming (Hamilton considered it too sentimental). Six months of filming concluded on March 28, 1991, about three weeks behind schedule.
Post-production
Terminator 2 was edited by Conrad Buff IV, Richard A. Harris, and Mark Goldblatt, who said that although there was more time to edit than on The Terminator, the window was still relatively small given the greater scope of the sequel. They described the complexity of scenes such as the final battle between the Terminators, which required a seamless combination of live-action, practical-effect shots, and CGI. After having to rush editing at the end of The Abyss, Cameron limited filming on Terminator 2 to five days a week so he could help edit the film on weekends from the start of filming.
Several scenes were deleted, in part to reduce the running time. These include Kyle Reese appearing to Sarah in a dream and encouraging her to continue fighting, Sarah being beaten in the hospital, the T-1000 killing John's dog (a scene the animal-loving Patrick disliked), John teaching the T-800 to smile and discussing whether it fears death, the T-1000 malfunctioning after being frozen in the steel mill, and additional scenes with Dyson's family. Schwarzenegger unsuccessfully rallied to retain his favorite scene, in which John and Sarah modify the T-800's CPU, allowing it to learn and evolve, and Sarah attempts to destroy the CPU but John defends the T-800. The scene was replaced with dialogue indicating that the T-800 already possesses the ability to learn. The scripted ending, filmed at the Los Angeles Arboretum in Arcadia, depicted an alternative 2029 in which an aged Sarah narrates how Skynet was never created while John, now a US Senator, plays with his daughter in a Washington, D.C., playground. To make the film more evocative and memorable, Cameron changed this scene to one in which the characters look out at the road ahead.
The production ran until about two days before the film's theatrical release. Delays were caused mainly by the rendering of shots at Consolidated Film Industries, the most difficult of which was the T-1000's death. Co-producer Stephanie Austin said the production crew worked 24-hour shifts and slept on site. The 137-minute-long release print was delivered to theaters the night before its release. There were two private pre-release screenings: one for family, friends, and crew at Skywalker Ranch, and another in Los Angeles for studio executives. Austin said, "People were stamping their feet and clapping for ten or fifteen minutes", at which point the crew knew they had succeeded. During test screenings the ending was well received, and was described as a "touching" favorite scene.
The minimum estimated cost to produce Terminator 2 had been $60million, dwarfing the budget of the first film. Cameron and Schwarzenegger said the final budget, excluding marketing, was about $70million, and the cost of making the film was about $51million. According to Carolco executives Peter Hoffman and Roger Smith, the film cost $75million before marketing and was only "modestly" over budget. Including marketing and other costs, the film's total budget is reported to have been between $94million and $102million. Kassar said he had secured 110% of the budget from advances and guarantees of $91million, including North American television ($7million) and home-video ($10million) rights, and $61million from theatrical, home-video, and television rights outside the U.S. The distribution deal earned TriStar Pictures a set percentage of the budget, an estimated $4million. News sources labeled Terminator 2 the most expensive independent film ever and predicted it would "bankrupt Carolco".
Visual effects and design
A 10-month schedule and about $15–$17million of Terminator 2's budget were allocated to special effects alone, including $5million for the T-1000 and a further $1million for stunts, at the time one of the largest-ever stunt budgets. Four main companies were involved in creating the 150 visual effects: ILM special effects supervisor Dennis Muren managed the computer-generated imagery (CGI), Stan Winston Studio handled the prosthetics and animatronics, Fantasy II Film Effects developed miniatures and optical effects, and 4-Ward Productions was responsible for creating a nuclear explosion effect. Pacific Data Images and Video Image provided additional visual effects, the former doing digital wire removal and the latter creating the "Termovision" point-of-view shots. The cost and time involved in producing CGI meant that the effect was used sparingly, appearing in 42–43 shots, alongside 50–60 practical effects.
Portraying the T-1000 was a risky endeavor, as CGI was in its infancy and there was no backup plan in place if the CGI did not work as intended or could not be composited effectively with Winston's practical effects. The computer systems needed to animate and render the T-1000 CGI cost thousands of dollars alone, but creating the character also relied on a variety of practical appliances, visual illusions, and filming techniques. A team of up to 35 at ILM was required for the five minutes of screen time the T-1000's effects appear, and the process was so complex that rendering 15 seconds of footage took up to ten days.
Music
The Terminator composer Brad Fiedel returned for the sequel, working in his garage in Studio City, Los Angeles. Film industry professionals regarded his return with concern and skepticism as they believed his style would not suit the film. Fiedel quickly realized he would not receive the finished footage until late in the production after most effects were completed, which made it difficult to commit to decisions such as use of an orchestra because, unlike ambient music, the score had to accompany the on-screen action. Fiedel and Cameron wanted the musical tone to be "warmer" due to its focus on a nobler Terminator and young John. Fiedel experimented with sounds and shared them with Cameron for feedback.
While The Terminator score had mainly used oscillators and synthesizers, Fiedel recorded real instruments and modified their sounds. He developed a library of sounds for characters such as the T-1000, whose theme was created by sampling brass-instrument players warming up and improvising. Fiedel said to the players, "You're an insane asylum. You're a bedlam of instruments." He slowed down the resulting sample and lowered the pitch, describing it as "artificial intelligent monks chanting". Cameron considered the "atonal" sound "too avant-garde", to which Fiedel replied, "you're creating something that people have never seen before, and [the score] ought to sound like something people have never heard before to support that".
TriStar asked Schwarzenegger to arrange a tie-in music video and theme song for the film. He chose to work with rock band Guns N' Roses because they were popular and there was "a rose in the movie and bloody guns". The band offered the use of "You Could Be Mine", the debut single from their album Use Your Illusion II (1991). The music video, featuring Schwarzenegger as the T-800 pursuing the band, was directed by Stan Winston, Andrew Morahan, and Jeffrey Abelson. Patrick unsuccessfully lobbied to use "Head Like a Hole" by Nine Inch Nails as the tie-in song, in part because his brother, Richard Patrick, was their tour guitarist. Wisher suggested using "Bad to the Bone" by George Thorogood & the Destroyers for the scene in which the T-800 puts on the biker clothes. Although Cameron did not like the idea, Wisher said he later found that Cameron had used the song but had forgotten it was his idea. "Guitars, Cadillacs" by Dwight Yoakam also features in Terminator 2.
Release
Context
The summer theatrical season, spanning from mid-May to early September, was expected to witness strong competition among studios. Fifty-five films were slated for release, compared with thirty-seven in 1990. Release dates underwent frequent changes as studios aimed to evade direct competition and optimize their films' chances of success to compensate for the 20% increase in film production costs since 1990. This increase was partly attributed to hefty salaries demanded by stars who also claimed a portion of the film's profits. Moreover, revenues from box-office receipts, video sales, and television-network deals were on the decline. Films scheduled for release included City Slickers, The Naked Gun 2½: The Smell of Fear, Only the Lonely, Hudson Hawk, The Rocketeer, What About Bob?, and Point Break. Terminator 2 was among the films expected to do well, along with Backdraft, Dying Young, and the year's predicted top film Robin Hood: Prince of Thieves. It was also seen as having strong international appeal. An unnamed studio executive said audiences were seeking escapist entertainment such as comedy or action, and avoiding films about less positive subject matter.
Marketing and promotion
Schwarzenegger was involved in the Terminator 2 marketing and merchandising campaign, which was estimated to be worth at least $20million. By 1991, advertising for Terminator 2 was ubiquitous, with high audience recognition; despite its U.S. R rating, which restricted the film to audiences aged 17 and over unless accompanied by an adult, merchandise was mainly aimed at children. TriStar contributed about $20million for marketing, which included a $150,000 teaser trailer that was directed by Winston and depicts the construction of a T-800. Trailers ran for six months before the film's release. TriStar incentivized cinema staff to play it frequently by offering chances to win Terminator 2-branded goods and tickets to the premiere. Fast-food restaurants and soft-drink manufacturers, such as Subway and Pepsi, also offered Terminator 2-themed foodstuffs and drinks, alongside promotional posters.
The premiere took place on July 1, 1991, at the Cineplex Odeon in Century City, Los Angeles. According to Fiedel, it was treated as a major event, unlike the premiere of The Terminator, during which the audience was skeptical or laughed at the wrong times. Celebrities in attendance included Maria Shriver, Nicolas Cage, Sylvester Stallone, Sharon Stone, Michael Douglas, and Furlong's date Soleil Moon Frye.
Box office
Terminator 2: Judgment Day opened in the United States and Canada on July 3, leading into the Independence Day holiday weekend. It had the highest-grossing Wednesday opening with $11.8million. Between Friday and Sunday, the film grossed $31.8million from 2,274 theaters, an average of $13,969 per theater, making it the number-one film of the weekend ahead of The Naked Gun 2½ ($11.6million) in its second weekend and Robin Hood: Prince of Thieves ($10.3million) in its fourth. Over the five-day holiday weekend (Wednesday to Sunday), Terminator 2 grossed $52.3million, the second-highest opening five-day total ever behind Batman's $57million in 1989. It set a record opening for an R-rated film and for an Independence Day weekend. The opening-week audience was evenly split between adults, teenagers, and children, about 25%–30% of whom were women, although TriStar said the figure was higher. The film benefited from repeat viewings by young audience members. One theater chain executive said: "...nothing since Batman has created the frenzy for tickets we saw this weekend with Terminator. At virtually all our locations, we are selling out... the word-of-mouth buzz out there is just phenomenal". Industry professional Lawrence Kasanoff said it was an "open secret" that despite the R rating, children were seeing the film, remarking: "When T2 opened, I saw kids skateboard up to the ticket window..."
It retained the number-one position in its second weekend, grossing $20.7million, ahead of the debuts of One Hundred and One Dalmatians ($10.3million) and Boyz n the Hood ($10million), and in its third weekend with $14.9million, ahead of Bill & Ted's Bogus Journey ($10.2million) and One Hundred and One Dalmatians ($7.8million). Terminator 2: Judgment Day fell to number two in its fifth weekend, grossing $8.6million against the debut of the comedy Hot Shots! ($10.8million). It remained among the top-five highest-grossing films for twelve consecutive weeks and the top ten for fifteen weeks. In total, Terminator 2: Judgment Day spent about twenty-six weeks in theaters, playing in 2,495 cinemas, and grossed $204.8million, making it the highest-grossing film of the year, ahead of Robin Hood: Prince of Thieves ($165million), Beauty and the Beast ($145million), and The Silence of the Lambs ($130million). This also made it the thirteenth-highest-grossing film of its time, behind Back to the Future (1985), and the highest-grossing R-rated film. The Los Angeles Times estimated that after the theater and distributor cuts, the box-office returns to Carolco would be well over twenty percent of the film's cost.
Outside the U.S. and Canada, Terminator 2: Judgment Day set numerous box office records. In the United Kingdom it had a record three-day opening weekend of $4.4million (and a one-week record of $7.8million) and went on to gross at least $30million. In France it grossed a record $9.5 million in its opening week (the biggest opening since Rocky IV) and $16million in two weeks. In Germany it grossed a record $8million in five days and also had a record Australian opening weekend of $1.9 million. In Thailand it was the highest-grossing western-hemisphere film ever with a gross of $1.2million. The film also performed well in Brazil and grossed at least $51million in Japan. Internationally, the film grossed about $312.1million, making it the first film to gross over $300million outside of the U.S. and Canada. Terminator 2: Judgment Day is estimated to have grossed a worldwide total of $519–$520.9million, making it the year's highest-grossing film, and the third-highest-grossing film ever, behind 1977's Star Wars ($530million) and 1982's E.T. the Extra-Terrestrial ($619million).
Reception
Critical response
Terminator 2: Judgment Day was released to general acclaim. Many reviews focused on the state-of-the-art physical, special, and make-up effects, which were roundly praised as "revolutionary" and "spectacular", with several calling the T-1000 a "technological wonder". Several publications wrote that Cameron's ability to realize cinematic action blockbusters was unmatched. Janet Maslin said that at his best, despite occasional lapses into melodrama, Cameron's work is akin to that of director Stanley Kubrick. Both Maslin and The Austin Chronicle commented on the kindness and compassion in the film. The Austin Chronicle contrasted this with the lack of a moral message in The Terminator, and Peter Travers described the film as a "visionary parable", but they, alongside others, criticized Terminator 2's "muddled" message about protecting the value of human life and peace by using extreme violence to prevent the use of nuclear weapons, war, and technological reliance.
Reviewers generally agreed that the narrative early in the film is stronger than that near the end. Owen Gleiberman said the first hour has a genuine "emotional pull", and according to Roger Ebert, the initial concept of a boy finding a father figure in a Terminator that is learning to be human is "intriguing", but Gleiberman said the narrative weakens once Hamilton's character joins the group. Travers and Richard Corliss wrote that it stumbles after hours of relentless action and a "conventional climax". Despite this observation, Gleiberman praised the final battle between the T-1000 and the protagonists. Empire's review and Terrence Rafferty found the film's narrative less satisfying and idea-driven than that of The Terminator. Gleiberman said that despite being an effective and witty thriller, Terminator 2: Judgment Day comes across as an expensive B movie when compared with "visionary spectacles" such as the Mad Max series and RoboCop (1987). Kenneth Turan said Terminator 2's action scenes succeed without the extreme gore and violence of RoboCop.
Ebert and Maslin, among others, appreciated the twist on Schwarzenegger's public action-hero persona by making him a hero who does not kill his enemies. David Ansen and Gleiberman found humor in the T-800's non-lethal methods and efforts to become more human-like. Maslin and Hal Hinson agreed that, as in The Terminator, Schwarzenegger's role is perfect for his acting abilities. Hinson said Schwarzenegger portrayed more humanity as a machine than he did when portraying normal people. In contrast, Empire suggested that the change was a concession to Schwarzenegger's young fans, and Travers chose the T-800's death as a "cornball" scene that is out of place for the actor and film.
Several reviewers praised the T-1000 character for the combination of Patrick's "chilling" expressionless performance and the advanced special effects, which create an implacable, "showstopping" villain. Empire called the character "one of the great monsters of the cinema". Gleiberman said the character's absence from much of the film's second act is to the film's detriment, and Hinson wrote that the T-1000 lacks any "soul" and thus a way for the audience to identify with it. Critics generally agreed Hamilton portrays a "fierce" female hero with an impressive physique that lets her outshine another action hero, Sigourney Weaver's Ellen Ripley in Cameron's Aliens (1986). Other publications found Sarah Connor's narrations about peace to be "heavy-handed", overused, and "unintentionally amusing". Furlong was praised for giving a natural performance at a young age, and Hinson wrote that despite limited screentime, Morton made an impression. Audiences polled by CinemaScore gave the film an average grade of "A+" on a scale of A+ to F.
Accolades
At the 1992 Saturn Awards, Terminator 2: Judgment Day received awards for Best Science Fiction Film, Best Director (Cameron), Best Actress (Hamilton), Best Performance by a Younger Actor (Furlong), and Best Special Effects, as well as a nomination for Best Actor (Schwarzenegger). It also won Favorite Motion Picture at the 18th People's Choice Awards. At the 45th British Academy Film Awards, Terminator 2 received awards for Best Sound (Lee Orloff, Tom Johnson, Gary Rydstrom, Gary Summers) and Best Special Visual Effects (Stan Winston, Dennis Muren, Gene Warren Jr., Robert Skotak), as well as a nomination for Best Production Design (Joseph Nemec III).
At the 64th Academy Awards, Terminator 2 won four awards: Best Makeup (Winston and Jeff Dawn), Best Sound (Orloff, Johnson, Rydstrom, and Summers), Best Sound Effects Editing (Rydstrom and Gloria S. Borders), and Best Visual Effects (Muren, Winston, Warren Jr., and Skotak), and received nominations for Best Cinematography (Adam Greenberg) and Best Film Editing (Conrad Buff, Mark Goldblatt, and Richard A. Harris). It was the first film to win an Academy Award when its predecessor had not been nominated. At the 1992 MTV Movie Awards it received six awards, including Best Movie, Best Action Sequence ("L.A. Freeway Scene"), Best Breakthrough Performance (Furlong), Best Female Performance (Hamilton), and Best Male Performance (Schwarzenegger), along with nominations for Best Song From a Movie ("You Could Be Mine"), Best Villain (Patrick), and Most Desirable Female (Hamilton). The film also won a Hugo Award for Best Dramatic Presentation (Cameron and Wisher).
Post-release
Aftermath
Terminator 2: Judgment Day launched the careers or raised the profiles of its principal actors. According to industry professionals, Schwarzenegger became the top international star, ahead of actors such as Mel Gibson and Tom Cruise. It also marked the start of a lasting friendship between Schwarzenegger and Cameron, who formed a "midlife crisis motorcycle club" and reunited for the action film True Lies (1994). Cameron and Hamilton began a romantic relationship in 1991, married in 1997, and later divorced. In 1992 Cameron was given a five-year, $500million contract by 20th Century Fox to produce twelve films.
Furlong became a highly sought-after actor, and Patrick found dealing with his new-found recognition difficult as people asked him to impersonate the T-1000.
Despite the film's success, Carolco reported 1991 losses of $265.1million, caused by the financial problems of its other films and subsidiaries. Support from investors failed to prevent the studio from filing for bankruptcy in 1995, and its assets, including Terminator 2, were sold to Canal Plus for $58million.
Home media
On December 11, 1991, Terminator 2: Judgment Day was released on VHS and LaserDisc. It was a popular rental in the U.S. and Canada, with a record 714,000 copies shipped to retailers, and it became the best-selling rental by mid-January 1992. Varèse Sarabande released Fiedel's score, which spent six weeks on the Billboard 200 record chart, peaking at number 70. The theme song "You Could Be Mine" peaked at number 29 on the U.S. Billboard Hot 100, and performed well in the United Kingdom, Australia, Germany, Spain, and Canada.
A "Special Edition" LaserDisc was released in 1993, featuring a 15-minute extended version of the film that restored deleted scenes, interviews with cast and crew, storyboards, designs, and unrestored deleted scenes. Cameron stated he did not use the label "Director's Cut" because he considered the theatrical releases to be definitive and the extended versions as opportunities to restore "depth and character made omissible by theatrical running time". The theatrical version was released on DVD in 1997. In 2000, an "Ultimate Edition" DVD was released, containing the theatrical and "Special Edition" cuts, and a new "Extended Cut", containing a scene of the T-1000 inspecting John's bedroom, and the alternate ending. Terminator 2 special-effects coordinator Van Ling supervised the release. The "Extreme Edition" was released in 2003, featuring the theatrical and "Special Edition" cuts, a remastered 1080p image, Cameron's first commentary, and a documentary about the film's influence on special effects.
Terminator 2 was released on Blu-ray in 2006, followed in 2009 by a "Skynet Edition" that contains the theatrical and "Special Edition" cuts, and commentaries with the cast and crew. This release includes a limited collector's set containing the Blu-ray, the "Ultimate" and "Extreme" editions on DVDs, a digital download version, all extant special features, and a T-800 skull bust. A 4K Ultra HD Blu-ray version that includes a standard Blu-ray and digital version was released in 2017. This release also offered a collector's option that includes one of 6,000 life-size replicas of a T-800 skeleton forearm, each signed by Cameron and individually numbered, the soundtrack, the theatrical, "Special", "Extended", and 2017 3D remaster cuts, and "Reprogramming the Terminator", a documentary that includes interviews with Schwarzenegger, Cameron, Furlong, and others.
Other media
Terminator 2: Judgment Day was marketed with numerous tie-in products, including toys, puppets, trading cards, jigsaw puzzles, clothing, a perfume named "Hero", and a novelization by Randall Frakes that expands on the film's ending. In 1991 Marvel Comics adapted the film into a comic book, which was followed by expansions of the Terminator 2 narrative, including Malibu Comics's "Cybernetic Dawn" and "Nuclear Twilight" (1995–1996), Dynamite Entertainment's "Infinity" and "Revolution" (2007), and the T2 novel series by S. M. Stirling in the early 2000s. Several video game adaptations of Terminator 2 were published, including a pinball machine and an arcade game in 1991. The arcade game was popular enough to be ported to home consoles as T2: The Arcade Game. Multiple studios developed widely differing adaptations for home consoles, including Terminator 2 for Game Boy and Terminator 2 for the Nintendo Entertainment System (NES). A later adaptation was developed for the Sega Genesis and Super Nintendo Entertainment System, and a different game was published for home computers. Merchandise for Terminator 2: Judgment Day was estimated to have generated $400million in sales by 1997.
In 1996, T2-3D: Battle Across Time, a live-action attraction, was opened at Universal Studios Florida, and later at locations in Hollywood and Japan. The twenty-minute attraction was co-written and directed by Cameron and cost $60million to produce, including live-action stunts and a $24million, 12-minute 3D film starring Schwarzenegger, Hamilton, Patrick, and Furlong as their in-world characters, making it, per minute, the most expensive film produced at the time. In it, Sarah and John attempt to stop Cyberdyne, which has developed Skynet. They are confronted by the T-1000 but saved by the T-800, which returns to 2029 with John to defeat Skynet and its latest creation, the T-1000000.
3D remaster
Cameron oversaw a year-long 3D remaster and subsequent theatrical re-release of Terminator 2: Judgment Day in August 2017, saying: "If you've never seen it, this'll be the version you want to see and remember". Cameron made visual modifications to fix errors that had bothered him: he added windshield glass to the T-1000's truck (the glass fell out during the stunt fall yet reappears in later shots), concealed the obvious use of stunt doubles for Furlong and Schwarzenegger during the same scene, obscured more of Patrick's nudity during his introduction, and brightened the visuals. The 3D remaster's theatrical release was seen as a disappointment, earning about $562,000 in its debut across 386 theaters, compared with the 3D re-release of Cameron's Titanic in 2012, which fetched $17million.
Themes and analysis
Themes
A central theme of Terminator 2: Judgment Day is the relationship between John Connor and the T-800, which serves as a surrogate for the father (Kyle Reese) he never knew. Cameron said: "Sure, there's going to be big, thunderous action sequences, but the heart of the movie is that relationship", comparing it to the Tin Man getting a heart in The Wizard of Oz. As with Cameron's earlier film Aliens, Terminator 2 focuses on compassion and parental figures, depicting the T-800 as a relentless protector and father figure to John, against the equally relentless T-1000. The T-800 is designed to emulate humans for infiltration purposes, but as it grows and evolves, its emotions become real and it learns from John to feel grief. The T-800 chooses to sacrifice its life to ensure the survival of everyone else. In 1991 essayist Robert Bly wrote that older men were not offering suitable role models for young men, and in Terminator 2, Sarah denounces the many men in her past who failed to be a father for John, except for the T-800. Once its role is complete, the T-800 leaves John for his own good after stating that it lacks the emotions John must rely on.
While John teaches the T-800 about humanity, his biological mother Sarah has become less human because of her knowledge about the future. Cameron said: "She's a sad character—a tragic character... she believes that everyone she meets, talks to, or interacts with will be dead very soon". This theme of machine-like humans links with Cameron's and Wisher's choice to make the T-1000 appear as a police officer, because thematically they believed it represents humans who should have empathy for others becoming more machine-like and detached from their emotions. The SWAT team at Cyberdyne shoots Dyson, an African American, without warning. Cinephilia described Dyson as the most human character in the film, an intelligent, optimistic family man who represents real-world encounters between police forces and people of color, in contrast to their encounter with the Caucasian T-800, during which they warn him before opening fire.
Following her escape from the state hospital, Sarah appears to embrace John but is actually checking him for injuries, forgoing any emotional attachment for the practicality of ensuring his survival and bringing about his destiny as a future leader. The T-800 is portrayed as a better parent than Sarah, offering him undivided attention while Sarah remains distant and focused on the future rather than the present. Philosophy professor Richard T. McClelland notes that Sarah's acceptance of the T-800 as John's surrogate father is such that she leaves it in control of John when she drives away to kill Dyson. Sarah's dream about the nuclear holocaust that will kill six billion people, including her son, incites her to kill Dyson before he can complete the work that will bring about Skynet, but when the moment comes, she is unable to fully forsake her humanity and murder him with no emotion. Cameron described this as a question of humanity's worth if we abandon it to win the battle for its existence. In contrast with the bleak, nihilistic themes of the first film, Terminator 2 emphasizes the concept of free will and the value of human life. Schwarzenegger quoted the film's line "no fate but what we make", saying people have control over their own destinies.
Terminator 2 also comments on the use of violence. On its release, reviewers were critical of Terminator 2: Judgment Day's message about preserving peace through violence. Owen Gleiberman stated that "reckless indifference" to human life is intrinsic to the film, and that the T-800's practice of maiming people rather than killing them potentially condemns its victims to a life of pain. Cameron described the film as the "world's most violent anti-war movie", and said it is about people struggling with their own violent natures. In particular, Cameron had been concerned by the original antagonist T-800's status as a cultural icon and power fantasy as a lethal, unstoppable force of strength and power, and chose to redefine it in Terminator 2, retaining the power fantasy without taking lives. Cinephilia said it is not morally possible to recover from killing people, so Terminator 2 is about redeeming the T-800 and Sarah.
Analysis
According to Professor Jeffrey A. Brown, there was a growth of female-led action films in the wake of Aliens's success. Brown believes this reflected the increase in women assuming non-traditional roles, and the division between professional critics—who perceive a masculinization of the female hero—and audiences who embrace such characters regardless of gender. The hyper-masculine heroes played by Schwarzenegger, Stallone, and Jean-Claude Van Damme were replaced with independent women who are capable of defending themselves and defeating villains in films such as Terminator 2 and The Silence of the Lambs. Brown said these female characters often perform stereotypically male actions and have muscular physiques rather than feminine, "soft" bodies. He considers Hamilton's undershirt to be symbolic of typically male action heroes such as John McClane and John Rambo, as well as of women displaying masculine traits, such as Rachel McLish in Aces: Iron Eagle III (1992).
Despite the emphasis on strong femininity, Hamilton's character remains secondary to Schwarzenegger's. Sarah's efforts to defeat the T-1000 fall short until the last-minute intervention of the T-800. Author Victoria Warren said this allows the female character to be strong enough to be admired but not strong enough to undermine the male protagonist's masculinity. Professors Amanda Fernbach and Thomas B. Byers said the rigid form of the T-800 represents reactionary masculinity that is in direct opposition to the gender-bending T-1000, which represents a post-modern, fluid nature that is outside traditional norms and in opposition to patriarchy and the preservation of the traditional family.
Author Mark Duckenfield said Terminator 2: Judgment Day can be seen as an unintended allegory for the decline of United States industries against successful Japanese technology firms, with the cutting-edge T-1000 representing Japan against the older, less-advanced T-800. The U.S. industries, which were sometimes seen as villains during the economic boom of the 1980s, are seen as more sympathetic in the face of obsolescence, just as the T-800 is presented as friendlier and still powerful but no longer overwhelmingly so. Duckenfield considers the final scene, which takes place in a steel mill—a place of American industry—symbolic. According to Warren, Terminator 2 reflects Cold War American values that emphasized principles of American culture, in particular individualism and rejection of government intervention. The institutions that the film's protagonists should be able to rely on, such as the government, the police, and technology, are the ones attempting to stop them because they do not believe in the protagonists' doomsday prophecy.
Legacy
Cultural influence
Terminator 2 is considered a highly influential film, setting a benchmark for sequels, action set pieces, and visual effects. Cameron and special-effects supervisor Dennis Muren said the groundbreaking special effects in Terminator 2 demonstrated the possibilities of computer-generated effects, and that without them, effects-focused films such as Jurassic Park (1993) would not have been possible. Various publications have referenced Terminator 2's influence on special effects, describing it as the most important special-effects film since Tron (1982) and as marking the start of the era of reliance on CGI effects in films such as Jurassic Park and The Matrix (1999). In 2007 the Visual Effects Society, an entertainment-industry organization of visual-effects practitioners, named Terminator 2 the 14th-most-influential visual-effects film of all time, and the T-1000 is listed by Guinness World Records as the "first major blockbuster movie character generated using computers". According to The Guardian, the film's "groundbreaking" effects led to "CGI laziness", a reliance on computer graphics over practical effects, stunts, and craft. A 2014 Entertainment Weekly article said Terminator 2 contributed to the contemporary Hollywood high-budget science fiction epic, and to a reliance on turning films into franchises targeted at young audiences and broad demographics. Den of Geek described it as one of the most influential blockbusters since the thriller Jaws (1975). Several filmmakers and creative leads have named it as an influence on their work, including Steven Caple Jr., Ryan Coogler, Kevin Feige, and Hideo Kojima, and it was the favorite film of Russian political prisoner Alexei Navalny.
With a $94–102 million budget, Terminator 2: Judgment Day was the most expensive film made up to that time, and it remains Schwarzenegger's highest-grossing film. Alongside her appearance in The Terminator, Hamilton's Sarah Connor became regarded as one of the greatest and most influential cinematic female action heroes and an iconic character. Patrick's T-1000 is considered one of the most iconic cinematic villains. He made cameo appearances as the T-1000 in Wayne's World (1992) and Schwarzenegger's Last Action Hero (1993). In Last Action Hero, Stallone replaces Schwarzenegger as the T-800 on the Terminator 2 poster. The T-800's line "Hasta la vista, baby" is considered an iconic piece of movie dialogue and is often quoted; Schwarzenegger also used it in speeches during his political career. Terminator 2: Judgment Day has been referenced in a variety of media, including television, films, and video games. The biker bar scene was recreated for a 2015 advertisement for the video game WWE 2K16, featuring Schwarzenegger, in which the bar patrons were replaced with WWE wrestlers. In 2023, the Library of Congress selected Terminator 2: Judgment Day for preservation in the National Film Registry as being "culturally, historically, or aesthetically significant".
Retrospective assessments
Since its release, Terminator 2: Judgment Day has been assessed as one of the best action, science fiction, and sequel films ever made. Terminator 2 and The Terminator are generally considered the standout films in the Terminator franchise, with each taking turns in the top spot. Some publications have listed Terminator 2 among the greatest films made.
In 2001 the American Film Institute (AFI) ranked Terminator 2 number 77 on its 100 Years... 100 Thrills list, recognizing the "most heart-pounding movies", and the 2003 list of the 100 Best Heroes & Villains ranked the T-800 character as the forty-eighth-best hero. The 2005 list of the 100 Best Movie Quotes listed the T-800 dialogue line "Hasta la vista, baby" as the 76th-best quotation, and the 2008 AFI's 10 Top 10 named Terminator 2 as the eighth-best science fiction film. To mark Schwarzenegger's 75th birthday in 2022, Variety listed Terminator 2: Judgment Day as the best film in his 46-year career.
Review aggregator Rotten Tomatoes reports an approval rating for the film, based on aggregated critics' reviews, along with an average score. The website's critical consensus says: "T2 features thrilling action sequences and eye-popping visual effects, but what takes this sci-fi/action landmark to the next level is the depth of the human (and cyborg) characters". The film has a score of 75 out of 100 on Metacritic based on 22 critics' reviews, indicating "generally favorable reviews". During Terminator 2's 30th anniversary in 2021, Cameron, among others, said that despite its use of older models of cars, the film's visuals still compare well with those of contemporary films. Cameron also said Terminator 2 remains relevant because artificial intelligence had become a ubiquitous reality rather than a fantasy. In 2006 Terminator 2 was listed at number 32 on Film4's 50 Films to See Before You Die list, and it is included in the film reference book 1001 Movies You Must See Before You Die. Rotten Tomatoes lists it as one of 300 essential movies and at number 123 on its list of 200 essential movies. Popular Mechanics and Rolling Stone jointly listed it alongside The Terminator as the third-best time-travel film ever made. Rolling Stone's reader-voted list of the best sequels ranks Terminator 2 second, behind The Godfather Part II (1974), and Empire readers ranked the film 17th on its 2017 "100 Greatest Movies" list.
Sequels
Cameron said he had no intentions for further sequels, believing that Terminator 2 "brings the story full circle and ends. And I think ending it at this point is a good idea". Wisher and Cameron wrote the script with the intention of leaving no option for a sequel. Even so, four sequels followed: Terminator 3: Rise of the Machines (2003), Terminator Salvation (2009), Terminator Genisys (2015), and Terminator: Dark Fate (2019), though none replicated the success of The Terminator or Terminator 2.
Schwarzenegger returned for all but Terminator Salvation, while Cameron and Hamilton returned only for Dark Fate, a direct sequel to the events of Terminator 2. Although better received critically than the other post-Terminator 2 sequels, Dark Fate is also considered a failure. Analysts blamed audience disinterest on the diminishing quality of the series since Terminator 2 and on repeated attempts to reboot the series. Fans also criticized Dark Fate's opening scene, in which a T-800 kills Furlong's teenage John Connor; the entertainment website Collider wrote that this retroactively damages the ending of Terminator 2. A television series, Terminator: The Sarah Connor Chronicles (2008–2009), also takes place after the events of Terminator 2 and ignores the events of Terminator 3 and the later sequels.
Notes
References
Works cited
Journals
Magazines
External links
1990s American films
1990s chase films
1990s dystopian films
1990s English-language films
1990s films about time travel
1990s science fiction action films
1991 films
1991 science fiction films
3D re-releases
American chase films
American films about revenge
American post-apocalyptic films
American science fiction action films
American sequel films
Apocalyptic films
Articles containing video clips
BAFTA winners (films)
Carolco Pictures films
English-language action thriller films
English-language science fiction action films
Fiction about nanotechnology
Fictional portrayals of the Los Angeles Police Department
Films about cyborgs
Films about mother–son relationships
Films about shapeshifting
Films about single parent families
Films about technological impact
Films about World War III
Films directed by James Cameron
Films produced by James Cameron
Films scored by Brad Fiedel
Films set in 1995
Films set in 1997
Films set in 2029
Films set in California
Films set in Los Angeles
Films set in psychiatric hospitals
Films set in the future
Films shot in California
Films shot in New Mexico
Films that won the Academy Award for Best Makeup
Films that won the Best Sound Editing Academy Award
Films that won the Best Sound Mixing Academy Award
Films that won the Best Visual Effects Academy Award
Films using stop-motion animation
Films with screenplays by James Cameron
Films with screenplays by William Wisher Jr.
Girls with guns films
Hugo Award for Best Dramatic Presentation–winning works
Lightstorm Entertainment films
Nickelodeon Kids' Choice Award–winning films
StudioCanal films
Techno-thriller films
Judgment Day
TriStar Pictures films
United States National Film Registry films
Saturn Award–winning films | Terminator 2: Judgment Day | Materials_science | 14,073 |
36,948,506 | https://en.wikipedia.org/wiki/Library%20portal | A library portal is an interface to access library resources and services through a single access and management point for users: for example, by combining the circulation and catalog functions of an integrated library system (ILS) with additional tools and facilities.
Definition
A library portal is defined as "a combination of software components that unify the user experience of discovering and accessing information" in contrast to a "single technology" to provide "services that support discovery, access and effective use of information."
Major elements
In addition to the basic functions of access to the library catalog, and a user's subscription records, significant elements of a library portal normally include:
"Metasearching tools, browsable interfaces, and online reference help", which aid in the discovery process
Links to full-text articles via OpenURL
Interlibrary loan (ILL) or document delivery, for material the library does not own
Citation management software, user preference services, "knowledge management tools"
More recently, the focus has been on the discovery goal, which has led to even more difficulties in defining a library portal. The terms "discovery tools", "discovery services", "next-generation discovery tool", and "next-generation OPAC" are often used interchangeably.
Current market
The focus on discovery tools has led to increased competitors in the discovery services market; the competitors that existed in the library portal market have also shifted their focus to this particular function.
A list of competitors in the current library portal market who have recently been awarded contracts by various libraries for their entire portal includes:
Axiell Arena: contract with The University of Gävle
Axiell Calm: contract with Denmark's Roskilde Libraries for archive management
BIBIS Library Portal: contract with ROC Mondriaan in The Hague as well as the library of the central bank of the Netherlands, the library of Provincie Zuid-Holland in South Holland, and at Dutch law firm Ploum Lodder Princen.
ExLibris Primo: contract with Hesburgh Libraries of Notre Dame. Library Technology refers to this "discovery and delivery solution" as a "library portal".
MetaLib Library Portal, ExLibris: contract with NASA's Johnson Space Center
By contrast, the following list highlights contracts signed by libraries for specific discovery-service tools, mostly at more recent dates:
EOS.Web OPAC Discovery, EOS International: while it is unclear which of EOS's services were purchased by their clients, the benefits of EOS.Web OPAC Discovery grew significantly when EOS International signed an ILL agreement with the New York Law Institute, which will allow EOS clients to "easily request NYLI union catalog items from their EOS.Web OPAC". EOS International's press releases do not specify which service was purchased but only mention the names of new clients.
Summon, Serials Solutions: contract with University of Texas at Austin Libraries, University of Connecticut Libraries, University of Illinois at Chicago Library, California State University System, Syracuse University Library, University of North Carolina at Chapel Hill Library, Lund University Libraries, Helmut Schmidt University, Peking University, University of the Free State, Cornell University Library, Brown University Library, Kyushu University Library
Ex Libris Primo and SFX OpenURL: contract with Online Dakota Information Network (Library Technology, March 27, 2012); Silesian University of Technology, Poland;
EBSCO Discovery Service: contract with Seton Hall University; Massey University Library, New Zealand; Warsaw University, Poland; Bielefeld University, Germany; Bibliothèque nationale de France
Challenges
When building a portal for a library, one of the challenges discussed by Morgan is communication: the building of a portal requires consensus with regards to what should be included. Another challenge is ensuring a user-centered design for the portal. This involves conducting surveys, focus group interviews, and usability studies – all of which can be seen as time-consuming. Additionally, compatibility with the hosting institution is critical. Finally, the question of whether a library should go with open source software or commercial products is always a point of contention.
Standards
There are no accepted standards for library portals. The only standards in the literature are the more general search and retrieval standards, including Z39.50 and ZING (Z39.50-International: Next Generation), the Open Archives Initiative Protocol for Metadata Harvesting, and OpenURL.
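Of these standards, OpenURL is the one a portal most visibly implements when generating context-sensitive full-text links for a citation. As a rough illustration (not drawn from the sources above), the following Python sketch builds an OpenURL 1.0 (Z39.88-2004) link in the key/encoded-value (KEV) journal format; the resolver base URL and the citation values are invented placeholders, since each institution runs its own link resolver.

from urllib.parse import urlencode

RESOLVER_BASE = "https://resolver.example.edu/openurl"  # hypothetical link resolver

def build_openurl(atitle, jtitle, issn, volume, spage, date):
    """Build an OpenURL 1.0 (Z39.88-2004) link for a journal article
    using the KEV (key/encoded-value) journal format."""
    params = {
        "url_ver": "Z39.88-2004",                       # OpenURL version
        "ctx_ver": "Z39.88-2004",                       # ContextObject version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # referent is a journal item
        "rft.genre": "article",
        "rft.atitle": atitle,  # article title
        "rft.jtitle": jtitle,  # journal title
        "rft.issn": issn,
        "rft.volume": volume,
        "rft.spage": spage,    # start page
        "rft.date": date,
    }
    return RESOLVER_BASE + "?" + urlencode(params)

# Invented example citation:
print(build_openurl("Library portals revisited", "Journal of Library Automation",
                    "1234-5678", "12", "41", "2005"))

The portal embeds such links next to search results; the resolver then decides, based on the library's subscriptions, where to send the user.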
As a result of the lack of standards, and since customization is required in a library portal, individual institutions decide what they expect their portal to look like, and what services it will provide. For example, Harvard University is currently conducting a library portal project, which will begin implementation during the summer of 2012. They have identified their own list of criteria, which naturally differs substantially from the needs of other institutions. The various general areas that the committee has looked at include: content, user experience, features and capabilities, infrastructure and security, and search and discovery. It is uncertain which areas will be selected as part of the Phase I implementation of the portal.
Relationship between OPACs and library portals
The online public access catalog (OPAC) is a basic module of the library's integrated library system. Earlier, the OPAC was limited to searching physical texts, and sometimes digital copies, with only limited special features. Caplan argues that OPACs are in the process of being replaced by newer "discovery tools" allowing more customization. Yang and Hofmann suggest that vendors see money in building either separate discovery tools or next-generation OPACs to be purchased as an add-on feature. A problem with vocabulary arises here. Yang and Wagner (2010, in Yang and Hofmann, 2011) refer to discovery tools by many names, including "stand-alone OPAC, discovery layer, and next-generation catalog [sic.]" This contrasts with Bair, Boston, and Garrison, who differentiate between next-generation catalogues and web-scale discovery services. Despite any confusion, it is clear that the OPAC as it currently stands is outdated and will be replaced by more modern, user-friendly tools. The next-generation OPAC as described by Yang and Hofmann will ideally have the following 12 features (although not all features are currently available in any single discovery product):
Single point of entry for all library resources
State-of-the-art web interface
Enriched content
Faceted navigation
Simple keyword search box with a link to advanced search on every page
Relevancy ranking
Spell-checking
Recommendations/related materials
User contribution
RSS feeds
Integration with social networking sites
Persistent links
See also
Digital information
Library science
Web portal
References
External links
History of Library Automation Wikiversity
Library automation
Library science | Library portal | Engineering | 1,356 |
2,139,915 | https://en.wikipedia.org/wiki/Mason%27s%20mark | A mason's mark is an engraved symbol often found on dressed stone in buildings and other public structures.
In stonemasonry
Regulations issued in Scotland in 1598 by James VI's Master of Works, William Schaw, stated that on admission to the guild, every mason had to enter his name and his mark in a register. There are three types of marks used by stonemasons.
Banker marks were made on stones before they were sent to be used by the walling masons. These marks served to identify the banker mason who had prepared the stones to their paymaster. This system was employed only when the stone was paid for by measure, rather than by time worked. For example, the 1306 contract between Richard of Stow, mason, and the Dean and Chapter of Lincoln Cathedral, specified that the plain walling would be paid for by measure, and indeed banker marks are found on the blocks of walling in this cathedral. Conversely, the masons responsible for walling the eastern parts of Exeter Cathedral were paid by the week, and consequently few banker marks are found on this part of the cathedral. Banker marks make up the majority of masons' marks, and are generally what are meant when the term is used without further specification.
Assembly marks were used to ensure the correct installation of important pieces of stonework. For example, the stones on the window jambs in the chancel of North Luffenham church in Rutland are each marked with a Roman numeral, directing the order in which the stones were to be installed.
Quarry marks were used to identify the source of a stone, or occasionally its quality.
In Freemasonry
Freemasonry, a fraternal order that uses an analogy to stonemasonry for much of its structure, also makes use of marks. A Freemason who takes the degree of Mark Master Mason will be asked to create his own Mark, as a type of unique signature or identifying badge. Some of these can be quite elaborate.
Gallery of mason's marks
See also
Benchmark (surveying)
Builder's signature
Carpenter's mark
House mark
Merchant's mark
References
Further reading
External links
Examples of Mason's marks
Site detailing Mason's Marks in Scotland
Freemasonry
Masonic symbolism
Stonemasonry
Symbols
Inscriptions | Mason's mark | Mathematics,Engineering | 463 |
9,804,476 | https://en.wikipedia.org/wiki/Itching%20powder | Itching powder is a powder or powder-like substance that induces itching when applied onto human skin. This is usually done as a practical joke or prank to an unsuspecting victim.
Description and uses
The cause of the irritation can be mechanical, such as products containing ground rose hips. Another common ingredient is Mucuna pruriens, a type of legume that produces seedpods coated with thousands of detachable spicules (needle-like hairs). The spicules contain an enzyme, mucunain, that causes severe itching, and they have been sold commercially as itching powder. Mucuna pruriens has been used to test the efficacy of anti-itch drugs.
The term "itching powder" is colloquial; there is no one specific source of the powder. For the safety of the maker and of the victim, gloves, dust masks, and glasses are worn, as itching powder is a mouth- and eye-irritant, and caution is strongly encouraged whenever handling the processed powder. Rose hips contain prickly hairs that are used as the active ingredient, whereas the body (rather than the wing) of the samara of the bigleaf maple is covered with spiny hairs that cause skin irritation and are used to make itching powder.
Itching powder was created from Mucuna pruriens in the early 19th century as a cure for lost feeling in the epidermis. When a person lost feeling in their skin through conditions such as paralysis, the powder (mixed with lard to form an ointment) was used as a local stimulant believed to treat the condition.
Gallery
See also
Sneezing powder
List of practical joke topics
References
Practical joke devices
Powders | Itching powder | Physics | 362 |
17,887,935 | https://en.wikipedia.org/wiki/Polyvinyl%20chloride%20acetate | Polyvinyl chloride acetate (PVCA) is a thermoplastic copolymer of vinyl chloride and vinyl acetate. It is used in the manufacture of electrical insulation, of protective coverings (including garments), and of credit cards and "vinyl" audio recordings.
References
Acetate esters
Copolymers
Organochlorides
Vinyl polymers | Polyvinyl chloride acetate | Chemistry | 75 |
44,246,644 | https://en.wikipedia.org/wiki/Henny%20van%20der%20Windt | Hendrik Johannes (Henny) van der Windt (born 22 August 1955, in Vlaardingen) is a Dutch associate professor at the Rijksuniversiteit Groningen, specializing in the relationship between sustainability and science, in particular the relationships between nature conservation and ecology, and between energy technologies, local energy initiatives, and the energy transition.
Youth and study
Van der Windt grew up in Vlaardingen where he went to high school ('Hogere Burgerschool-B'). He was active in the regional environmental group Centraal Aksiekomitee Rijnmond and various student committees on environmental protection. After high school he studied biology at the Rijksuniversiteit Groningen (1972-1981).
PhD and academic position
He received his doctorate in 1995 with his PhD dissertation "En dan: wat is natuur nog in dit land?: natuurbescherming in Nederland 1880-1990" ("And then: what is nature still in this country? Nature conservation in the Netherlands 1880–1990"), with chapters on the rise of nature conservation, tensions between agriculture and nature conservation, forestry, ecological restoration, and the management of the Wadden Sea.
At that time he worked as a junior scientist and lecturer at the Biology Department of the University of Groningen. After his doctorate, he worked for several years as a researcher (post-doc) in Groningen within the Ethics and Policy research programme of NWO. Around 2000 he became associate professor at the Science & Society Group (later Integrated Research on Energy, Environment and Society (IREES)) of the University of Groningen.
Research and education
He studied science-society interactions concerning genomics, food, ecological restoration, energy and sustainability, combining approaches and insights from biology, environmental science, environmental history and science and technology studies.
His education tasks include various courses such as second year Bachelor programmes Science & Society, the minor Future Planet Innovation and courses of the mastertrack Science, Business & Policy and the master Energy and Environmental sciences.
Publications
In addition to scientific papers, journalistic articles and policy reports Van der Windt was author or editor of several books or chapters. A selection:
1995. En dan: wat is natuur nog in dit land?: natuurbescherming in Nederland 1880-1990. Boom.
2001. Een Spiegel der Wetenschap: 200 Jaar Koninklijk Natuurkundig Genootschap te Groningen. With Adriaan Blaauw, Bert Boekschoten, Ulco Kooystra, Dick Leijenaar, Franck Smit, Kees Wiese & Marten van Wijhe. Profiel.
2005. Harmony or diversity? In: Nature and Art: The Hoge Veluwe. Waanders.
2006. Een groene voorzitter, raadheer en bruggenbouwer: prof. H.J.L. Vonhoff als voorzitter van NP De Hoge Veluwe en de Natuurbeschermingsraad. With Elio Pelzers. Waanders.
2008. Tussen dierenliefde en milieubeleid. Academia Press.
2012. Knocking on Doors: Boundary Objects in Ecological Conservation and Restoration. With Sjaak Swart. In: Sustainability Science, The Emerging Paradigm and the Urban Environment, Springer.
2012. Parks without Wilderness, Wilderness without Parks? In: Civilizing Nature, National Parks in Global Historical Perspective. Berghahn.
2019. Community Energy Storage: Governance and Business Models. With Binod Koirala, Rudi Hakvoort & Ellen van Oost. In: Consumer, Prosumer, Prosumager, Elsevier.
2021. New Pathways for Community Energy and Storage. With Ellen van Oost, Binod Koirala & Esther van der Waal. MDPI.
References
External links
Henny van der Windt Rijksuniversiteit Groningen profile
Henny van der Windt NARCIS profile
1955 births
Living people
Dutch biologists
Environmental scientists
University of Groningen alumni
Academic staff of the University of Groningen
People from Vlaardingen | Henny van der Windt | Environmental_science | 860 |
7,412,739 | https://en.wikipedia.org/wiki/Mind%20games | Mind games (also power games or head games) are actions performed for reasons of psychological one-upmanship, often employing passive–aggressive behavior to specifically demoralize or dis-empower the thinking subject, making the aggressor look superior. It also describes the unconscious games played by people engaged in ulterior transactions of which they are not fully aware, and which transactional analysis considers to form a central element of social life all over the world.
The first known use of the term "mind game" dates from 1963, and "head game" from 1977.
Conscious one-upmanship
In intimate relationships, mind games can be used to undermine one partner's belief in the validity of their own perceptions. Personal experience may be denied and driven from memory, and such abusive mind games may extend to the denial of the victim's reality, social undermining, and downplaying the importance of the other partner's concerns or perceptions. Both sexes have equal opportunities for such verbal coercion which may be carried out unconsciously as a result of the need to maintain one's own self-deception.
Mind games in the struggle for prestige appear in everyday life in the fields of office politics, sport, and relationships. Office mind games are often hard to identify clearly, as strong management blurs with over-direction, and healthy rivalry with manipulative head games and sabotage. The wary salesman will be consciously and unconsciously prepared to meet a variety of challenging mind games and put-downs in the course of their work. The serious sportsman will also be prepared to meet a variety of gambits and head games from their rivals, attempting to tread the fine line between competitive psychology and paranoia.
Unconscious games
Eric Berne described a psychological game as an organized series of ulterior transactions taking place on twin levels: social and psychological, and resulting in a dramatic outcome when the two levels finally came to coincide. He described the opening of a typical game like flirtation as follows: "Cowboy: 'Come and see the barn'. Visitor: 'I've loved barns ever since I was a little girl'". At the social level a conversation about barns, at the psychological level one about sex play, the outcome of the game – which may be comic or tragic, heavy or light – will become apparent when a switch takes place and the ulterior motives of each become clear.
Between thirty and forty such games (as well as variations of each) were described and tabulated in Berne's best seller on the subject titled "Games People Play: The Psychology of Human Relationships". According to one transactional analyst, "Games are so predominant and deep-rooted in society that they tend to become institutionalized, that is, played according to rules that everybody knows about and more or less agrees to. The game of Alcoholic, a five-handed game, illustrates this...so popular that social institutions have developed to bring the various players together" such as Alcoholics Anonymous and Al-anon.
Psychological games vary widely in degrees of consequence, ranging from first-degree games where losing involves embarrassment or frustration, to third-degree games where consequences are life-threatening. Berne recognized however that "since by definition games are based on ulterior transactions, they must all have some element of exploitation", and the therapeutic ideal he offered was to stop playing games altogether.
See also
References
Sources
R.D. Laing, Self and Others (Penguin 1969)
External links
Sarah Strudwick (Nov 16, 2010) Dark Souls – Mind Games, Manipulation and Gaslighting
Mind control
Harassment and bullying
Psychological abuse
Transactional analysis
Psychological manipulation | Mind games | Biology | 743 |
34,253,455 | https://en.wikipedia.org/wiki/Peeling%20theorem | In general relativity, the peeling theorem describes the asymptotic behavior of the Weyl tensor as one goes to null infinity. Let $\gamma$ be a null geodesic in a spacetime $(M, g_{ab})$ from a point $p$ to null infinity, with affine parameter $\lambda$. Then the theorem states that, as $\lambda$ tends to infinity,

$$C_{abcd} = \frac{C^{(1)}_{abcd}}{\lambda} + \frac{C^{(2)}_{abcd}}{\lambda^{2}} + \frac{C^{(3)}_{abcd}}{\lambda^{3}} + \frac{C^{(4)}_{abcd}}{\lambda^{4}} + O\!\left(\lambda^{-5}\right),$$

where $C_{abcd}$ is the Weyl tensor, and abstract index notation is used. Moreover, in the Petrov classification, $C^{(1)}_{abcd}$ is type N, $C^{(2)}_{abcd}$ is type III, $C^{(3)}_{abcd}$ is type II (or II-II) and $C^{(4)}_{abcd}$ is type I.
References
External links
General relativity
Theorems in general relativity | Peeling theorem | Physics,Mathematics | 117 |
30,865,488 | https://en.wikipedia.org/wiki/Complementarity%20%28molecular%20biology%29 | In molecular biology, complementarity describes a relationship between two structures, each following the lock-and-key principle. In nature, complementarity is the base principle of DNA replication and transcription, as it is a property shared between two DNA or RNA sequences such that, when they are aligned antiparallel to each other, the nucleotide bases at each position in the sequences will be complementary, much like looking in a mirror and seeing the reverse of things. This complementary base pairing allows cells to copy information from one generation to another and even to find and repair damage to the information stored in the sequences.
The degree of complementarity between two nucleic acid strands may vary, from complete complementarity (each nucleotide is across from its opposite) to no complementarity (no nucleotide is across from its opposite), and it determines the stability of the association between the two sequences. Furthermore, various DNA repair functions as well as regulatory functions are based on base pair complementarity. In biotechnology, the principle of base pair complementarity allows the generation of DNA hybrids between RNA and DNA, and opens the door to modern tools such as cDNA libraries.
While most complementarity is seen between two separate strings of DNA or RNA, it is also possible for a sequence to have internal complementarity resulting in the sequence binding to itself in a folded configuration.
DNA and RNA base pair complementarity
Complementarity is achieved by distinct interactions between nucleobases: adenine, thymine (uracil in RNA), guanine and cytosine. Adenine and guanine are purines, while thymine, cytosine and uracil are pyrimidines. Purines are larger than pyrimidines. Both types of molecules complement each other and can only base pair with the opposing type of nucleobase. In nucleic acid, nucleobases are held together by hydrogen bonding, which only works efficiently between adenine and thymine and between guanine and cytosine. The base complement A = T shares two hydrogen bonds, while the base pair G ≡ C has three hydrogen bonds. All other configurations between nucleobases would hinder double helix formation. DNA strands are oriented in opposite directions, they are said to be antiparallel.
A complementary strand of DNA or RNA may be constructed based on nucleobase complementarity. Each base pair, A = T vs. G ≡ C, takes up roughly the same space, thereby enabling a twisted DNA double helix formation without any spatial distortions. Hydrogen bonding between the nucleobases also stabilizes the DNA double helix.
Complementarity of DNA strands in a double helix make it possible to use one strand as a template to construct the other. This principle plays an important role in DNA replication, setting the foundation of heredity by explaining how genetic information can be passed down to the next generation. Complementarity is also utilized in DNA transcription, which generates an RNA strand from a DNA template. In addition, human immunodeficiency virus, a single-stranded RNA virus, encodes an RNA-dependent DNA polymerase (reverse transcriptase) that uses complementarity to catalyze genome replication. The reverse transcriptase can switch between two parental RNA genomes by copy-choice recombination during replication.
DNA repair mechanisms such as proof reading are complementarity based and allow for error correction during DNA replication by removing mismatched nucleobases. In general, damages in one strand of DNA can be repaired by removal of the damaged section and its replacement by using complementarity to copy information from the other strand, as occurs in the processes of mismatch repair, nucleotide excision repair and base excision repair.
Nucleic acid strands may also form hybrids in which single-stranded DNA may readily anneal with complementary DNA or RNA. This principle is the basis of commonly performed laboratory techniques such as the polymerase chain reaction (PCR).
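Because an A = T pair shares two hydrogen bonds while G ≡ C shares three, the stability of an annealed hybrid depends on its base composition, which is why PCR primer design weighs G/C content. A minimal sketch follows, using the Wallace rule — a standard laboratory rule of thumb for short oligonucleotides, not taken from the text above — with an invented example primer.

def wallace_tm(primer: str) -> int:
    # Estimate the melting temperature (degrees C) of a short DNA oligo
    # using the Wallace rule: 2 degrees per A/T base, 4 degrees per G/C base.
    # G/C bases count for more because a G=C pair shares three hydrogen
    # bonds versus two for an A=T pair.
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGCGCATTAGCAAGGCTAT"))  # an invented 20-mer primer

A primer with a higher estimated melting temperature anneals to its complement more stably under the same conditions.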
Two strands of complementary sequence are referred to as sense and anti-sense. The sense strand is, generally, the transcribed sequence of DNA or the RNA that was generated in transcription, while the anti-sense strand is the strand that is complementary to the sense sequence.
Self-complementarity and hairpin loops
Self-complementarity refers to the fact that a sequence of DNA or RNA may fold back on itself, creating a double-strand like structure. Depending on how close together the parts of the sequence are that are self-complementary, the strand may form hairpin loops, junctions, bulges or internal loops. RNA is more likely to form these kinds of structures due to base pair binding not seen in DNA, such as guanine binding with uracil.
Regulatory functions
Complementarity can be found between short nucleic acid stretches and a coding region or a transcribed gene, and results in base pairing. These short nucleic acid sequences are commonly found in nature and have regulatory functions such as gene silencing.
Antisense transcripts
Antisense transcripts are stretches of non-coding mRNA that are complementary to the coding sequence. Genome-wide studies have shown that RNA antisense transcripts occur commonly within nature. They are generally believed to increase the coding potential of the genetic code and add an overall layer of complexity to gene regulation. So far, it is known that 40% of the human genome is transcribed in both directions, underlining the potential significance of antisense transcription.
It has been suggested that complementary regions between sense and antisense transcripts would allow the generation of double-stranded RNA hybrids, which may play an important role in gene regulation. For example, hypoxia-inducible factor 1α mRNA and β-secretase mRNA are transcribed bidirectionally, and it has been shown that the antisense transcript acts as a stabilizer of the sense transcript.
miRNAs and siRNAs
miRNAs (microRNAs) are short RNA sequences that are complementary to regions of a transcribed gene and have regulatory functions. Current research indicates that circulating miRNAs may serve as novel biomarkers, showing promise for use in disease diagnostics. miRNAs are formed from longer RNA sequences that are cut free by a Dicer enzyme from an RNA sequence encoded by a regulator gene. These short strands bind to a RISC complex. They match up with sequences in the upstream region of a transcribed gene due to their complementarity and act as silencers for the gene in three ways: first, by preventing a ribosome from binding and initiating translation; second, by degrading the mRNA that the complex has bound to; and third, by providing a new double-stranded RNA (dsRNA) sequence that Dicer can act upon to create more miRNA to find and degrade more copies of the gene. Small interfering RNAs (siRNAs) are similar in function to miRNAs; they come from other sources of RNA but serve a similar purpose.
Given their short length, the rules of complementarity mean that miRNAs and siRNAs can still be very discriminating in their choice of targets. Given that there are four choices for each base in the strand and a 20–22 bp length for a mi/siRNA, that leads to more than $4^{20} \approx 1.1 \times 10^{12}$ possible combinations. Given that the human genome is ~3.1 billion bases in length, this means that each miRNA should find a match no more than once in the entire human genome by accident.
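To make this concrete, under the simplifying assumption of uniform, independent bases, the expected number of chance occurrences of a fixed 20-mer in a genome of length $L$ is about $L(1/4)^{20}$:

$$E \approx \frac{3.1 \times 10^{9}}{4^{20}} \approx \frac{3.1 \times 10^{9}}{1.1 \times 10^{12}} \approx 2.8 \times 10^{-3} \ll 1,$$

so a random 20-nucleotide sequence is expected to occur far less than once in the genome by accident.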
Kissing hairpins
Kissing hairpins are formed when a single strand of nucleic acid complements itself, creating loops of RNA in the form of a hairpin. When two hairpins come into contact with each other in vivo, the complementary bases of the two strands pair up and begin to unwind the hairpins until either a double-stranded RNA (dsRNA) complex is formed or the complex unwinds back into two separate strands due to mismatches in the hairpins. The secondary structure of the hairpin prior to kissing allows for a stable structure with a relatively fixed change in energy. The purpose of these structures is to balance the stability of the hairpin loop against the binding strength with a complementary strand. Too strong an initial binding to a bad location, and the strands will not unwind quickly enough; too weak an initial binding, and the strands will never fully form the desired complex. These hairpin structures allow the exposure of enough bases to provide a strong enough check on the initial binding and a weak enough internal binding to allow the unfolding once a favorable match has been found.
---C G---
C G ---C G---
U A C G
G C U A
C G G C
A G C G
A A A G
C U A A
U CUU ---CCUGCAACUUAGGCAGG---
A GAA ---GGACGUUGAAUCCGUCC---
G A U U
U U U C
U C G C
G C C G
C G A U
A U G C
G C ---G C---
---G C---
Kissing hairpins meeting up at the top of the loops. The complementarity
of the two heads encourages the hairpin to unfold and straighten out to
become one flat sequence of two strands rather than two hairpins.
Bioinformatics
Complementarity allows information found in DNA or RNA to be stored in a single strand. The complementing strand can be determined from the template and vice versa as in cDNA libraries. This also allows for analysis, like comparing the sequences of two different species. Shorthands have been developed for writing down sequences when there are mismatches (ambiguity codes) or to speed up how to read the opposite sequence in the complement (ambigrams).
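As a minimal sketch (in Python, independent of any particular bioinformatics library) of determining the complementing strand from a template:

# Watson-Crick pairing for DNA: A<->T, G<->C.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    # Return the reverse complement of a DNA sequence. Reversing is needed
    # because the two strands are antiparallel: the complement of a strand
    # read 5'->3' runs 3'->5'.
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("GTCA"))  # prints TGAC; compare the ambigram example below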
cDNA Library
A cDNA library is a collection of expressed DNA genes that are seen as a useful reference tool in gene identification and cloning processes. cDNA libraries are constructed from mRNA using RNA-dependent DNA polymerase reverse transcriptase (RT), which transcribes an mRNA template into DNA. Therefore, a cDNA library can only contain inserts that are meant to be transcribed into mRNA. This process relies on the principle of DNA/RNA complementarity. The end product of the libraries is double stranded DNA, which may be inserted into plasmids. Hence, cDNA libraries are a powerful tool in modern research.
Ambiguity codes
When writing sequences for systematic biology it may be necessary to use IUPAC codes that mean "any of two" or "any of three". The IUPAC code R (any purine) is complementary to Y (any pyrimidine), and M (amino) to K (keto). W and S denote "weak" and "strong", respectively, and indicate the number of hydrogen bonds that a nucleotide uses to pair with its complementing partner; since a partner uses the same number of bonds to make a complementing pair, W and S are their own complements and are usually not swapped, although some tools have incorrectly swapped them in the past.
An IUPAC code that specifically excludes one of the three nucleotides can be complementary to an IUPAC code that excludes the complementary nucleotide. For instance, V (A, C or G - "not T") can be complementary to B (C, G or T - "not A").
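These pairings can be captured in a single lookup table; the Python sketch below treats W, S, and N as their own complements, following the convention described above.

# Complement mapping for the IUPAC nucleotide alphabet:
# R (puRine: A/G) <-> Y (pYrimidine: C/T), M (aMino: A/C) <-> K (Keto: G/T),
# W (Weak: A/T) and S (Strong: C/G) are self-complementary,
# B (not A) <-> V (not T), D (not C) <-> H (not G), N matches anything.
IUPAC_COMPLEMENT = str.maketrans("ACGTRYMKWSBDHVN", "TGCAYRKMWSVHDBN")

def iupac_reverse_complement(seq: str) -> str:
    # Reverse complement of a sequence that may contain ambiguity codes.
    return seq.upper().translate(IUPAC_COMPLEMENT)[::-1]

print(iupac_reverse_complement("RVN"))  # prints NBY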
Ambigrams
Specific characters may be used to create a suitable (ambigraphic) nucleic acid notation for complementary bases (i.e. guanine = b, cytosine = q, adenine = n, and thymine = u), which makes it possible to complement entire DNA sequences by simply rotating the text "upside down". For instance, with the previous alphabet, (GTCA) would read as (TGAC, reverse complement) if turned upside down.
Ambigraphic notations readily visualize complementary nucleic acid stretches such as palindromic sequences. This feature is enhanced when utilizing custom fonts or symbols rather than ordinary ASCII or even Unicode characters.
See also
Base pair
References
External links
Reverse complement tool
Reverse Complement Tool @ DNA.UTAH.EDU
Molecular biology | Complementarity (molecular biology) | Chemistry,Biology | 2,517 |
44,182,173 | https://en.wikipedia.org/wiki/ST%20motif | The ST motif is a commonly occurring feature in proteins and polypeptides. It consists of four or five amino acid residues with either serine or threonine as the first residue (residue i). It is defined by two internal hydrogen bonds. One is between the side chain oxygen of residue i and the main chain NH of residue i + 2 or i + 3; the other is between the main chain oxygen of residue i and the main chain NH of residue i + 3 or i + 4. Two websites are available for finding and examining ST motifs in proteins: Motivated Proteins and PDBeMotif.
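As a rough sketch of how such motifs might be located programmatically, the following Python fragment uses Biopython to flag Ser/Thr residues whose side-chain and main-chain oxygens lie within hydrogen-bonding distance of the appropriate backbone nitrogens. The 3.5 Å cutoff, the placeholder file name, the choice of chain, the assumption of a gap-free chain, and the omission of bond-angle checks are all simplifications; dedicated tools such as Motivated Proteins and PDBeMotif apply stricter geometric criteria.

from Bio.PDB import PDBParser  # requires Biopython

CUTOFF = 3.5  # angstroms; a crude donor-acceptor distance proxy for a hydrogen bond

def st_motifs(chain):
    # Yield residues beginning a candidate ST motif, testing the two
    # hydrogen bonds that define it:
    #   (1) side-chain O of residue i ... main-chain N of residue i+2 or i+3
    #   (2) main-chain  O of residue i ... main-chain N of residue i+3 or i+4
    # Assumes the chain has no gaps, so list position tracks sequence position.
    residues = [r for r in chain if r.id[0] == " "]  # skip waters and hetero groups
    for i, res in enumerate(residues):
        if res.get_resname() not in ("SER", "THR"):
            continue
        og_name = "OG" if res.get_resname() == "SER" else "OG1"
        if og_name not in res or "O" not in res:
            continue
        og, main_o = res[og_name], res["O"]
        # Biopython overloads '-' on atoms to return the distance between them.
        bond1 = any(i + k < len(residues) and "N" in residues[i + k]
                    and og - residues[i + k]["N"] < CUTOFF for k in (2, 3))
        bond2 = any(i + k < len(residues) and "N" in residues[i + k]
                    and main_o - residues[i + k]["N"] < CUTOFF for k in (3, 4))
        if bond1 and bond2:
            yield res

structure = PDBParser(QUIET=True).get_structure("x", "example.pdb")  # placeholder file
for res in st_motifs(structure[0]["A"]):  # chain "A" assumed
    print(res.get_resname(), res.id[1])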
When one of the hydrogen bonds is between the main chain oxygen of residue i and the side chain NH of residue i + 3 the motif incorporates a beta turn. When one of the hydrogen bonds is between the side chain oxygen of residue i and the main chain NH of residue i + 2 the motif incorporates an ST turn.
As with ST turns, a significant proportion of ST motifs occur at the N-terminus of an alpha helix with the serine or threonine as the N cap residue. They have thus often been described as helix capping features.
A related motif is the asx motif which has aspartate or asparagine as the first residue.
Two well conserved threonines at α-helical N-termini occur as ST motifs and form part of the characteristic nucleotide binding sites of SF1 and SF2 type DNA and RNA helicases.
It has been suggested that the sequences SPXX or STXX are frequently found at DNA-binding sites and also that they are recognized as substrates by some protein kinases. Structural studies of polypeptides indicate that such tetrapeptides can adopt the hydrogen bonding pattern of the ST motif.
References
Protein structural motifs | ST motif | Biology | 372 |
14,217,493 | https://en.wikipedia.org/wiki/Ribose-5-phosphate%20isomerase | Ribose-5-phosphate isomerase (Rpi), encoded in humans by the RPIA gene, is an enzyme (EC 5.3.1.6) that catalyzes the conversion between ribose-5-phosphate (R5P) and ribulose-5-phosphate (Ru5P). It is a member of a larger class of isomerases which catalyze the interconversion of chemical isomers (in this case structural isomers of pentose). It plays a vital role in biochemical metabolism in both the pentose phosphate pathway and the Calvin cycle. The systematic name of this enzyme class is D-ribose-5-phosphate aldose-ketose-isomerase.
Structure
Gene
RpiA in human beings is encoded on the second chromosome on the short arm (p arm) at position 11.2. Its encoding sequence is nearly 60,000 base pairs long. The only known naturally occurring genetic mutation results in ribose-5-phosphate isomerase deficiency, discussed below. The enzyme is thought to have been present for most of evolutionary history. Knock-out experiments conducted on the genes of various species meant to encode RpiA have indicated similar conserved residues and structural motifs, indicating ancient origins of the gene.
Protein
Rpi exists as two distinct proteins, termed RpiA and RpiB. Although RpiA and RpiB catalyze the same reaction, they show no sequence or overall structural homology. According to Jung et al., an assessment of RpiA using SDS-PAGE shows that the enzyme is a homodimer of 25 kDa subunits. The molecular mass of the RpiA dimer was found to be 49 kDa by gel filtration. Recently, the crystal structure of RpiA was determined.
Due to its role in the pentose phosphate pathway and the Calvin cycle, RpiA is highly conserved in most organisms, such as bacteria, plants, and animals. RpiA plays an essential role in the metabolism of plants and animals, as it is involved in the Calvin cycle which takes place in plants, and the pentose phosphate pathway which takes place in plants as well as animals.
All orthologs of the enzyme maintain an asymmetric tetramer quaternary structure with a cleft containing the active site. Each subunit consists of a five stranded β-sheet. These β-sheets are surrounded on both sides by α-helices. This αβα motif is not uncommon in other proteins, suggesting possible homology with other enzymes. The separate molecules of the enzyme are held together by highly polar contacts on the external surfaces of the monomers. It is presumed that the active site is located where multiple β-sheet C termini come together in the enzymatic cleft. This cleft is capable of closing upon recognition of the phosphate on the pentose (or an appropriate phosphate inhibitor). The active site is known to contain conserved residues equivalent to the E. coli residues Asp81, Asp84, and Lys94. These are directly involved in catalysis.
Mechanism
In the reaction, the overall consequence is the movement of a carbonyl group from carbon number 1 to carbon number 2; this is achieved by the reaction going through an enediol intermediate (Figure 1). Through site-directed mutagenesis, Asp87 of spinach RpiA was suggested to play the role of a general base in the interconversion of R5P to Ru5P.
The first step in the catalysis is the docking of the pentose into the active site in the enzymatic cleft, followed by allosteric closing of the cleft. The enzyme is capable of binding either the open-chain or the ring form of the sugar phosphate. If it binds the furanose ring, it next opens the ring. The enzyme then forms the enediol, which is stabilized by a lysine or arginine residue. Calculations have demonstrated that this stabilization is the most significant contributor to the overall catalytic activity of this isomerase and of a number of others like it.
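Schematically (a simplified summary of the steps above, omitting ring opening, protonation states, and the stabilizing Lys/Arg and catalytic Asp side chains), the net chemistry moves the carbonyl from C1 to C2 through the enediol:

$$\text{R5P (aldose, C1 carbonyl)} \;\rightleftharpoons\; \text{cis-enediol intermediate} \;\rightleftharpoons\; \text{Ru5P (ketose, C2 carbonyl)}$$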
Function
The protein encoded by the RPIA gene is an enzyme that catalyzes the reversible conversion between ribose-5-phosphate and ribulose-5-phosphate in the pentose phosphate pathway. This gene is highly conserved in most organisms. The enzyme plays an essential role in carbohydrate metabolism. Mutations in this gene cause ribose-5-phosphate isomerase deficiency. A pseudogene is found on chromosome 18.
Pentose phosphate pathway
In the non-oxidative part of the pentose phosphate pathway, RPIA converts Ru5P to R5P which then is converted by ribulose-phosphate 3-epimerase to xylulose-5-phosphate (figure 3). The result of the reaction essentially is the conversion of the pentose phosphates to intermediates used in the glycolytic pathway. In the oxidative part of the pentose phosphate pathway, RpiA converts Ru5P to the final product, R5P through the isomerization reaction (figure 3). The oxidative branch of the pathway is a major source for NADPH which is needed for biosynthetic reactions and protection against reactive oxygen species.
Calvin cycle
In the Calvin cycle, the energy from the electron carriers is used in carbon fixation, the conversion of carbon dioxide and water into carbohydrates. RPIA is essential in the cycle, as Ru5P generated from R5P is subsequently converted to ribulose-1,5-bisphosphate (RuBP), the acceptor of carbon dioxide in the first dark reaction of photosynthesis (Figure 3). The direct product of RuBP carboxylase reaction is glyceraldehyde-3-phosphate; these are subsequently used to make larger carbohydrates. Glyceraldehyde-3-phosphate is converted to glucose which is later converted by the plant to storage forms (e.g., starch or cellulose) or used for energy.
Clinical significance
RPIA is mutated in a rare disorder, ribose-5-phosphate isomerase deficiency. The disease has only one known affected patient, diagnosed in 1999. It has been found to be caused by a combination of two mutations: the first is the insertion of a premature stop codon into the gene encoding the isomerase, and the second is a missense mutation. The molecular pathology is, as yet, unclear.
RpiA and hepatocarcinogenesis
Human ribose-5-phosphate isomerase A (RpiA) plays a role in human hepatocellular carcinoma (HCC). A significant increase in RpiA expression was detected both in tumor biopsies of HCC patients and in a liver cancer tissue array. Importantly, the clinicopathological analysis indicated that RpiA mRNA levels were highly correlated with clinical stage, grade, tumor size, types, invasion and alpha-fetoprotein levels in the HCC patients. In addition, the ability of RpiA to regulate cell proliferation and colony formation in different liver cancer cell lines required ERK signaling as well as the negative modulation of PP2A activity and that the effects of RpiA could be modulated by the addition of either a PP2A inhibitor or activator. It suggests that RpiA overexpression can induce oncogenesis in HCC.
RpiA and the malaria parasite
RpiA generated attention when the enzyme was found to play an essential role in the pathogenesis of the parasite Plasmodium falciparum, the causative agent of malaria. Plasmodium cells have a critical need for a large supply of the reducing power of NADPH via PPP in order to support their rapid growth. The need for NADPH is also required to detoxify heme, the product of hemoglobin degradation.
Furthermore, Plasmodium has an intense requirement for nucleic acid production to support its rapid proliferation. The R5P produced via increased pentose phosphate pathway activity is used to generate 5-phospho-D-ribose α-1-pyrophosphate (PRPP) needed for nucleic acid synthesis. It has been shown that PRPP concentrations are increased 56 fold in infected erythrocytes compared with uninfected erythrocytes. Hence, designing drugs that target RpiA in Plasmodium falciparum could have therapeutic potential for patients that suffer from malaria.
Interactions
RPIA has been shown to interact with PP2A.
Structural studies
As of late 2007, 15 structures had been solved for this class of enzymes and deposited in the Protein Data Bank (PDB).
References
EC 5.3.1
Enzymes of known structure
Pentose phosphate pathway | Ribose-5-phosphate isomerase | Chemistry | 1,836 |