| id (int64, 39–79M) | url (string, 31–227 chars) | text (string, 6–334k chars) | source (string, 1–150 chars, nullable) | categories (list, 1–6 items) | token_count (int64, 3–71.8k) | subcategories (list, 0–30 items) |
|---|---|---|---|---|---|---|
16,189,704 | https://en.wikipedia.org/wiki/Project%20Valkyrie | The Valkyrie is a theoretical spacecraft designed by Charles Pellegrino and Jim Powell (a physicist at Brookhaven National Laboratory). The Valkyrie is theoretically able to accelerate to 92% the speed of light and decelerate afterward, carrying a small human crew to another star system.
Design
The Valkyrie's high performance is attributable to its innovative design. Instead of a solid spacecraft with a rocket at the back, Valkyrie is built more like a cable car train, with the crew quarters, fuel tanks, radiation shielding, and other vital components being pulled between front and aft engines on long tethers. This greatly reduces the mass of the ship, because it no longer requires heavy structural members and radiation shielding. This is a considerable advantage because in a rocket every extra kilogram of payload (dry mass) will require a corresponding extra amount of propellant or fuel.
The Valkyrie would have a crew module trailing 10 kilometers behind the engine. A small 20-cm-thick tungsten shield would hang 100 meters behind the engine, to help protect the trailing crew module from its harmful radiation. The fuel tank might be placed between the crew module and the engine, to further protect it. At the trailing end of the ship would be a second engine, which the ship would use to decelerate. The forward engine and the tank holding its fuel supply might be jettisoned before deceleration, to reduce fuel consumption. The tether system requires that the elements of the ship must be moved "up" or "down" the tethers depending on flight direction.
Engines
Initially, the Valkyrie's engine would work by using small quantities of antimatter to initiate an extremely energetic fusion reaction. A magnetic coil captures the exhaust products of this reaction, expelling them with an exhaust velocity of 12–20% of the speed of light (35,000–60,000 km/s). As the spacecraft approaches 20% of the speed of light, more antimatter is fed into the engines until the drive switches over to pure matter-antimatter annihilation, which it would use to accelerate the remainder of the way to 0.92 c. Pellegrino estimates that the ship would require 100 tons of matter and antimatter to reach 0.1–0.2 c, with an undetermined excess of matter to ensure the antimatter is efficiently utilized. To reach a speed of 0.92 c and decelerate afterward, Valkyrie would require a mass ratio of 22 (or 2200 tons of fuel for a 100-ton spacecraft).
At such high speeds, incident debris would be a major hazard. While accelerating, Valkyrie uses a device that combines the functions of a particle shield and a liquid droplet radiator. Waste heat is dumped into liquid droplets that are cast out in front of the ship. As the ship accelerates the droplets (now cool) effectively fall back into the ship, so the system is self-recycling. During deceleration, the ship will be protected by ultra-thin umbrella shields, augmented by a dust shield, possibly made by grinding up pieces of the discarded first stage.
Criticism
The chief feasibility issue of Valkyrie (or for any antimatter-beam drive) lies in its requirement of tons of antimatter fuel. Antimatter cannot be produced at an efficiency of more than 50% (that is to say, to produce one gram of antimatter requires twice as much energy as you would get from annihilating that gram with a gram of matter). Since half a kilogram of antimatter would yield 9×10¹⁶ J if annihilated with an equal amount of matter, this quickly adds up to enormous energy requirements for its production. To produce the 50 tons of antimatter, Valkyrie would require 1.8×10²² J. This is the same amount of energy that the entire human race currently uses in about forty years.
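A quick back-of-the-envelope check of these figures (a sketch only; the value used for world energy consumption is an assumed order of magnitude, roughly 5×10²⁰ J per year, and is not taken from the article):

```python
# Rough check of the antimatter energy figures quoted above.
c = 2.998e8                        # speed of light, m/s
antimatter_kg = 50_000             # 50 tons of antimatter

# Annihilating m kg of antimatter with m kg of ordinary matter releases 2*m*c^2.
annihilation_energy = 2 * antimatter_kg * c**2        # about 9.0e21 J

# At no better than 50% production efficiency, producing the antimatter costs
# at least twice the energy its annihilation can release.
production_energy = 2 * annihilation_energy           # about 1.8e22 J

world_energy_per_year = 5e20       # assumed order of magnitude, J/year
print(f"{production_energy:.2e} J")                   # ~1.80e+22 J
print(production_energy / world_energy_per_year)      # ~36 years of world energy use
```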
This may be solved by creating a truly enormous power plant for the antimatter factory, probably in the form of a vast array of solar panels with a combined area of millions of square kilometers, or many fusion reactors. Alternatively, the antimatter-fusion hybrid drive the Valkyrie uses to accelerate up to 0.2 c would require much less antimatter and, with an exhaust velocity of 30,000–60,000 km·s⁻¹, still compares quite favorably with competing engines such as the inertial confinement pulse drive used by Project Daedalus or Project Orion. The Valkyrie's lightweight construction could also be applied to a wide variety of space vehicles.
Because the ship's elements are connected only by tethers, there is no rigid structure between them and the engines. Without active acceleration or thrust to pull the tethers taut, the slightest imbalance, excess force, or repositioning of ship elements into a different flight configuration poses a risk of collision between ship elements and engines. Since long-term spaceflight at interstellar velocities causes erosion through collisions with particles, gas, dust, and micrometeorites, the tethers are literally lifelines. Changing course or turning the ship requires repositioning and realigning every ship element, and presumably consumes additional fuel in doing so.
Because the liquid droplet radiators (LDR) are deployed on the opposite side of the propulsion system from the main body, the droplets and the collectors are exposed to the other half of the heat energy from the gamma radiation produced by the antimatter annihilation. If the total area of the collectors is larger than that of the radiation shield, the LDR would serve to cool itself rather than the shield protecting the ship's main components.
Trivia
A superficially-similar interstellar spacecraft is featured in the movie Avatar.
See also
Project Prometheus
Project Longshot
References
External links
Valkyrie Edited Guide Entry (BBC.com)
Valkyrie at Atomic Rockets
Hypothetical spacecraft
Interstellar travel
Antimatter | Project Valkyrie | [
"Physics",
"Astronomy",
"Technology"
] | 1,185 | [
"Exploratory engineering",
"Astronomical hypotheses",
"Antimatter",
"Hypothetical spacecraft",
"Interstellar travel",
"Matter"
] |
16,191,366 | https://en.wikipedia.org/wiki/Boletus%20barrowsii | Boletus barrowsii, also known in English as the white king bolete after its pale colored cap, is an edible and highly regarded fungus in the genus Boletus that inhabits western North America. Found under ponderosa pine and live oak in autumn, it was considered a color variant of the similarly edible B. edulis for many years.
Description
The cap is in diameter, initially convex in shape before flattening, with a smooth or slightly tomentose surface, and gray-white, white or buff color. The thick flesh is white and does not turn blue when bruised. The pores are initially whitish, later yellow. The spore print is olive brown, the spores are elliptical to spindle-shaped and 13–15 x 4–5 μm in dimensions. The stout stipe is white with a brown reticulated pattern, and may be high with an apical diameter of 2–6 cm (1–2 in). Like B. edulis, it is often found eaten by maggots. It has a strong odor while drying.
Similar species
In addition to B. edulis, the species could also be confused with the similarly pale-capped B. satanas, though the flesh of the latter stains blue when cut or bruised, and it has a reddish stem and pores. The latter species is poisonous when raw.
Taxonomy
The species was officially described by American mycologists Harry D. Thiers and Alexander H. Smith in 1976 from a specimen collected near Jacob Lake, Arizona, on August 21, 1971, by amateur mycologist Charles "Chuck" Barrows, who had studied the mushroom in New Mexico. It was previously held to be a white colour form of B. edulis. A 2010 molecular study found that B. barrowsii was sister to a lineage that gave rise to the species B. quercophilus of Costa Rica and B. nobilissimus of eastern North America.
Distribution and habitat
The white king bolete is ectomycorrhizal, found under ponderosa pine (Pinus ponderosa) inland, and coast live oak (Quercus agrifolia) closer to the west coast. Fruit bodies appear after rain, and will be more abundant if this occurs in early autumn rather than later in the year through to winter. It is abundant in the warmer parts of its range, namely Arizona and New Mexico, but also occurs in Colorado, west into California and north to British Columbia. It has been recorded from the San Marcos Foothills in Santa Barbara County.
Uses
The species is edible and highly regarded in New Mexico, Arizona, and Colorado, and was eaten for many years while assumed to be a form of B. edulis.
See also
List of Boletus species
List of North American boletes
References
Edible fungi
barrowsii
Fungi described in 1976
Fungi of North America
Taxa named by Harry Delbert Thiers
Taxa named by Alexander H. Smith
Fungus species | Boletus barrowsii | [
"Biology"
] | 605 | [
"Fungi",
"Fungus species"
] |
16,192,152 | https://en.wikipedia.org/wiki/The%20Macintosh%20Way | The Macintosh Way was the first book written by former Apple evangelist Guy Kawasaki. Subtitled "the art of guerrilla management", the book focused on technology marketing and management and includes many anecdotes culled from Kawasaki's experience during the early development of the Macintosh.
Chapter listing
First Blood
Macintosh Days
Environment
Great Products
Support
Marketing
User Groups
Evangelism
To Market, To Market
The Printed Word
Working with the Mothership
How to Give Good Demo
Presentation Manager
Trade Show Mavenship
How to Drive your (MS-DOS) Competitors Crazy
The Macintosh Guide to Dating and Marriage
Sayonara
References
External links
The Macintosh Way at Guy Kawasaki's site
1990 non-fiction books
Organizational culture
Books by computer and internet entrepreneurs
Books about Apple Inc. | The Macintosh Way | [
"Technology"
] | 149 | [
"Computing stubs",
"Computer book stubs"
] |
10,962,250 | https://en.wikipedia.org/wiki/Howard%20Jerome%20Keisler | Howard Jerome Keisler (born 3 December 1936) is an American mathematician, currently professor emeritus at University of Wisconsin–Madison. His research has included model theory and non-standard analysis.
His Ph.D. advisor was Alfred Tarski at Berkeley; his dissertation is Ultraproducts and Elementary Classes (1961).
Abraham Robinson's work resolved what had long been thought to be inherent logical contradictions in the literal interpretation of Leibniz's notation, which Leibniz himself had proposed: treating "dx" as literally representing an infinitesimally small quantity. Building on this work, Keisler published Elementary Calculus: An Infinitesimal Approach, a first-year calculus textbook conceptually centered on the use of infinitesimals, rather than the epsilon-delta approach, for developing the calculus.
He is also known for extending the Henkin construction (of Leon Henkin) to what are now called Henkin–Keisler models. He is also known for the Rudin–Keisler ordering along with Mary Ellen Rudin.
He held the named chair of Vilas Professor of Mathematics at Wisconsin.
Among Keisler's graduate students, several have made notable mathematical contributions, including Frederick Rowbottom, who discovered Rowbottom cardinals. Several others have gone on to careers in computer science research and product development, including: Michael Benedikt, a professor of computer science at the University of Oxford; Kevin J. Compton, a professor of computer science at the University of Michigan; Curtis Tuckey, a developer of software-based collaboration environments; Joseph Sgro, a neurologist and developer of vision processor hardware and software; and Edward L. Wimmers, a database researcher at IBM Almaden Research Center.
In 2012 he became a fellow of the American Mathematical Society.
His son Jeffrey Keisler is a Fulbright Distinguished Chair at the University of Massachusetts, Boston, College of Management.
Publications
Chang, C. C.; Keisler, H. J. Continuous Model Theory. Annals of Mathematical Studies, 58, Princeton University Press, 1966. xii+165 pp.
Model Theory for Infinitary Logic, North-Holland, 1971
Chang, C. C.; Keisler, H. J. Model theory. Third edition. Studies in Logic and the Foundations of Mathematics, 73. North-Holland Publishing Co., Amsterdam, 1990. xvi+650 pp. ; 1st edition 1973; 2nd edition 1977
Elementary Calculus: An Infinitesimal Approach. Prindle, Weber & Schmidt, 1976/1986. Available online.
An Infinitesimal Approach to Stochastic Analysis, American Mathematical Society Memoirs, 1984
Keisler, H. J.; Robbin, Joel. Mathematical Logic and Computability, McGraw-Hill, 1996
Fajardo, Sergio; Keisler, H. J. Model Theory of Stochastic Processes, Lecture Notes in Logic, Association for Symbolic Logic. 2002
See also
Criticism of non-standard analysis
Non-standard calculus
Elementary Calculus: An Infinitesimal Approach
Influence of non-standard analysis
References
External links
Keisler's home page
20th-century American mathematicians
21st-century American mathematicians
Living people
Model theorists
University of Wisconsin–Madison faculty
Fellows of the American Mathematical Society
1936 births | Howard Jerome Keisler | [
"Mathematics"
] | 653 | [
"Model theorists",
"Model theory"
] |
10,962,898 | https://en.wikipedia.org/wiki/Indium%28III%29%20selenide | Indium(III) selenide is a compound of indium and selenium. It has potential for use in photovoltaic devices and has been the subject of extensive research. The two most common phases, α and β, have a layered structure, while γ has a "defect wurtzite structure." In all, five polymorphs are known: α, β, γ, δ, κ. The α-β phase transition is accompanied by a change in electrical conductivity. The band gap of γ-In2Se3 is approximately 1.9 eV.
Preparation
The method of production influences the polymorph generated. For example, thin films of pure γ-In2Se3 have been produced from trimethylindium (InMe3) and hydrogen selenide via MOCVD techniques.
A conventional route entails heating the elements in a sealed tube:
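The reaction equation itself did not survive in this extract; based on the product's formula, the balanced direct-combination reaction would presumably be:

2 In + 3 Se → In2Se3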
See also
Gallium(III) selenide
Indium chalcogenides
Nanoparticle
General references
WebElements
Footnotes
External links
Indium Selenide Nanoparticles Used In Solar Energy Conversion.
Indium compounds
Selenides
Solar cells
Semiconductor materials | Indium(III) selenide | [
"Chemistry"
] | 241 | [
"Semiconductor materials"
] |
10,963,217 | https://en.wikipedia.org/wiki/Subtract%20a%20square | Subtract-a-square (also referred to as take-a-square) is a two-player mathematical subtraction game. It is played by two people with a pile of coins (or other tokens) between them. The players take turns removing coins from the pile, always removing a non-zero square number of coins. The game is usually played as a normal play game, which means that the player who removes the last coin wins. It is an impartial game, meaning that the set of moves available from any position does not depend on whose turn it is. Solomon W. Golomb credits the invention of this game to Richard A. Epstein.
Example
A normal play game starting with 13 coins is a win for the first player provided they start with a subtraction of 1:
player 1: 13 - 1*1 = 12
Player 2 now has three choices: subtract 1, 4 or 9. In each of these cases, player 1 can ensure that within a few moves the number 2 gets passed on to player 2:
If player 2 subtracts 1: player 2: 12 - 1*1 = 11; player 1: 11 - 3*3 = 2
If player 2 subtracts 4: player 2: 12 - 2*2 = 8; player 1: 8 - 1*1 = 7; player 2: 7 - 1*1 = 6 or 7 - 2*2 = 3; player 1: 6 - 2*2 = 2 or 3 - 1*1 = 2
If player 2 subtracts 9: player 2: 12 - 3*3 = 3; player 1: 3 - 1*1 = 2
Now player 2 has to subtract 1, and player 1 subsequently does the same:
player 2: 2 - 1*1 = 1
player 1: 1 - 1*1 = 0
player 2 loses
Mathematical theory
In the above example, the number '13' represents a winning or 'hot' position, whilst the number '2' represents a losing or 'cold' position. Given an integer list with each integer labeled 'hot' or 'cold', the strategy of the game is simple: try to pass on a 'cold' number to your opponent. This is always possible provided you are being presented a 'hot' number. Which numbers are 'hot' and which numbers are 'cold' can be determined recursively:
the number 0 is 'cold', whilst 1 is 'hot'
if all numbers 1 .. N have been classified as either 'hot' or 'cold', then
the number N+1 is 'cold' if only 'hot' numbers can be reached by subtracting a positive square
the number N+1 is 'hot' if at least one 'cold' number can be reached by subtracting a positive square
Using this algorithm, a list of cold numbers is easily derived:
0, 2, 5, 7, 10, 12, 15, 17, 20, 22, 34, 39, 44, …
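A short script makes the recursion above concrete (a sketch for illustration, not code from the article):

```python
# Classify positions of subtract-a-square as 'cold' (previous player wins)
# or 'hot' (next player wins) under normal play, using the recursion above.
def cold_positions(limit):
    is_cold = [False] * (limit + 1)
    cold = []
    for n in range(limit + 1):
        # n is hot if some positive square can be subtracted to reach a cold position.
        hot = any(is_cold[n - k * k] for k in range(1, int(n ** 0.5) + 1))
        if not hot:
            is_cold[n] = True
            cold.append(n)
    return cold

print(cold_positions(45))
# [0, 2, 5, 7, 10, 12, 15, 17, 20, 22, 34, 39, 44]
```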
A faster divide and conquer algorithm can compute the same sequence of numbers, up to any threshold , in time .
There are infinitely many cold numbers. More strongly, the number of cold numbers up to some threshold must be at least proportional to the square root of that threshold, for otherwise there would not be enough of them to provide winning moves from all the hot numbers.
Cold numbers tend to end in 0, 2, 4, 5, 7, or 9. Cold values that end with other digits are quite uncommon. This holds in particular for cold numbers ending in 6. Out of all the over 180,000 cold numbers less than 40 million, only one ends in a 6: 11,356.
No two cold numbers can differ by a square, because if they did then a move from the larger of the two to the smaller would be winning, contradicting the assumption that they are both cold. Therefore, by the Furstenberg–Sárközy theorem, the natural density of the cold numbers is zero. That is, for every ε > 0, and for all sufficiently large n, the fraction of the numbers up to n that are cold is less than ε.
More strongly, for every there are
cold numbers up to The exact growth rate of the cold numbers remains unknown, but experimentally the number of cold positions up to any given threshold appears to be roughly .
Extensions
The game subtract-a-square can also be played with multiple numbers. At each turn the player to make a move first selects one of the numbers, and then subtracts a square from it. Such a 'sum of normal games' can be analysed using the Sprague–Grundy theorem. This theorem states that each position in the game subtract-a-square may be mapped onto an equivalent nim heap size. Optimal play consists of moving to a collection of numbers such that the nim-sum of their equivalent nim heap sizes is zero, when this is possible. The equivalent nim heap size of a position may be calculated as the minimum excluded value of the equivalent sizes of the positions that can be reached by a single move.
For subtract-a-square positions of values 0, 1, 2, ... the equivalent nim heap sizes are
0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 3, 2, 3, 4, … .
In particular, a position of subtract-a-square is cold if and only if its equivalent nim heap size is zero.
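A sketch of the same computation via the mex ("minimum excluded value") rule reproduces the sequence of equivalent nim heap sizes listed above (illustrative code, not from the article):

```python
# Sprague–Grundy (nim) values for subtract-a-square positions.
def nim_values(limit):
    g = [0] * (limit + 1)
    for n in range(1, limit + 1):
        reachable = {g[n - k * k] for k in range(1, int(n ** 0.5) + 1)}
        mex = 0
        while mex in reachable:        # smallest non-negative value not reachable
            mex += 1
        g[n] = mex
    return g

values = nim_values(28)
print(values)                          # 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, ... as listed above
print([n for n, v in enumerate(values) if v == 0])   # the cold positions
```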
It is also possible to play variants of this game using other allowed moves than the square numbers. For instance, Golomb defined an analogous game based on the Moser–de Bruijn sequence, a sequence that grows at a similar asymptotic rate to the squares, for which it is possible to determine more easily the set of cold positions and to define an easily computed optimal move strategy.
Additional goals may also be added for the players without changing the winning conditions. For example, the winner can be given a "score" based on how many moves it took to win (the goal being to obtain the lowest possible score) and the loser given the goal to force the winner to take as long as possible to reach victory. With this additional pair of goals and an assumption of optimal play by both players, the scores for starting positions 0, 1, 2, ... are
0, 1, 2, 3, 1, 2, 3, 4, 5, 1, 4, 3, 6, 7, 3, 4, 1, 8, 3, 5, 6, 3, 8, 5, 5, 1, 5, 3, 7, … .
Misère game
Subtract-a-square can also be played as a misère game, in which the player to make the last subtraction loses. The recursive algorithm to determine 'hot' and 'cold' numbers for the misère game is the same as that for the normal game, except that for the misère game the number 1 is 'cold' whilst 2 is 'hot'. It follows that the cold numbers for the misère variant are the cold numbers for the normal game shifted by 1:
Misère play 'cold' numbers:
1, 3, 6, 8, 11, 13, 16, 18, 21, 23, 35, 40, 45, ...
See also
Nim
Wythoff's game
References
Mathematical games
Combinatorial game theory | Subtract a square | [
"Mathematics"
] | 1,534 | [
"Mathematical games",
"Recreational mathematics",
"Combinatorics",
"Game theory",
"Combinatorial game theory"
] |
10,964,172 | https://en.wikipedia.org/wiki/Flush%20deck | In naval architecture, a flush deck is a ship deck that is continuous from stem to stern.
History
Flush decks have been in use since the times of the ancient Egyptians. Greco-Roman triremes often had a flush deck, but may also have had fore and aft castle decks. Flush decks were also common on medieval and Renaissance galleys, though some of these also featured fore and aft castle decks. The medieval brigantine and the later brig and snow also featured flush decks.
Two different meanings of "flush"
"Flush deck" with "flush" in its generic meaning of "even or level; forming an unbroken plane", is sometimes applied to vessels, as in describing yachts lacking a raised pilothouse for instance. "Flush deck aircraft carrier" uses "flush deck" in this generic sense.
"Flush deck" in its more specific maritime-architecture sense denotes (for instance) the flush deck destroyers described above: the flush decks are broken by masts, guns, funnels, and other structures and impediments, and are far from being unbroken planes. "Flush deck" in this sense only signifies that the main deck runs the length of the ship and does not end before the stem (with a separate raised forecastle deck forward) or before the stern (with a separate raised or, as seen on many modern warships, lowered quarterdeck rearward).
Types
Flush deck aircraft carriers are those with no island superstructure, so that the top deck of the vessel consists of only an unbroken flight deck.
"Flush deckers" is a common nickname for a series of American destroyers built in large quantities during or shortly after World War I – the , , and classes – so called because they lacked the raised forecastle of preceding American destroyers, thus the main deck was a flush deck.
References
Shipbuilding
Naval architecture
Bangladeshi inventions
Indian inventions | Flush deck | [
"Engineering"
] | 370 | [
"Naval architecture",
"Shipbuilding",
"Marine engineering"
] |
10,964,271 | https://en.wikipedia.org/wiki/Johan%27s%20Ark | Johan's Ark is a Noah's Ark-themed barge in Dordrecht, Netherlands, which was built by the Dutch building contractor, carpenter and creationist Johan Huibers. It is a full-scale interpretation of the biblical Ark, featuring animal models, including cows, penguins, a crocodile, and a giraffe. It opened to the public in 2012.
Construction
Huibers built his ark with eight helpers in four years. It is divided into seven stories. The wooden construction rests on a hidden floating steel platform made up of 21 LASH barges. These LASH barges previously served as cargo containers, towed or pushed as floating barges over inland waterways and carried on large ships over rough seas. Hence the ark can be towed by tugboats over the rivers, but it is not seaworthy; it could travel the seas only on top of a pontoon or transport ship. The wood volume is equivalent to 12,000 trees. While the Bible specified that the Ark had to be built from the unknown gopher wood, this ark is made of American cedar and pine. The ark is 119 m (390 ft or 266 cubits) long, 30 m (98 ft or 66 cubits) wide, and 23 m (75 ft or 50 cubits) high. The cost of building it was 4 million euros.
Half-scale version
A few years earlier Huibers built a half-scale interpretation of the Ark, in the river port of Schagen, 50 km north of Amsterdam. Huibers did the work mostly with his own hands, using modern tools and occasional help from his son, in one and a half years. Its size, adapted for sailing the Dutch canals and locks, was long, wide, and high. The cost to build was 1 million euros. In 2007 Huibers opened the doors to visitors. After a few months the vessel was towed by tugboat through the canals and moored in 21 harbors in the Netherlands. The ark was sold to Dutch artist Aad Peters in 2010, who has since toured it through Germany, Denmark and Norway. Although the ark isn't really seaworthy and can't handle waves higher than two meters, it was successfully towed across the sea from Denmark to Norway. On 10 June 2016, the ark collided with a moored vessel (NoCGV Nornen) while under tow in Oslo harbor. The ark suffered severe damage to its wooden cladding.
Visit to Ipswich
In November 2019 Aad Peters brought the Ark to Ipswich, Suffolk. He explained that the visit was prompted by Brexit, and that he felt the story of the Judgment of Solomon was important when faced with the sort of social division experienced around Brexit. Shortly after arriving in Ipswich the vessel was impounded in the dock by coastguard officers because "load line certificates [were] missing, no tonnage information, and a range of other concerns." In June 2021 the vessel was still being detained in Ipswich due to “serious concerns” about its condition and seaworthiness. It was released on 1 July, and it reached Vlissingen in the Netherlands five days later.
See also
Flood myth and List of flood myths
Noah's Ark replicas and derivatives
References
External links
Official website (cached at Internet Archive)
Creationism
Buildings and structures in Dordrecht
Replicas and derivatives of Noah's Ark
Religious museums in the Netherlands
Schagen | Johan's Ark | [
"Biology"
] | 689 | [
"Creationism",
"Biology theories",
"Obsolete biology theories"
] |
10,965,310 | https://en.wikipedia.org/wiki/Prefoldin | Prefoldin (GimC) is a superfamily of proteins used in protein folding complexes. It is classified as a heterohexameric molecular chaperone in both archaea and eukarya, including humans. A prefoldin molecule works as a transfer protein in conjunction with a molecule of chaperonin to form a chaperone complex and correctly fold other nascent proteins. One of prefoldin's main uses in eukarya is the formation of molecules of actin for use in the eukaryotic cytoskeleton.
Purpose and uses
Prefoldin is one family of chaperone proteins found in the domains of eukarya and archaea. Prefoldin acts in combination with other molecules to promote protein folding in cells where there are many other competing pathways for folding. Chaperone proteins perform non-covalent assembly of other polypeptide-containing structures in vivo. They are implicated in the folding of most other proteins.
In archaea, prefoldins are believed to function in combination with group II chaperonins in de novo protein folding. In eukarya however, prefoldins have acquired a more specific function: they are used to establish correct tubular assembly for many tubular proteins, such as actin. Actin accounts for 5-10% of all protein found in eukaryotic cells, which therefore means that prefoldin is quite prevalent in the cells. Actin is made of two strings of beads wound round each other and is one of the three main parts of the cytoskeleton of eukaryotic cells. Prefoldin bonds specifically to cytosolic chaperonin protein. This complex of prefoldin and chaperonin then forms molecules of actin in the cytosol. The prefoldin acts as a transporter molecule that transports bound, unfolded target proteins to the chaperonin (C-CPN) molecule.
For example, the prefoldin that is used in the formation of actin also transfers α or β tubulin to a cytosolic chaperonin. The prefoldin, however, does not form a ternary complex with tubulin and chaperonin. Once the tubulins are in contact with the chaperonin, the prefoldin automatically lets go and leaves the active site, due to its high affinity for the chaperonin molecule. Once the prefoldin is in contact with the chaperonin protein, it loses its affinity for the unfolded target protein.
Prefoldin is triggered only to bind to nonnative target proteins in the cytosol so that it will only bind to unfolded proteins. Unlike many other molecular chaperones, prefoldin does not use chemical energy, in the form of adenosine triphosphate (ATP), to promote protein folding.
Discovery
Prefoldin was found by the laboratory of Nicholas J. Cowan from the Department of Biochemistry at the New York University Medical Center. It was discovered using chromatography. Unfolded labeled β-actin from bovine testes was put into solution. This solution contained an excess of cytosolic chaperonin (C-CPN), a eukaryotic chaperone protein necessary for actin folding. After gel filtration of the actin, the actin complex, consisting of actin and its bound proteins, began to form and the molecular weight of the complex was observed. Gel electrophoresis was used to analyze the protein complex: the complex formed a single band that was excised and run on an SDS gel. It resolved into five bands, showing that a heterooligomeric protein binds to unfolded actin.
An archaeal homolog of prefoldin that also functions as a molecular chaperone has been identified. Eukaryotic prefoldin likely evolved from archaea, as it is not present (or has been lost) from bacteria.
Structure
Prefoldin is a heterohexameric protein consisting of two α subunits and four β subunits. The β subunits contain 120 amino acid residues each, while the α subunits contain 140 amino acid residues each. Each subunit was found to have a width of 8.4 nm in the archaeon Methanococcus thermoautotrophicum. The height was calculated at 1.8–2.6 nm. The subunits are held together by hydrophobic interactions, with two β barrels at the center and coiled-coil α helices protruding down from them as if the complex were a jellyfish.
The lower "tentacles" of the jellyfish shape is the interface between prefoldin and chaperonin.
References
Proteins
Cell biology | Prefoldin | [
"Chemistry",
"Biology"
] | 968 | [
"Biomolecules by chemical classification",
"Proteins",
"Cell biology",
"Molecular biology"
] |
10,965,511 | https://en.wikipedia.org/wiki/Position%20sensor | A position sensor is a sensor that detects an object's position. A position sensor may indicate the absolute position of the object (its location) or its relative position (displacement) in terms of linear travel, rotational angle or three-dimensional space. Common types of position sensors include the following:
Capacitive displacement sensor
Eddy-current sensor
Hall effect sensor
Inductive sensor
Laser Doppler vibrometer (optical)
Linear variable differential transformer (LVDT)
Photodiode array
Piezo-electric transducer (piezo-electric)
Position encoders:
Absolute encoder
Incremental encoder
Linear encoder
Rotary encoder
Potentiometer
Proximity sensor (optical)
String potentiometer (also known as a string encoder or cable position transducer)
Ultrasonic sensor
See also
List of length, distance, or range measuring devices
Positioning system
Literature
David S. Nyce: Linear Position Sensors: Theory and Application, New Jersey, John Wiley & Sons Inc. (2004)
Measuring instruments | Position sensor | [
"Technology",
"Engineering"
] | 217 | [
"Measuring instruments"
] |
10,965,841 | https://en.wikipedia.org/wiki/DeSanctis%E2%80%93Cacchione%20syndrome | DeSanctis–Cacchione syndrome is a genetic disorder characterized by the skin and eye symptoms of xeroderma pigmentosum (XP) occurring in association with microcephaly, progressive intellectual disability, slowed growth and sexual development, deafness, choreoathetosis, ataxia and quadriparesis.
Genetics
In at least some cases, the genetic lesion involves a mutation in the CSB gene.
It can be associated with ERCC6.
Diagnosis
Treatment
See also
Xeroderma pigmentosum
List of cutaneous conditions
References
External links
Genodermatoses
DNA replication and repair-deficiency disorders
Rare syndromes
Syndromes affecting the skin
Syndromes affecting the eye
Syndromes affecting head size
Syndromes with intellectual disability
Syndromes affecting the nervous system
Syndromes affecting hearing | DeSanctis–Cacchione syndrome | [
"Biology"
] | 161 | [
"Senescence",
"DNA replication and repair-deficiency disorders"
] |
10,965,923 | https://en.wikipedia.org/wiki/David%20Vignoni | David Vignoni (born 1980) is an Italian graphic designer specialising in icon and digital product design.
Vignoni was born in Cesena, Italy. He is the creator of the Nuvola icon set, which has been used in many projects, including script.aculo.us and Prototype JavaScript Framework. He has designed icons for several web sites, including openSUSE. He has also designed icon sets for applications such as Flumotion, Samba 2000, and Strata (an insurance application), as well as for various KDE projects, such as Kontact and the KDE Edutainment Project. David is one of the founders of the Oxygen Project, which is the default icon theme on the KDE flagship desktop.
In 2005, Vignoni graduated from the University of Bologna with a B.S. degree in computer science. He has worked as a consultant for SuSE Linux, as an independent graphic designer, and at Meebo. Previously, he worked as lead designer at Hailo. He lives and works in London.
References
External links
David Vignoni's Web site
People from Cesena
Italian artists
Italian graphic designers
1980 births
Living people
KDE | David Vignoni | [
"Technology"
] | 245 | [
"Computing stubs",
"World Wide Web stubs"
] |
10,966,179 | https://en.wikipedia.org/wiki/Numerals%20in%20Unicode | A numeral (often called number in Unicode) is a character that denotes a number. The decimal number digits are used widely in various writing systems throughout the world, however the graphemes representing the decimal digits differ widely. Therefore Unicode includes 22 different sets of graphemes for the decimal digits, and also various decimal points, thousands separators, negative signs, etc. Unicode also includes several non-decimal numerals such as Aegean numerals, Roman numerals, counting rod numerals, Mayan numerals, Cuneiform numerals and ancient Greek numerals. There is also a large number of typographical variations of the Western Arabic numerals provided for specialized mathematical use and for compatibility with earlier character sets, such as ² or ②, and composite characters such as ½.
Numerals by numeric property
Grouped by their numerical property as used in a text, Unicode has four values for Numeric Type. First there is the "not a number" type. Then there are decimal-radix numbers, commonly used in Western style decimals (plain 0–9), there are numbers that are not part of a decimal system such as Roman numbers, and decimal numbers in typographic context, such as encircled numbers. Not noted is a numbering like "A. B. C." for chapter numbering.
Hexadecimal digits
Hexadecimal digits in Unicode are not separate characters; existing letters and numbers are used. These characters have marked Character properties Hex_digit=Yes, and ASCII_Hex_digit=Yes when appropriate.
Numerals by script
Hindu–Arabic numerals
The Hindu–Arabic numeral system involves ten digits representing 0–9. Unicode includes the Western Arabic numerals in the Basic Latin (or ASCII derived) block. The digits are repeated in several other scripts: Eastern Arabic, Balinese, Bengali, Devanagari, Ethiopic, Gujarati, Gurmukhi, Khmer, Lao, Limbu, Malayalam, Mongolian, Myanmar, New Tai Lue, Nko, Oriya, Telugu, Thai, Tibetan, Osmanya. Unicode includes a numeric value property for each digit to assist in collation and other text processing operations. However, there is no mapping between the various related digits.
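The per-digit numeric value property can be read with Python's standard unicodedata module (a sketch; the sample digits below are chosen arbitrarily for illustration):

```python
import unicodedata

# The digit "four" from several scripts: Western Arabic, Eastern Arabic,
# Devanagari, Bengali, and Thai. Each is a distinct code point, but all
# carry the same Unicode numeric value.
for ch in "4\u0664\u096A\u09EA\u0E54":
    print(f"U+{ord(ch):04X}", unicodedata.name(ch), unicodedata.numeric(ch))
# Every line ends with 4.0; the code points themselves are not mapped
# to one another, as noted above.
```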
Although Arabic is written from right to left, while English is written left to right, in both languages numbers are written with the most significant digit on the left and the least significant on the right.
Fractions
The fraction slash character (U+2044) allows authors using Unicode to compose any arbitrary fraction along with the decimal digits. This was intended to instruct font rendering to make the surrounding digits smaller and raise them on the left and lower them on the right, but this is rarely implemented. (A workaround is to use the super/subscript characters described below, but only Arabic numerals are available.) Unicode also includes a handful of vulgar fractions as compatibility characters, but discourages their use.
Decimal fractions
Several characters in Unicode can serve as a decimal separator depending on the locale. Decimal fractions are represented in text as a sequence of decimal digit numerals with a decimal separator separating the whole-number portion from the fractional portion. For example, the decimal fraction for ¼ is expressed as zero-point-two-five ("0.25"). Unicode has no dedicated general decimal separator but unifies the decimal separator function with other punctuation characters. So the "." used in "0.25" is the same period character (U+002E) used to end the sentence. However, cultures vary in the glyph or grapheme used for a decimal separator. So in some locales, the comma (U+002C) may be used instead: "0,25". Still other locales use a space (or non-breaking space) for "0 25". The Arabic writing system includes a dedicated character for a decimal separator that looks much like a comma "٫" (U+066B) which when combined with the Arabic digits to express one-quarter appears as: "٠٫٢٥".
Characters for mathematical constants
Currently, three Unicode characters semantically represent mathematical constants: , the , and (of unknown significance). Other mathematical constants can be represented using characters that have multiple semantic uses. For example, although Unicode includes a character for natural exponent ℯ (U+212F) its UCS canonical name derives from its glyph: ; and the mathematical constant π, 3.141592.., is represented by .
Rich text and other compatibility numerals
The Western Arabic numerals also appear among the compatibility characters as rich text variant forms including bold, double-struck, monospace, sans-serif and sans-serif bold, along with fullwidth variants for legacy vertical text support.
Rich text parenthesized, circled and other variants are also included in the blocks Enclosed CJK Letters and Months; Enclosed Alphanumerics, Superscripts and Subscripts; Number Forms; and Dingbats.
Suzhou (huāmǎ/Sūzhōu mǎzi) numerals
The huāmǎ ()/Sūzhōu mǎzi () system is a variation of the rod numeral system. Rod numerals are closely related to the counting rods and the abacus, which is why the numeric symbols for 1, 2, 3, 6, 7 and 8 in the huāmǎ system are represented in a similar way as on the abacus. Nowadays, the huāmǎ system is only used for displaying prices in Chinese markets or on traditional handwritten invoices.
The digits of the Suzhou numerals are in the CJK Symbols and Punctuation block at U+3021–U+3029, U+3007, U+5341, U+5344, and U+5345. In Unicode 3.0 these characters are incorrectly called Hangzhou style numerals. In Unicode 4.0, an erratum was added which stated: All references to "Hangzhou" in the Unicode standard have been corrected to "Suzhou" except for the character names themselves, which cannot be changed once assigned, according to the Unicode Stability Policy. (This policy allows software to use the names as unique identifiers.)
Japanese and Korean numerals
Ancient Greek numerals
Unicode provides support for several variants of Greek numerals, assigned to the Supplementary Multilingual Plane from U+10140 through U+1018F.
Attic numerals were used by ancient Greeks, possibly from the 7th century BC. They were also known as Herodianic numerals because they were first described in a 2nd-century manuscript by Herodian. They are also known as acrophonic numerals because all of the symbols used derive from the first letters of the words that the symbols represent: 'one', 'five', 'ten', 'hundred', 'thousand' and 'ten thousand'. See Greek numerals and acrophony.
Roman numerals
Roman numerals originated in ancient Rome, adapted from Etruscan numerals. The system used in classical antiquity was slightly modified in the Middle Ages to produce the system we use today. It is based on certain letters which are given values as numerals.
Roman numerals are commonly used today in numbered lists (in outline format), clockfaces, pages preceding the main body of a book, chord triads in music analysis (Roman numeral analysis), the numbering of movie and video game sequels, book publication dates, successive political leaders or children with identical names, and the numbering of some sport events, such as the Olympic Games or the Super Bowl.
Unicode has a number of characters specifically designated as Roman numerals, as part of the Number Forms range from U+2160 to U+2188. This range includes both upper- and lowercase numerals, as well as pre-combined characters for numbers up to 12 (Ⅻ or XII). One reason for the existence of pre-combined numbers is to facilitate the setting of multiple-letter numbers (such as VIII) on a single horizontal line in Asian vertical text. The Unicode standard, however, includes special Roman numeral code points for compatibility only, stating that "[f]or most purposes, it is preferable to compose the Roman numerals from sequences of the appropriate Latin letters".
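A small illustration of these compatibility code points (a sketch using Python's unicodedata module; nothing here is drawn from the article itself):

```python
import unicodedata

twelve = "\u216B"                               # ROMAN NUMERAL TWELVE (Ⅻ)
print(unicodedata.numeric(twelve))              # 12.0
print(unicodedata.normalize("NFKC", twelve))    # 'XII': compatibility normalization
                                                # rewrites it as Latin letters,
                                                # matching the composition advice above.
```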
Additionally, characters exist for archaic forms of 1000, 5000, 10,000, large reversed C (Ɔ), late 6 (ↅ, similar to Greek Stigma: Ϛ), early 50 (ↆ, similar to down arrow ↓⫝⊥), 50,000, and 100,000. The small reversed c, ↄ, is not intended to be used in Roman numerals, but as lower case Claudian letter Ↄ.
If using blackletter or script typefaces, Roman numerals are set in Roman type. Such typefaces may contain Roman numerals matching the style of the typeface in the Unicode range U+2160–217F; if they don't exist, a matching Antiqua typeface is used for Roman numerals.
Unicode has characters for Roman fractions in the Ancient Symbols block: sextans, uncia, semuncia, sextula, dimidia sextula, siliqua, and as.
Counting rod numerals
Counting rod numerals are included in their own block in the Supplementary Multilingual Plane (SMP) as of Unicode 5.0. There are nine "horizontal" digits (U+1D360 to U+1D368) and nine "vertical" digits (U+1D369 to U+1D371), the horizontal digits are used for odd powers of ten and the vertical digits for even powers of ten. Zero should be represented by U+3007 (〇, ideographic number zero) and the negative sign should be represented by U+20E5 (combining reverse solidus overlay). This block also contains other counting-rod-like symbols, such as the well-known tally mark for 5 . As these were recently added to the character set and are not in the BMP, font support may still be limited.
See also
Number Forms (Unicode block)
References
Unicode
Numerals | Numerals in Unicode | [
"Mathematics"
] | 2,162 | [
"Numeral systems",
"Numerals"
] |
10,966,272 | https://en.wikipedia.org/wiki/Sting%20%28fixture%29 | In experimental fluid mechanics, a sting is a test fixture on which models are mounted for testing, e.g. in a wind tunnel. A sting is usually a long shaft attaching to the downstream end of the model so that it does not much disturb the flow over the model. The rear end of a sting usually
has a conical fairing blending into the (wind tunnel) model support structure.
For minimum aerodynamic interference a sting should be as long as possible and have as small a diameter as possible, within the structural safety limits. The critical length of a sting (beyond which its influence on the flow around the model is small) is mostly dependent on the Reynolds number. If the flow at the rear end of a model (the model base) is laminar, the critical sting length can be as much as 12–15 base diameters. If the flow at the model base is turbulent, the critical sting length reduces to 3–5 model base diameters. The same source also suggests a sting diameter of no more than about 30% of the model base diameter. However, this may not be possible in wind tunnels with high dynamic pressures, because large aerodynamic loads would cause unacceptably large deflections and/or stresses in the sting. Shorter stings of larger relative diameters must be used in such cases.
A good rule of thumb is that, for acceptably low and test-conditions-independent aerodynamic interference in a high-Reynolds-number, high-dynamic-pressure wind tunnel, a sting should have a diameter "d" not larger than 30% to 50% of the model base diameter "D" and should have a length "L" of at least three model base diameters (e.g. as specified for the AGARD-C calibration model); see figure.
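A minimal sketch of that rule of thumb as a sizing check (the function and its thresholds simply restate the 30–50% diameter and three-base-diameter guidelines quoted above; it is illustrative, not a standard tool):

```python
def sting_meets_rule_of_thumb(d, D, L, max_diameter_ratio=0.5):
    """d: sting diameter, D: model base diameter, L: sting length (same units)."""
    return d <= max_diameter_ratio * D and L >= 3 * D

print(sting_meets_rule_of_thumb(d=0.03, D=0.10, L=0.35))   # True
print(sting_meets_rule_of_thumb(d=0.07, D=0.10, L=0.20))   # False: too thick, too short
```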
If the test object (model) is to be placed at high angles of attack relative to the airstream (i.e. at an attitude beyond the operating range of the model support mechanism), a bent sting can be used, see figure. Bent stings usually produce higher aerodynamic interference than straight stings. If the test object (model) has a "boattail" rear end without a well-defined base through which a sting shaft can enter the model, a so-called Z-sting can be used, having a form reminiscent of the Latin letter "Z". The part of the sting entering the model is a thin aerodynamically shaped blade so as to minimize disturbance of the flow; see figure.
Stings often attach, at the front end, to internal wind tunnel balances to measure the forces on the model. Therefore, most stings have a central bore through which the cables from a balance or other in-model instrumentation can be conducted without exposure to the airflow.
When a model is mounted on a wind tunnel balance attached to a sting, care must be taken that no parts of the model touch the sting during a wind tunnel test; the only support of the model must be through the balance.
See also
Wind tunnel
Reynolds number
References
Fluid mechanics | Sting (fixture) | [
"Engineering"
] | 616 | [
"Civil engineering",
"Fluid mechanics"
] |
10,966,810 | https://en.wikipedia.org/wiki/Grundy%27s%20game | Grundy's game is a two-player mathematical game of strategy. The starting configuration is a single heap of objects, and the two players take turn splitting a single heap into two heaps of different sizes. The game ends when only heaps of size two and smaller remain, none of which can be split unequally. The game is usually played as a normal play game, which means that the last person who can make an allowed move wins.
Illustration
A normal play game starting with a single heap of 8 is a win for the first player provided they start by splitting the heap into heaps of 7 and 1:
player 1: 8 → 7+1
Player 2 now has three choices: splitting the 7-heap into 6 + 1, 5 + 2, or 4 + 3. In each of these cases, player 1 can ensure that on the next move he hands back to his opponent a heap of size 4 plus heaps of size 2 and smaller:
If player 2 splits the 7-heap into 6 + 1: player 2: 7+1 → 6+1+1; player 1: 6+1+1 → 4+2+1+1
If player 2 splits the 7-heap into 5 + 2: player 2: 7+1 → 5+2+1; player 1: 5+2+1 → 4+1+2+1
If player 2 splits the 7-heap into 4 + 3: player 2: 7+1 → 4+3+1; player 1: 4+3+1 → 4+2+1+1
Now player 2 has to split the 4-heap into 3 + 1, and player 1 subsequently splits the 3-heap into 2 + 1:
player 2: 4+2+1+1 → 3+1+2+1+1
player 1: 3+1+2+1+1 → 2+1+1+2+1+1
player 2 has no moves left and loses
Mathematical theory
The game can be analysed using the Sprague–Grundy theorem. This requires the heap sizes in the game to be mapped onto equivalent nim heap sizes. This mapping is captured in the On-Line Encyclopedia of Integer Sequences as :
Heap size : 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ...
Equivalent Nim heap : 0 0 0 1 0 2 1 0 2 1 0 2 1 3 2 1 3 2 4 3 0 ...
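The same mapping can be reproduced with a short mex computation (a sketch, not code from the article; splitting a heap produces two heaps, so a move's value is the XOR of the two resulting heaps' values):

```python
# Sprague–Grundy values for Grundy's game: a heap of size n may be split
# into unequal parts a + (n - a); the resulting position's value is the
# nim-sum (XOR) of the values of the two parts.
def grundy_values(limit):
    g = [0] * (limit + 1)              # heaps of size 0, 1, 2 have no moves
    for n in range(3, limit + 1):
        reachable = {g[a] ^ g[n - a] for a in range(1, (n + 1) // 2)}
        mex = 0
        while mex in reachable:        # smallest non-negative value not reachable
            mex += 1
        g[n] = mex
    return g

print(grundy_values(20))
# [0, 0, 0, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 3, 2, 1, 3, 2, 4, 3, 0]
```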
Using this mapping, the strategy for playing the game Nim can also be used for Grundy's game. Whether the sequence of nim-values of Grundy's game ever becomes periodic is an unsolved problem. Elwyn Berlekamp, John Horton Conway and Richard Guy have conjectured that the sequence does become periodic eventually, but despite the calculation of the first 2³⁵ values by Achim Flammenkamp, the question has not been resolved.
See also
Nim
Sprague–Grundy theorem
Wythoff's game
Subtract a square
References
External links
Grundy's game on MathWorld
Sprague-Grundy values for Grundy's Game by A. Flammenkamp
Mathematical games
Combinatorial game theory | Grundy's game | [
"Mathematics"
] | 635 | [
"Mathematical games",
"Recreational mathematics",
"Combinatorics",
"Game theory",
"Combinatorial game theory"
] |
10,967,556 | https://en.wikipedia.org/wiki/Tomotherapy | Tomotherapy is a type of radiation therapy treatment machine. In tomotherapy a thin radiation beam is modulated as it rotates around the patient, while they are moved through the bore of the machine. The name comes from the use of a strip-shaped beam, so that only one “slice” (Greek prefix “tomo-”) of the target is exposed at any one time by the radiation. The external appearance of the system and movement of the radiation source and patient can be considered analogous to a CT scanner (computed tomography), which uses lower doses of radiation for imaging. Like a conventional machine used for X-ray external beam radiotherapy (often referred to as a linear accelerator or linac, their main component), it [the tomotherapy machine] generates the radiation beam, but the external appearance of the machine, patient positioning, and treatment delivery differ. Conventional linacs do not work on a slice-by-slice basis but typically have a large area beam which can also be resized and modulated.
General principles
The treatment field's length (the width of the radiation slice) is adjustable using collimator jaws. In static-jaw delivery, the field length remains constant during a treatment. In dynamic-jaw delivery, the field length changes so that it begins and ends at its minimum setting.
Tomotherapy treatment times vary compared to normal radiation therapy treatment times. Tomotherapy treatment times can be as low as 6.5 minutes for common prostate treatment, excluding extra time for imaging. Modern tomotherapy and conventional linac systems incorporate one or both of megavoltage X-ray or kilovoltage X-ray imaging systems, enabling image-guided radiation therapy (IGRT). In tomotherapy, images are acquired in a very similar manner to a CT scanner, thanks to their closely related design.
There are few head-to-head comparisons of tomotherapy and other IMRT techniques, however there is some evidence that a conventional linac using VMAT can provide faster treatment whereas tomotherapy is better able to spare surrounding healthy tissue while delivering a uniform dose.
Helical delivery
In helical tomotherapy, the linac rotates on its gantry at a constant speed while the beam is delivered; so that from the patient's perspective, the shape traced out by the linac is helical.
While helical tomotherapy can treat very long volumes without a need to abut fields in the longitudinal direction, it does display a distinct artifact due to "thread effect" when treating non-central tumors. Thread effect can be suppressed during planning through good pitch selection.
Fixed-angle delivery
Fixed-angle tomotherapy uses multiple tomotherapy beams, each delivered from a separate fixed gantry angle, in which only the couch moves during beam delivery. This is branded as TomoDirect, but has also been called topotherapy.
The technology enables fixed beam treatments by moving the patient through the machine bore while maintaining specified beam angles.
Clinical considerations
Lung cancer, head and neck tumors, breast cancer, prostate cancer, stereotactic radiosurgery (SRS) and stereotactic body radiotherapy (SBRT) are some examples of treatments commonly performed using tomotherapy.
In general, radiation therapy (or radiotherapy) has developed with a strong reliance on homogeneity of dose throughout the tumor. Tomotherapy embodies the sequential delivery of radiation to different parts of the tumor which raises two important issues. First, this method is known as "field matching" and brings with it the possibility of a less-than-perfect match between two adjacent fields with a resultant hot and/or cold spot within the tumor. The second issue is that if the patient or tumor moves during this sequential delivery, then again, a hot or cold spot will result. The first problem is reduced by use of a helical motion, as in spiral computed tomography.
Some research has suggested tomotherapy provides more conformal treatment plans and decreased acute toxicity.
Non-helical static beam techniques such as IMRT and TomoDirect are well suited to whole breast radiation therapy. These treatment modes avoid the low-dose integral splay and long treatment times associated with helical approaches by confining dose delivery to tangential angles.
This risk is accentuated in younger patients with early-stage breast cancer, where cure rates are high and life expectancy is substantial.
Static beam angle approaches aim to maximize the therapeutic ratio by ensuring that the tumor control probability (TCP) significantly outweighs the associated normal tissue complication probability (NTCP).
History
The tomotherapy technique was developed in the early 1990s at the University of Wisconsin–Madison by Professor Thomas Rockwell Mackie and Paul Reckwerdt. A small megavoltage x-ray source was mounted in a similar fashion to a CT x-ray source, and the geometry provided the opportunity to provide CT images of the body in the treatment setup position. Although original plans were to include kilovoltage CT imaging, current models use megavoltage energies. With this combination, the unit was one of the first devices capable of providing modern image-guided radiation therapy (IGRT).
The first implementation of tomotherapy was the Corvus system developed by Nomos Corporation, with the first patient treated in April 1994. This was the first commercial system for planning and delivering intensity modulated radiation therapy (IMRT). The original system, designed solely for use in the brain, incorporated a rigid skull-based fixation system to prevent patient motion between the delivery of each slice of radiation. But some users eschewed the fixation system and applied the technique to tumors in many different parts of the body.
Mobile tomotherapy
Due to their internal shielding and small footprint, TomoTherapy Hi-Art and TomoTherapy TomoHD treatment machines were the only high energy radiotherapy treatment machines used in relocatable radiotherapy treatment suites. Two different types of suites were available: TomoMobile developed by TomoTherapy Inc. which was a moveable truck; and Pioneer, developed by UK-based Oncology Systems Limited. The latter was developed to meet the requirements of UK and European transport law requirements and was a contained unit placed on a concrete pad, delivering radiotherapy treatments in less than five weeks.
See also
Radiation therapy
Radiosurgery
References
External links
Radiation therapy procedures
Medical physics | Tomotherapy | [
"Physics"
] | 1,338 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
10,967,617 | https://en.wikipedia.org/wiki/Bissulfosuccinimidyl%20suberate | Bissulfosuccinimidyl suberate (BS3) is a crosslinker used in biological research. It is a water-soluble version of disuccinimidyl suberate.
Crosslinkers
Crosslinkers are chemical reagents that play a crucial role in the preparation of conjugates used in biological research particularly immuno-technologies and protein studies. Crosslinkers are designed to covalently interact with molecules of interest, resulting in conjugation. A spacer arm, generally consisting of several atoms, separates the two molecules, and the nature and length of this spacer is important to consider when designing an assay involving the selected crosslinker. Bissulfosuccinimidyl suberate is an example of a homobifunctional crosslinker.
Characteristics
Water-soluble: BS3 is hydrophilic due to its terminal sulfonate substituents and as a result dissociates in water, eliminating the need to use organic solvents, which interfere with protein structure and function. Because organic solvents need not be used when BS3 is used as the crosslinker, it is ideal for investigations into protein structure and function under physiologic conditions.
Non-cleavable: The BS3 crosslinker has an 8-atom spacer is non-cleavable and the molecule is not cell membrane permeable. BS3 binds irreversibly to its conjugate molecules, meaning that once BS3 creates covalent linkages to its target molecules, those associations are not easily broken.
Membrane impermeable: Since BS3 is a charged molecule, it cannot freely pass through cellular membranes which makes it an ideal crosslinker for cell surface proteins.
Homobifunctional: BS3 is a homobifunctional crosslinker in that it has two identical reactive groups, i.e. the N-hydroxysulfosuccinimide (sulfo-NHS) esters, and only one step is necessary to establish crosslinking between conjugate molecules.
Amine reactive: BS3 is amine-reactive in that its N-hydroxysulfosuccinimide (NHS) esters at each end react specifically with primary amines to form stable amide bonds in a nucleophilic acyl substitution-type reaction in which the N-hydroxysulfosuccinimide acts as the leaving group. BS3 is particularly useful in protein-related applications in that it can react with the primary amines on the side chain of lysine residues and the N-terminus of polypeptide chains. This crosslinker can also be used to stabilize protein-protein interactions for further analysis by immunoprecipitation or crosslinking mass spectrometry.
Deuterated BS3
The deuterated crosslinker bis(sulfosuccinimidyl) 2,2,7,7-suberate-d4 is the "heavy" BS3 crosslinking agent that contains 4 deuterium atoms. When used in mass spectrometry studies, BS3-d4 provides a 4-dalton mass shift compared to proteins crosslinked with the non-deuterated analog (BS3-d0). Thus, "heavy" and "light" crosslinker analogs can be used for isotopic labeling of proteins and peptides in mass spectrometry research applications.
Applications
Cell-surface receptor-ligand studies
Crosslinking biomolecules on cells
Fixation of protein complexes prior to protein interaction analysis
Disuccinimidyl suberate
Disuccinimidyl suberate (DSS) is the non-water-soluble analog of BS3. DSS and BS3 express the same crosslinking ability toward primary amines. The major structural difference between these two molecules is that DSS does not contain the sulfonate substituents at either end of the molecule, and it is this difference that is responsible for the uncharged, non-polar nature of the DSS molecule. Due to the hydrophobic nature of this crosslinker it must be dissolved in an organic solvent such as dimethylsulfoxide before being added to an aqueous sample. Because of the ability of DSS to cross cell membranes, it is best suited for applications where intracellular crosslinking is needed.
References
Sulfonic acids
Succinimides | Bissulfosuccinimidyl suberate | [
"Chemistry"
] | 920 | [
"Functional groups",
"Sulfonic acids"
] |
10,967,782 | https://en.wikipedia.org/wiki/TICAM1 | TIR domain containing adaptor molecule 1 (TICAM1; formerly known as TIR-domain-containing adapter-inducing interferon-β or TRIF) is an adapter in responding to activation of toll-like receptors (TLRs). It mediates the more delayed of the two TLR-associated signaling cascades; the other is dependent upon the MyD88 adapter.
Toll-like receptors (TLRs) recognize specific components of microbial invaders and activate an immune response to these pathogens. After these receptors recognize highly conserved pathogenic patterns, a downstream signaling cascade is activated in order to stimulate the release of inflammatory cytokines and chemokines as well as to upregulate the expression of immune cells. All TLRs have a TIR domain that initiates the signaling cascade through TIR adapters. Adapters are platforms that organize downstream signaling cascades leading to a specific cellular response after exposure to a given pathogen.
Structure
TICAM1 is primarily active in the spleen and is often regulated when MyD88 is deficient in the liver, indicating organ-specific regulation of signaling pathways. Curiously, there is a lack of redundancy within the TLR4 signaling pathway, which leads to microbial evasion of the host immune response when mutations occur within intermediates of the pathway. Three TRAF-binding motifs present in the amino-terminal region of TICAM1 are necessary for association with TRAF6. Destruction of these motifs reduced the activation of NF-κB, a transcription factor that is also activated by the carboxy-terminal domain of TICAM1 in the upregulation of cytokines and co-stimulatory immune molecules. This domain recruits receptor-interacting protein (RIP1) and RIP3 through the RIP homotypic interaction motif. Cells deficient for the RIP1 gene display attenuated TLR3-mediated activation of NF-κB, indicating a role for RIP1 in downstream TICAM1 signaling, in contrast to other TLRs, which use IRAK proteins for the activation of NF-κB.
Areas of research
Investigations into the function of TICAM1 are of great significance to various fields of biomedical research. The pathogenesis of infectious disease, septic shock, tumor growth, and rheumatoid arthritis all have close ties with TLR signaling pathways, specifically that of TICAM1. Better understanding of the TICAM1 pathway will be therapeutically useful in the development of vaccines and treatments that can control associated inflammation and antiviral responses. Experiments involving wild-type and TICAM1-deficient mice are critical for understanding the coordinated responses of TLR pathways. It is necessary to study the coordinated effects of these pathways in order to understand the complex responses initiated by TICAM1.
References
External links
Toll-like receptor signaling pathway - Reference pathway (KO) from the KEGG website.
Immune system | TICAM1 | [
"Biology"
] | 603 | [
"Immune system",
"Organ systems"
] |
10,968,503 | https://en.wikipedia.org/wiki/Theming | Theming is the use of an overarching theme to create a holistic and integrated spatial organization of a consumer venue. A theme is a unifying or dominant idea or motif around which a new construction, style, or product is designed. It is the process of designing and constructing an object or space so that the particular subject or idea on which its style is based is made clear through the “synthesis of recognizable symbols with spatial forms.”
Theming is applied to an environment in order to create a memorable and meaningful experience for individuals or groups that visit the space, and can be expressed through the use of architecture, decor, signage, music and sound design, costuming, integrated technology, special effects, and other techniques. Theming is increasingly used to create physical spaces for "experiential marketing,” in which consumers can connect and interact with a brand.
Historically, most large-scale themed environments were primarily designed for entertainment, so the industry that creates these venues is known as themed entertainment. Examples include theme parks, water parks, museums, zoos, visitor centers, casinos, theme restaurants, and resorts. Theming is also increasingly used on smaller scale projects, including parties and product launches, to make these events more impactful.
Common themes include holidays (such as Halloween, Christmas, and Valentine’s Day), historical eras (such as the medieval period and the American frontier), cultures (such as Ancient Greece and Polynesian culture), and literary genres (such as fantasy and science fiction).
History
Theming has been used in public spaces at least as far back as the World’s Fairs of the Nineteenth Century. Professor Susan Ingram argues that the Great Exhibition of 1851 in London was, in effect, the world’s first theme park, utilizing theming to further its pro-industrial message, and reproducing foreign lands as spectacle. The World's Columbian Exposition of 1893 in Chicago introduced a separate midway, filled not only with attractions like the first Ferris Wheel, but also exhibits of cultures from around the world, including reproductions of villages from many nations. Themed simulations, including the Italian Capri Grotto and a Hawaiian volcano, were made possible for the first time by combining electricity, theatrical displays, and mechanical devices.
Themed dining can also trace its roots to the late 1800s. In the 1890s, at least three different elaborately themed nightclubs were operating in Paris, using themes of death, hell, and heaven. Soon after, in response to growing automobile usage, theming was applied to roadside architecture in the United States, and buildings themselves became advertisements aimed at passing motorists. Beginning in the 1920s, a number of novelty architecture buildings were constructed in and around Hollywood, including the famous Brown Derby restaurants and Bulldog Cafe. At the same time, the popular Egyptian Revival movement saw a range of buildings themed to Ancient Egypt, including everything from apartments to Grauman's Egyptian Theatre. Dozens of so-called “programmatic” or “mimetic” style structures were built in the Los Angeles area in the interwar years of 1918–1941, many of them restaurants, including buildings shaped like animals, food, and vehicles.
The forerunners to today’s themed mega-resorts were the El Rancho Vegas, opened in 1941, and the Last Frontier, opened in 1942, the first two properties on the Las Vegas Strip, both with Wild West themes. They were followed by even more elaborately themed hotels, including Caesars Palace in 1966 and Circus Circus in 1968.
The term “theme park” came into use circa 1960, likely to describe the many parks built across the United States and around the world following the successful opening of Disneyland in 1955. Though arguably not the first theme park, Disneyland was the first amusement park to combine multiple named areas (“lands”) with different themes. Theme parks have followed this pattern ever since, including some that have explicitly copied Disneyland’s design.
Theming has also been applied to retail environments. The advent of mass production led to the creation of large department stores in Europe in the late Nineteenth Century, and in an early example of theming, many used elaborate displays and windows to attract shoppers. In the 1980s, Banana Republic reinforced its brand as a travel and safari clothing company by theming its stores with Jeeps and jungle foliage. Beginning in 1987, the Disney Store chain used theming to popularize the idea of “retail-tainment,” creating a new category of entertainment stores, later copied by competitors. Today, as a response to the growth of online shopping, both individual stores and entire retail complexes like malls are turning to theming to attract customers to physical locations.
Scholarship
In 1997, urbanist Mark Gottdiener’s The Theming of America: Dreams, Visions, and Commercial Spaces was published. It is considered by many to be the first serious work to explore the origins, nature, and future of themed environments. A revised second edition was published in 2001.
Also in 1997, the Canadian Centre for Architecture in Montreal presented The Architecture of Reassurance: Designing the Disney Theme Parks, the first exhibition of some 350 objects from the archives of Walt Disney Imagineering, including plans, drawings, paintings and models for the Disney theme parks and their attractions. Professor Karal Ann Marling curated the exhibit and wrote the principal essay for the accompanying 224-page book, which also included essays by Disney Imagineer Marty Sklar, historian Neil Harris, art historian Erika Doss, geographer Yi-Fu Tuan, and critic Greil Marcus, as well as an interview with architect Frank Gehry.
Author Scott A. Lukas has written and edited numerous books and articles on themed entertainment, including his first, The Themed Space: Locating Culture, Nation, and Self, published in 2007. He teaches on the subject of theme parks and themed spaces, video games, popular film, and various forms of popular culture and remaking.
In 2010, Dean Peter Weishar and Professor George Head began work on a themed entertainment design program at Savannah College of Art and Design (SCAD) in Savannah, Georgia. In the fall of 2012, the SCAD School of Film, Digital Media and Performing Arts separated into two schools: the School of Digital Media and the School of Entertainment Arts, which began offering the nation’s first M.F.A. in themed entertainment design. Peter Weishar went on to create the Themed Experience Institute program at Florida State University.
Criticism
As perhaps the best known example of theming, the theme park Disneyland has often been a target for criticism. In his overwhelmingly negative review, Disneyland and Las Vegas, published in The Nation upon the opening of the park, writer Julian Halevy lamented:
Noted author Ray Bradbury responded with a letter to the editor, published three years later, titled Not Child Enough:
Another notable criticism of theming, again targeting Disneyland and its guests, can be found in French sociologist Jean Baudrillard’s 1981 treatise Simulacra and Simulation:
Along with Baudrillard, the Italian writer Umberto Eco helped develop the idea of “hyperreality,” or the world of "the Absolute Fake," in which imitations don't merely reproduce reality, but try to improve on it. Eco traveled to tourist attractions across the United States and wrote frequently about "America's obsession with simulacra and counterfeit reality.”
More recently, concerns have been raised about theming’s role in influencing consumers, sometimes subconsciously, as part of experiential retailing or “shoppertainment.” Kim Einhorm, director of Theme Traders, points out that “theming becomes an invisible form of branding.” Indeed, because theming has become such a commonplace aspect of so many people’s everyday lives, the public is often unwilling or unable to effectively understand its consequences. Some have even argued that the growth of experiential marketing is contributing to a degraded quality of life by eliminating “contemplative time.”
Industry
In 1920, following the dissolution of several earlier organizations, the National Association of Amusement Parks (NAAP) was formed. In 1934 it merged with the American Association of Pools and Beaches (AAPB) to form the National Association of Amusement Parks, Pools, and Beaches (NAAPPB). After several name changes, it became the International Association of Amusement Parks and Attractions in 1962. Today, IAAPA represents more than 5,300 members from more than 100 countries, including many companies and individuals in the themed entertainment industry.
The Themed Entertainment Association was founded in 1991 to organize small businesses in the industry. Today it has some 1,300 members, and divisions around the world. It hosts annual conferences and presents awards to individuals, parks, attractions, exhibits, and experiences.
A number of former employees of Walt Disney Imagineering, Disney’s in-house design and construction subsidiary, went on to form their own themed entertainment companies, some of which later collaborated with Disney on theme park projects. Gary Goddard left Imagineering to start what became the Goddard Group, now known as Legacy | GGE. Bill Novey oversaw the special effects for Epcot Center and Tokyo Disneyland before leaving to start Art & Technology, Inc. Bob Rogers left to found BRC Imagination Arts. Bran Ferren founded Associates & Ferren, which was acquired by Disney in 1993. Ferren eventually left Disney to start another company, Applied Minds, LLC. Phil Hettema worked for both Disney and Universal Creative before starting The Hettema Group.
Other companies serve organizations and individuals looking to incorporate theming into offices, product launch events, and even parties. Theme Traders is a London-based event theming company that serves this niche.
Examples
Theme Parks
Disneyland (Anaheim, California, US)
Europa-Park (Rust, Germany)
Lotte World (Seoul, South Korea)
Efteling (Kaatsheuvel, The Netherlands)
Theme Restaurants
Buns and Guns (Beirut, Lebanon)
Rainforest Cafe (Worldwide)
Rollercoaster Restaurant (Europe / Middle East)
Themed Hotels
Chimelong Hengqin Bay Hotel (Zhuhai, China)
Hard Days Night Hotel (Liverpool, England)
Luxor Las Vegas (Las Vegas, Nevada, US)
The Red Caboose Motel (Strasburg, Pennsylvania, US)
Themed Retail Brand Stores
American Girl Place (US / Canada / U.A.E.)
M&M's World (US / England / China)
See also
Theme park
Theme restaurant
Themed Entertainment Association
Themed Decor for Children's Spaces
References
Architectural design
Semiotics
Promotion and marketing communications | Theming | [
"Engineering"
] | 2,156 | [
"Design",
"Architectural design",
"Architecture"
] |
10,968,696 | https://en.wikipedia.org/wiki/CWP1 | Cell Wall Protein 1 (CWP1) is a gene of Saccharomyces cerevisiae and the Saccharomyces cerevisiae-Saccharomyces bayanus hybrid, Saccharomyces pastorianus. It is closely related to the CWP2 gene and produces a small protein associated with the budding scar, known as cwp1p.
References
Proteins
Saccharomyces cerevisiae genes | CWP1 | [
"Chemistry"
] | 95 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
10,968,708 | https://en.wikipedia.org/wiki/Zune%20%28widget%20toolkit%29 | Zune is an object-oriented GUI toolkit which is part of the AROS (AROS Research Operating System) project and nearly a clone, at both an API and look-and-feel level, of Magic User Interface (MUI), a well-known Amiga shareware product by Stefan Stuntz.
Zune is based on the BOOPSI system, the framework inherited from AmigaOS for object-oriented programming in C. Zune classes don't derive from existing BOOPSI gadget classes; instead, the Notify class (base class of the Zune hierarchy) derives from the BOOPSI root class.
Wanderer
Wanderer is a complete user interface based on the Zune widget set. The desktop is similar in design to the Workbench window manager, but incorporating desktop environment features rather than just window management. Its feature set draws heavily upon the Zune toolkit. It supports theming and various icon formats such as the Amiga planar style (AmigaOS 3.1), Iconcolor (AmigaOS 3.5) and PNG icons. The icons seen in Wanderer by default were designed by Adam Chodorowski, who based them largely upon the GNOME Gorilla icon set.
References
External links
Article about AROS on OSNews.com
Zune Application Development Manual
Amiga APIs
AROS software
Widget toolkits | Zune (widget toolkit) | [
"Technology"
] | 278 | [
"Operating system stubs",
"Computing stubs"
] |
10,969,154 | https://en.wikipedia.org/wiki/Bildung | Bildung ("education", "formation", etc.) refers to the German tradition of self-cultivation (as related to the German for: creation, image, shape), wherein philosophy and education are linked in a manner that refers to a process of both personal and cultural maturation. This maturation is a harmonization of the individual's mind and heart and a unification of selfhood and identity within the broader society, as evidenced by the literary tradition of the Bildungsroman.
In this sense, the process of harmonization of mind, heart, selfhood and identity is achieved through personal transformation, which presents a challenge to the individual's accepted beliefs. In Hegel's writings, the challenge of personal growth often involves an agonizing alienation from one's "natural consciousness" that leads to a reunification and development of the self. Similarly, although social unity requires well-formed institutions, it also requires a diversity of individuals with the freedom (in the positive sense of the term) to develop a wide-variety of talents and abilities and this requires personal agency. However, rather than an end state, both individual and social unification is a process that is driven by unrelenting negations.
In this sense, education involves the shaping of the human being with regard to their own humanity as well as their innate intellectual skills. So, the term refers to a process of becoming that can be related to a process of becoming within existentialism.
The term also corresponds to the Humboldtian model of higher education from the work of Prussian philosopher and educational administrator Wilhelm von Humboldt (1767–1835). Thus, in this context, the concept of education becomes a lifelong process of human development, rather than mere training in gaining certain external knowledge or skills. Such training in skills is known by the German words Erziehung and Ausbildung. Bildung, in contrast, is seen as a process wherein an individual's spiritual and cultural sensibilities, as well as life, personal and social skills, are in a process of continual expansion and growth. Bildung is seen as a way to become more free through higher self-reflection. Von Humboldt wrote with respect to Bildung in 1793/1794:
"Education [Bildung], truth and virtue" must be disseminated to such an extent that the "concept of mankind" takes on a great and dignified form in each individual (GS, I, p. 284). However, this shall be achieved personally by each individual, who must "absorb the great mass of material offered to him by the world around him and by his inner existence, using all the possibilities of his receptiveness; he must then reshape that material with all the energies of his own activity and appropriate it to himself so as to create an interaction between his own personality and nature in a most general, active and harmonious form".
Most explicitly in Hegel's writings, the Bildung tradition rejects the pre-Kantian metaphysics of being for a post-Kantian metaphysics of experience. Much of Hegel's writing was about the nature of education (both Bildung and Erziehung), reflecting his own role as a teacher and administrator in German secondary schools, and in his more general writings. More recently, Gadamer and McDowell have used the concept in their writings.
Bildung in Germany today
Professor of philosophy Julian Nida-Rümelin has challenged the idea that Bildung is no more than 'normal' education.
See also
Bildungsbürgertum
Coming of age
Cultural literacy
Etiquette
General knowledge
High culture
Manners
Prudence
Wisdom
References
Bruford, W.H. (1975). The German Tradition of Self-Cultivation: Bildung from Humboldt to Thomas Mann, London: Cambridge University Press.
Wood, Allen W. (1998). "Hegel on Education," Amélie O. Rorty (ed.) Philosophers on Education. London: Routledge, 1998.
Alves, Alexandre (2019). "The German Tradition of Self-Cultivation (Bildung) and its Historical Meaning", Educação & Realidade 44(2).https://www.scielo.br/j/edreal/a/HLLcPFh84zpNNdDrrvnBWvb/?lang=en
Education in Germany
Philosophy of education
German words and phrases
Personal development
Concepts in aesthetics | Bildung | [
"Biology"
] | 889 | [
"Personal development",
"Behavior",
"Human behavior"
] |
10,969,277 | https://en.wikipedia.org/wiki/Rank%20product | The rank product is a biologically motivated rank test for the detection of differentially expressed genes in replicated microarray experiments.
It is a simple non-parametric statistical method based on ranks of fold changes. In addition to its use in expression profiling, it can be used to combine ranked lists in various application domains, including proteomics, metabolomics, statistical meta-analysis, and general feature selection.
Calculation of the rank product
Given n genes and k replicates, let $r_{g,i}$ be the rank of gene g in the i-th replicate.
Compute the rank product via the geometric mean: $RP_g = \left( \prod_{i=1}^{k} r_{g,i} \right)^{1/k}$
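The calculation is straightforward to implement. The following is a minimal sketch in Python with NumPy (an illustrative implementation, not code taken from the rank product literature); `ranks` is assumed to be an n × k array whose entry [g, i] holds the rank of gene g in replicate i.

```python
import numpy as np

def rank_product(ranks: np.ndarray) -> np.ndarray:
    """Geometric mean of per-replicate ranks for each gene.

    ranks: array of shape (n_genes, k_replicates); entry [g, i] is the
    rank of gene g in replicate i (1 = most strongly regulated).
    """
    # Work in log space to avoid numerical overflow for large k or n.
    return np.exp(np.log(ranks).mean(axis=1))

# Example: 4 genes, 3 replicates (each column is a permutation of 1..4)
ranks = np.array([[1, 2, 1],
                  [3, 1, 2],
                  [2, 4, 4],
                  [4, 3, 3]])
print(rank_product(ranks))  # smallest value = most consistently top-ranked gene
```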
Determination of significance levels
Simple permutation-based estimation is used to determine how likely a given RP value or better is observed in a random experiment.
generate p permutations of k rank lists of length n.
calculate the rank products of the n genes in the p permutations.
count how many times the rank products of the genes in the permutations are smaller or equal to the observed rank product. Set c to this value.
calculate the average expected value for the rank product by: $\mathrm{E}(RP_g) = c(g)/p$.
calculate the percentage of false positives as: $\mathrm{pfp}(g) = \mathrm{E}(RP_g) / \mathrm{rank}(g)$, where $\mathrm{rank}(g)$ is the rank of gene g in a list of all n genes sorted by increasing $RP_g$ (see the code sketch after this list).
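A hedged sketch of this permutation procedure is given below (again in Python with NumPy; the helper `rank_product` is the one from the previous sketch, and names such as `rank_product_pfp` and the default of 1,000 permutations are illustrative choices rather than values prescribed by the method).

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_product(ranks):
    # geometric mean of the per-replicate ranks (see previous sketch)
    return np.exp(np.log(ranks).mean(axis=1))

def rank_product_pfp(ranks, n_perm=1000):
    """Permutation-based expected rank products and percentage of false positives."""
    n, k = ranks.shape
    observed = rank_product(ranks)

    # c[g]: total count of permuted rank products <= the observed RP of gene g
    c = np.zeros(n)
    for _ in range(n_perm):
        # one permutation: k independent rank lists of length n
        perm = np.column_stack([rng.permutation(n) + 1 for _ in range(k)])
        rp_perm = rank_product(perm)
        c += np.array([(rp_perm <= obs).sum() for obs in observed])

    expected = c / n_perm                     # average expected value E(RP_g) = c(g)/p
    order = observed.argsort().argsort() + 1  # rank of gene g when sorted by increasing RP_g
    pfp = expected / order                    # percentage of false positives
    return observed, expected, pfp
```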
Exact probability distribution and accurate approximation
Permutation re-sampling requires a computationally demanding number of permutations to get reliable estimates of the p-values for the most differentially expressed genes, if n is large. Eisinga, Breitling and Heskes (2013) provide the exact probability mass distribution of the rank product statistic. Calculation of the exact p-values offers a substantial improvement over permutation approximation, most significantly for the part of the distribution that rank product analysis is most interested in, i.e., the thin right tail. However, exact statistical significance of large rank products may take an unacceptably long time to compute. Heskes, Eisinga and Breitling (2014) provide a method to determine accurate approximate p-values of the rank product statistic in a computationally fast manner.
See also
Ranking
Schulze method
Comparison of electoral systems
Arrow's impossibility theorem
References
Breitling, R., Armengaud, P., Amtmann, A., and Herzyk, P. (2004) Rank Products: A simple, yet powerful, new method to detect differentially regulated genes in replicated microarray experiments, FEBS Letters, 573:83–92
Gene expression
Nonparametric statistics
Microarrays | Rank product | [
"Chemistry",
"Materials_science",
"Biology"
] | 518 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Gene expression",
"Bioinformatics",
"Molecular biology techniques",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
10,969,293 | https://en.wikipedia.org/wiki/Aquaphilia%20%28fetish%29 | Aquaphilia (literally "water lover" from the Latin aqua and Greek φιλειν (philein)) is a form of sexual fetishism that involves images of people swimming or posing underwater, and sexual activity in or under water.
References
Sexual fetishism | Aquaphilia (fetish) | [
"Biology"
] | 59 | [
"Behavior",
"Sexuality stubs",
"Sexuality"
] |
10,969,318 | https://en.wikipedia.org/wiki/Adjacency%20pairs | In linguistics, an adjacency pair is an example of conversational turn-taking. An adjacency pair is composed of two utterances by two speakers, one after the other. The speaking of the first utterance (the first-pair part, or the first turn) provokes a responding utterance (the second-pair part, or the second turn). Adjacency pairs are a component of pragmatic variation in the study of linguistics, and are considered primarily to be evident in the "interactional" function of pragmatics. Adjacency pairs exist in every language and vary in context and content among each, based on the cultural values held by speakers of the respective language. Oftentimes, they are contributed by speakers in an unconscious way, as they are an intrinsic part of the language spoken at-hand and are therefore embedded in speakers' understanding and use of the language. Thus, adjacency pairs may present their challenges when a person begins learning a language not native to them, as the cultural context and significance behind the adjacency pairs may not be evident to a speaker outside of the primary culture associated with the language.
Usage
Adjacency pairs are most commonly found in what Schegloff and Sacks described as a "single conversation," a unit of communication in which a single person speaks and a second person replies to the first speaker's utterance. While the turn-taking mechanism of single conversation uses silence to indicate that the next speaker's turn may begin, adjacency pairs are used to show that both speakers are finished with the conversation and that the ensuing silence does not require either of the speakers to take another turn.
The prevalent use of adjacency pairs in greetings and terminal exchanges demonstrates the adjacency pair's primary function of being an organizational unit of conversation. Without the signal and expected response of the two utterances, the silence of one speaker may never be filled by the second speaker, or may be filled incorrectly. Adjacency pairs also convey politeness and a willingness from one speaker to acknowledge the feelings of the second speaker. For example, in English the greeting "How are you?" is most commonly followed by "I'm doing well," thus creating an adjacency pair that demonstrates a polite interest from one speaker and a reciprocal acknowledgment of that interest from the other. Failure to reply politely to the greeting "How are you?" is usually a sign of bad manners or an unwillingness to converse, thus showing how an adjacency pair is necessary to establish a working rapport between two speakers.
Examples of pairs
Many actions in conversation are accomplished through established adjacency pairs, examples of which include:
call/beckon → response
"Waiter!" → "Yes, sir"
complaint → excuse/remedy
"It's awfully cold in here" → "Oh, sorry, I'll close the window"
compliment → acceptance/refusal
"I really like your new haircut!!" → "Oh, thanks"
degreeting → degreeting response
"See you!" → "Yeah, see you later!"
inform → acknowledge
"Your phone is over there" → "I know"
greeting → greeting response
"Hiya!" → "Oh, hi!"
offer → acceptance/rejection
"Would you like to visit the museum with me this evening?" → "I'd love to!"
question → answer
"What does this big red button do?" → "It causes two-thirds of the universe to implode"
request → acceptance/rejection
"Is it OK if I borrow this book?" → "I'd rather you didn't, it's due back at the library tomorrow"
Cultural significance
In some contexts, adjacency pairs may act as an indicator of varying demographic elements. For instance, restaurants are a setting notorious for the adjacency pair that presents a 'thank you', followed by some response eliciting acceptance of the gratitude displayed by the 'thank you'. A variety of responses to the statement 'thank you' have been recorded, and an English speaker's choice of response may imply details of his dialect and ultimately his place of origin. For instance, the employment of 'you're welcome' as the second half of this adjacency pair is most often indicative of an English speaker's residence within the United States. American English is the English dialect most highly associated with 'you're welcome' as a response to 'thank you', while other dialects of English (e.g. British and Irish English) may deem this phrase more formal than other options. The phrase 'my pleasure' is also most commonly associated with American English. British English speakers, in contrast, often omit a response to 'thanks' when it is presented to them.
Additionally, the "'thank you' followed by an acknowledgement of gratitude" adjacency pair may work as an indicator of socioeconomic status based on when, or in what context, an English speaker decides to propose a 'thank you' statement. Nine restaurants in Los Angeles, representative of three different socioeconomic backgrounds, were studied by scholar Larssyn Staley from the University of Zurich to create an understanding of this idea. The results indicated that the offer of gratitude displayed in the 'thank you' statement is evident most prominently in non-verbal acts of service (e.g. presenting the check after the meal or wiping the table between courses), particularly among customers dining at restaurants in the highest and mid-socioeconomic levels. However, no 'thank you' comment was offered for non-verbal service acts in the restaurant representing the lowest socioeconomic level. The category most susceptible to 'thank you' comments from customers in the restaurant with the lowest socioeconomic association was a verbal, explicit offer of service (e.g. "Is there anything more I can get for you?"), and this same category received the second highest quantity of 'thank you' offers in both the highest and mid-socioeconomic settings.
Three-part interchange
A three-part exchange occurs after the first speaker in a conversation adds an additional response to the former two utterances. The third part serves many conversational functions, including evaluation of the response, recognition of an acceptable response, and comprehension of the response. Additionally, the third part can initiate topic bounding, a technique used to end a conversational exchange. In face-to-face communication, the third utterance can also be expressed non-verbally. Conversational transcripts may leave out non-verbal third-part responses, falsely indicating that a third part is missing from the conversation.
Much like adjacency pairs themselves, the various types of three-part interchanges may be associated most closely with specific social settings and contextual situations. The evaluative three-part interchange (example displayed below) is commonly found in education settings, particularly within elementary education. The use of the evaluative three-part interchange has proven itself useful in such a setting as it helps teachers to establish themselves as both educators and "evaluators", in that the interchange grants them the opportunity to ask their students questions to which they already know the answers. In doing so, the teacher has the capacity to offer evaluation of a response, as he can determine whether or not an answer is acceptable based on his own understanding of what answer is "correct". Conversely, if a teacher were to ask a question for which he did not know the answer, he would lose the ability to contribute the third part of this interchange, as it would not be appropriate for him to determine the quality of the answer when he himself has no certainty of its validity. Thus, the evaluative three-part interchange is often indicative of a classroom setting where this educator-and-evaluator combination is frequently perpetuated.
Examples of three-part interchanges
Evaluative
"What is the capital of China?"
"Beijing."
"Good work."
Recognition of acceptability
"Where are you going?
"To the store."
"I'll come, too."
Comprehension
"Is he home yet?"
"No."
"Okay."
Topic bounding
"Can you look this over?"
"I'm busy."
"I'll ask you again later."
See also
Conversation analysis
Pragmatics
Speech act
References
External links
Adjacency pairs with (dis)preferred seconds
Short definition of an adjacency pair
Human communication | Adjacency pairs | [
"Biology"
] | 1,750 | [
"Human communication",
"Behavior",
"Human behavior"
] |
10,969,324 | https://en.wikipedia.org/wiki/Octanoyl-CoA | Octanoyl-coenzyme A is the endpoint of beta oxidation in peroxisomes. It is produced alongside acetyl-CoA and transferred to the mitochondria to be further oxidized into acetyl-CoA.
See also
Caprylic acid, the eight-carbon saturated fatty acid known by the systematic name octanoic acid.
References
Thioesters of coenzyme A | Octanoyl-CoA | [
"Chemistry",
"Biology"
] | 87 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
10,969,364 | https://en.wikipedia.org/wiki/Ruthenium%28IV%29%20oxide%20%28data%20page%29 | This page provides supplementary chemical data on ruthenium(IV) oxide.
Thermodynamic properties
Spectral data
Structure and properties data
Material Safety Data Sheet
The handling of this chemical may require notable safety precautions. Safety information can be found on the Material Safety Datasheet (MSDS) for this chemical or from a reliable source, such as SIRI.
References
Chemical data pages
Chemical data pages cleanup | Ruthenium(IV) oxide (data page) | [
"Chemistry"
] | 82 | [
"Chemical data pages",
"nan"
] |
10,969,498 | https://en.wikipedia.org/wiki/Quinaria | A quinaria (plural: quinariae) is a Roman unit of area, roughly equal to . Its primary use was to measure the cross-sectional area of pipes in Roman water distribution systems. A "one quinaria" pipe is in diameter.
In Roman times, there was considerable ambiguity regarding the origin of the name, and the actual value of a quinaria. According to Frontinus:
...Those who refer (the quinaria) to Vitruvius and the plumbers, declare that it was so named from the fact that a flat sheet of lead 5 digits wide, made up into a round pipe, forms this ajutage. But this is indefinite, because the plate, when made up into a round shape, will be extended on the exterior surface and contracted on the interior surface. The most probable explanation is that the quinaria received its name from having a diameter of 5/4 of a digit...
In other words, Vitruvius claimed that the name was derived from a pipe created from a flat sheet of lead "5 digits wide", roughly , but Frontinus contested the definitiveness of this because the exterior circumference of the resulting pipe would be larger than the interior circumference. According to Frontinus, the name and value is derived from a pipe having a diameter of "5/4 of a digit". Using Vitruvius' standard, the value of a quinaria is , and the resulting pipe would have a diameter of .
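As a rough, hedged check of the two standards (assuming a digitus of about 18.5 mm, i.e. one sixteenth of a roughly 296 mm Roman foot; these figures are illustrative rather than values quoted by Frontinus or Vitruvius):

```latex
% Frontinus: pipe diameter of 5/4 digit
d_F \approx \tfrac{5}{4}\times 18.5\ \text{mm} \approx 23.1\ \text{mm},
\qquad A_F = \tfrac{\pi d_F^{2}}{4} \approx 4.2\ \text{cm}^2
% Vitruvius: a lead sheet 5 digits wide rolled into a pipe, so the
% circumference is about 5 x 18.5 mm = 92.5 mm
d_V = \tfrac{92.5\ \text{mm}}{\pi} \approx 29.4\ \text{mm},
\qquad A_V = \tfrac{\pi d_V^{2}}{4} \approx 6.8\ \text{cm}^2
```

Under these illustrative assumptions the two definitions differ by roughly 60 percent in cross-sectional area, which shows why the ambiguity Frontinus describes mattered in practice.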
The importance of this measure was that water taxes in ancient Rome were based on the size of the supply pipe.
See also
Ancient Roman weights and measures
Water theft#Roman period
Notes and references
External links
The Quinaria, part of the Encyclopædia Romana
Units of area
Human-based units of measurement
Ancient Roman units of measurement | Quinaria | [
"Mathematics"
] | 383 | [
"Quantity",
"Units of area",
"Units of measurement"
] |
10,969,891 | https://en.wikipedia.org/wiki/Soci%C3%A9t%C3%A9%20d%27astronomie%20de%20Montr%C3%A9al | The Société d'astronomie de Montréal (Montreal Astronomy Society) is an astronomy club. It was founded in 1968 after the Montreal French Centre of the Royal Astronomical Society of Canada (founded in 1947) obtained a provincial charter.
For many years, it was the largest French-language astronomy club in Quebec, until the mid-1980s, when many other clubs in the Greater Montreal area started gaining members.
The Société has published for many years an Annuaire astronomique, an ephemeris book. Its members were also offered telescope mirror polishing courses in the Montreal Botanical Gardens meeting room, which the Société occupied until 1980, when it had to move.
Nowadays, the Société is based in the Villeray—Saint-Michel—Parc-Extension borough (district Saint-Michel). It organizes every year, together with the Club d'astronomie Orion de Saint-Timothée, the Concours annuel de fabricants de télescopes amateurs (CAFTA), a telescope-making contest.
President List
1968 : Philippe Mailloux,
1969-70 : André Aird,
1971-72 : Jacques Lebrun,
1973 : Henri Simard,
1974-76 : Jacques Dumas,
1977-78 : Henri Coïa,
1979-80 : Lucien Coallier,
1981 : Maurice Provencher,
1982 : Lucien Coallier,
1983 : Rolland Lacroix,
1984-85 : Pierre Bastien,
1986-88 : Marc-André Gélinas,
1989-90 : Jean-Pierre Urbain,
1991 : Patrice Gérin-Roze,
1992 : Marc-André Gélinas,
1993 : Pierre Paquette,
1994 : Lorraine Morin,
1995 : Maurice Provencher,
1996-97 : Marc Fortin,
1998-99 : François Chevrefils,
2000-02 : Patrice Scattolin,
2003 : Bruno Beaupré,
2004-06 : Hugues Lacombe,
2007 : Michel Boucher
References
External links
Official Site in French;
CAFTA in French.
Astronomy organizations
Amateur astronomy
Astronomy in Canada
Organizations based in Montreal
1968 establishments in Quebec
Scientific organizations established in 1968
1968 in Montreal | Société d'astronomie de Montréal | [
"Astronomy"
] | 436 | [
"Astronomy stubs",
"Astronomy organizations",
"Astronomy organization stubs"
] |
10,969,921 | https://en.wikipedia.org/wiki/Mizpah%20%28emotional%20bond%29 | Mizpah (מִצְפָּה miṣpāh, mitspah) is Hebrew for "watchtower". It is mentioned in the biblical story of Jacob and Laban, where a pile of stones marks an agreement between two people, with God as their watching witness.
Biblical narrative
Jacob had secretly fled the house of Laban, his father-in-law, in the middle of the night, taking flocks of animals, all his other assets, and his two wives and their children (the daughters and grandchildren of Laban) with him, intending never to return. Laban discovered this and pursued Jacob. After discussion, the two decided to formalize the separation.
Laban admitted that his daughters had voluntarily left, saying, "Yet what can I do today about these daughters of mine, or about the children they have borne?" (Genesis 31:43 (NIVUK)). He agreed to let Jacob go in peace, but exacted a promise from Jacob to never abuse his daughters or take additional wives (Genesis 31:50).
The two men then determined to erect a pile of stones, a figurative watchtower, called a mizpah, to commemorate this promise, even though no person was present other than the two men when it was made, for "God is witness between you and me". Both of the men also agreed that they would consider the mizpah a border between their respective territories, and that they would not pass the watchtower to visit one another "to do evil".
Usage
Since that time, the mizpah has come to connote an emotional bond between people who are separated (either physically or by death). Mizpah jewelry is often made in the form of a coin-shaped pendant cut in two with a zig-zag line bearing the words "The LORD watch between me and thee, when we are absent one from another". This is worn to signify the bond. Additionally, the word "mizpah" is often used as the name for a cemetery and can often be found on headstones in cemeteries and on other memorials:And [it was called] Mizpah (Watchtower); for he said, The Lord watch between me and thee, when we are absent one from another.
References
External links
NASB Version of the Covenant of Mizpah
Interpersonal relationships
Book of Genesis | Mizpah (emotional bond) | [
"Biology"
] | 493 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
10,970,590 | https://en.wikipedia.org/wiki/GemIdent | GemIdent is an interactive image recognition program that identifies regions of interest in images and photographs. It is specifically designed for images with few colors, where the objects of interest look alike with small variation. For example, color image segmentation of:
Oranges from a tree
Stained cells from microscopic images
GemIdent also packages data analysis tools to investigate spatial relationships among the objects identified.
History
GemIdent was developed at Stanford University by Adam Kapelner from June 2006 until January 2007 in the lab of Dr. Peter Lee under the tutelage of Professor Susan Holmes. The concept was inspired by data from Kohrt et al., who analyzed immune profiles of lymph nodes in breast cancer patients. Hence, GemIdent works well when identifying cells in IHC-stained tissue imaged via automated light microscopy when the nuclear background stain and membrane/cytoplasmic stain are well-defined. In 2008, it was adapted to support multispectral imaging techniques.
Methodology
GemIdent uses supervised learning to perform automated
identification of regions of interest in the images. Therefore, the user must do a substantial amount of work: first supplying the relevant colors, then pointing out examples of the objects or regions themselves as well as negatives (training set creation).
When a user clicks on a pixel, many scores are generated from the surrounding color information via Mahalanobis Ring Score attribute generation (read the JSS paper for a detailed exposition). These scores are then used to build a random forest machine-learning classifier, which then classifies the pixels in any given image.
After classification, there may be mistakes. The user can return to training and point out the specific mistakes and then reclassify. These training-classifying-retraining-reclassifying iterations (considered interactive boosting) can result in a highly accurate segmentation.
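The exact Mahalanobis Ring Score features are detailed in the JSS paper; the sketch below is a simplified, hypothetical stand-in (Python with NumPy and scikit-learn rather than GemIdent's actual Java code) that illustrates the overall workflow: per-pixel color features computed on rings around a clicked pixel, a random forest trained on the user's clicks, and classification of arbitrary pixels. Function names such as `ring_features` and the toy data are illustrative, not part of GemIdent.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ring_features(img, y, x, radii=(0, 2, 4, 6)):
    """Mean RGB color on concentric rings around pixel (y, x).

    A simplified stand-in for GemIdent's Mahalanobis Ring Score attributes.
    """
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - y) ** 2 + (xx - x) ** 2)
    feats = []
    for r in radii:
        ring = (dist >= r) & (dist < r + 2)
        feats.extend(img[ring].mean(axis=0))   # mean R, G, B on this ring
    return np.array(feats)

# Toy image and user-clicked training points: (row, col, label)
img = np.random.rand(64, 64, 3)
clicks = [(10, 12, 1), (30, 40, 0), (50, 22, 0), (20, 55, 1)]

X = np.array([ring_features(img, y, x) for y, x, _ in clicks])
labels = np.array([lab for _, _, lab in clicks])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Classify an arbitrary pixel; in GemIdent this step runs over every pixel,
# and misclassified examples are fed back as new training points.
print(clf.predict([ring_features(img, 33, 33)]))
```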
Recent applications
In 2010, Setiadi et al. analyzed histological sections of lymph nodes looking at spatial densities of B and T cells. "Cell numbers do not capture the full range of information encoded within tissues".
Source code
The Java source code is now open source under GPL2.
Examples
The raw photograph (left), a superimposed mask showing the pixel classification results (center), and finally the photograph is marked with the centroids of the object of interest - the oranges (right)
The raw microscopic image of a stained lymph node (left) from the Kohrt study, a superimposed mask showing the pixel classification results (center), and finally the image is marked with the centroids of the object of interest - the cancer nuclei (right)
This example illustrates GemIdent's ability to find multiple phenotypes in the same image: the raw microscopic image of a stained lymph node (top left) from the Kohrt study, a superimposed mask showing the pixel classification results (top right), and finally the image marked with the centroids of the objects of interest - the cancer nuclei (in green stars), the T-cells (in yellow stars), and non-specific background nuclei (in cyan stars).
The command-line data analysis and visualization interface in action analyzing results of a classification of a lymph node from the Kohrt study. The histogram displays the distribution of distances from T-cells to neighboring cancer cells. The binary image of cancer membrane is the result of a pixel-only classification. The open PDF document is the autogenerated report of the analysis which includes a thumbnail view of the entire lymph node, counts and Type I error rates for all phenotypes, as well as a transcript of the analyses performed.
References
External links
Computer vision software
Microscopy
Cell biology
Graphics software | GemIdent | [
"Chemistry",
"Biology"
] | 768 | [
"Cell biology",
"Microscopy"
] |
10,971,394 | https://en.wikipedia.org/wiki/Uncia%20%28unit%29 | The uncia (plural: unciae, lit. "a twelfth") was a Roman unit of length, weight, and volume. It survived as the Byzantine liquid ounce (οὐγγία, oungía) and the origin of the English inch, ounce, and fluid ounce.
The Roman inch was equal to one twelfth of a Roman foot (pes), which was standardized under Agrippa to about 0.97 inches or 24.6 millimeters.
The Roman ounce was one twelfth of a Roman pound.
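A quick consistency check (hedged; the roughly 296 mm figure for the Roman foot is a commonly cited modern estimate, not a value given in this article):

```latex
12 \times 24.6\ \text{mm} \approx 295\ \text{mm} \approx 1\ \text{Roman foot (pes)},
\qquad 24.6\ \text{mm} \approx 0.97\ \text{in}
```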
See also
Ancient Roman weights and measures
References
Units of length
Human-based units of measurement
Ancient Roman units of measurement | Uncia (unit) | [
"Mathematics"
] | 117 | [
"Quantity",
"Units of measurement",
"Units of length"
] |
10,971,432 | https://en.wikipedia.org/wiki/Methylglyoxal%20pathway | The methylglyoxal pathway is an offshoot of glycolysis found in some prokaryotes, which converts glucose into methylglyoxal and then into pyruvate. However, unlike glycolysis, the methylglyoxal pathway does not produce adenosine triphosphate (ATP). The pathway is named after the substrate methylglyoxal, which has three carbons and two carbonyl groups, one on the first carbon and one on the second. Methylglyoxal is, however, a reactive aldehyde that is very toxic to cells; it can inhibit growth in E. coli at millimolar concentrations. The excessive intake of glucose by a cell is the most important process for the activation of the methylglyoxal pathway.
The Methylglyoxal pathway
The methylglyoxal pathway is activated by increased cellular uptake of carbon-containing molecules such as glucose, glucose-6-phosphate, lactate, or glycerol. Methylglyoxal is formed from dihydroxyacetone phosphate (DHAP) by the enzyme methylglyoxal synthase, giving off a phosphate group.
Methylglyoxal is then converted into one of two products, either D-lactate or L-lactate. Methylglyoxal reductase and aldehyde dehydrogenase convert methylglyoxal into lactaldehyde and, eventually, L-lactate. If methylglyoxal enters the glyoxalase pathway, it is converted into lactoylglutathione and eventually D-lactate. Both D-lactate and L-lactate are then converted into pyruvate. The pyruvate that is created most often goes on to enter the Krebs cycle (Weber 711–13).
Enzymes and regulation
The potentially hazardous effects of methylglyoxal require regulation of the reactions with this substrate. Synthesis of methylglyoxal is regulated by levels of DHAP and phosphate concentrations. High concentrations of DHAP encourage methylglyoxal synthase to produce methylglyoxal, while high phosphate concentrations inhibit the enzyme, and therefore the production of more methylglyoxal. The enzyme triose phosphate isomerase affects the levels of DHAP by converting glyceraldehyde 3-phosphate (GAP) into DHAP. The usual pathway converting GAP to pyruvate starts with the enzyme glyceraldehyde 3-phosphate dehydrogenase (Weber 711–13). Low phosphate levels inhibit GAP dehydrogenase; GAP is instead converted into DHAP by triosephosphate isomerase. Again, increased levels of DHAP activate methylglyoxal synthase and methylglyoxal production (Weber 711–13).
The oscillation of Methylglyoxal concentration in feast concentrations
Jan Weber, Anke Kayser, and Ursula Rinas performed an experiment to test what happened to the methylglyoxal pathway when E. coli was in the presence of a constantly high concentration of glucose. The concentration of methylglyoxal increased until it reached 20 μmol, and then began to decrease once it reached this level. The decrease in the concentration of methylglyoxal was connected to a drop in respiratory activity. When respiratory activity increased, the concentration of methylglyoxal rose again until it reached 20 μmol (Weber 714–15).
Why does the Methylglyoxal pathway exist?
This pathway does not produce any ATP and does not replace glycolysis; it runs simultaneously with glycolysis and is only initiated by an increased concentration of sugar phosphates. One proposed purpose of the methylglyoxal pathway is to help relieve the stress of an elevated sugar phosphate concentration. Also, when methylglyoxal is formed from DHAP, an inorganic phosphate is given off, which can be used to replenish a low concentration of needed inorganic phosphate. The methylglyoxal pathway is a rather dangerous tactic, both because less energy is produced and because a toxic compound, methylglyoxal, is formed (Weber 715).
References
Weber, Jan, Anke Kayser, and Ursula Rinas. Metabolic Flux Analysis of Escherichia Coli In. Vers. 151: 707-716. 6 Dec. 2004. Microbiology. 10 Apr. 2007 <http://mic.sgmjournals.org/cgi/reprint/151/3/707>.
Saadat, D., Harrison, D.H.T. "Methylglyoxal Synthase From Escherichia Coli." RCSB Protein Data Base. 24 Apr. 2007. RCSB Protein Data Base. 25 Apr. 2007 <http://www.pdb.org/pdb/explore.do?structureId=1B93>.
"Methylglyoxal Synthase From Escherichia Coli." RCSB Protein Data Base. 24 Apr. 2007. RCSB Protein Data Base. 25 Apr. 2007 <http://www.pdb.org/pdb/explore.do?structureId=1B93>.
Yun, M., C.-G. Park, J.-Y. Kim, and H.-W. Park. "Structural Analysis of Glyceraldehyde 3-Phosphate Dehydrogenase from Escherichia coli: Direct Evidence for Substrate Binding and Cofactor-Induced Conformational Changes." RCSB Protein Data Base. 24 Apr. 2007. RCSB Protein Data Base. 30 Apr. 2007 <http://www.pdb.org/pdb/explore.do?structureId=1DC4>.
Cellular respiration | Methylglyoxal pathway | [
"Chemistry",
"Biology"
] | 1,187 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
341,598 | https://en.wikipedia.org/wiki/Cybercrime | Cybercrime encompasses a wide range of criminal activities that are carried out using digital devices and/or networks. These crimes involve the use of technology to commit fraud, identity theft, data breaches, computer viruses, scams, and other malicious acts. Cybercriminals exploit vulnerabilities in computer systems and networks to gain unauthorized access, steal sensitive information, disrupt services, and cause financial or reputational harm to individuals, organizations, and governments.
In 2000, the tenth United Nations Congress on the Prevention of Crime and the Treatment of Offenders classified cyber crimes into five categories: unauthorized access, damage to computer data or programs, sabotage to hinder the functioning of a computer system or network, unauthorized interception of data within a system or network, and computer espionage.
Internationally, both state and non-state actors engage in cybercrimes, including espionage, financial theft, and other cross-border crimes. Cybercrimes crossing international borders and involving the actions of at least one nation-state are sometimes referred to as cyberwarfare. Warren Buffett has described that cybercrime is the "number one problem with mankind", and that it "poses real risks to humanity".
The World Economic Forum's (WEF) 2020 Global Risks Report highlighted that organized cybercrime groups are joining forces to commit criminal activities online, while estimating the likelihood of their detection and prosecution to be less than 1 percent in the US. There are also many privacy concerns surrounding cybercrime when confidential information is intercepted or disclosed, legally or otherwise.
The World Economic Forum’s 2023 Global Risks Report ranked cybercrime as one of the top 10 risks facing the world today and for the next 10 years. If viewed as a nation state, cybercrime would count as the third largest economy in the world. In numbers, cybercrime is predicted to cause over 9 trillion US dollars in damages worldwide in 2024.
Classifications
Computer crime encompasses a broad range of activities, including computer fraud, financial crimes, scams, cybersex trafficking, and ad-fraud.
Computer fraud
Computer fraud is the act of using a computer to take or alter electronic data, or to gain unlawful use of a computer or system. Computer fraud that involves the use of the internet is also called internet fraud. The legal definition of computer fraud varies by jurisdiction, but typically involves accessing a computer without permission or authorization.
Forms of computer fraud include hacking into computers to alter information, distributing malicious code such as computer worms or viruses, installing malware or spyware to steal data, phishing, and advance-fee scams.
Other forms of fraud may be committed using computer systems, including bank fraud, carding, identity theft, extortion, and theft of classified information. These types of crimes often result in the loss of personal or financial information.
Fraud Factory
Fraud factories are large fraud organizations, usually involving cyber fraud and human trafficking operations.
Cyberterrorism
The term cyberterrorism refers to acts of terrorism committed through the use of cyberspace or computer resources. Acts of disruption of computer networks and personal computers through viruses, worms, phishing, malicious software, hardware, or programming scripts can all be forms of cyberterrorism.
Government officials and information technology (IT) security specialists have documented a significant increase in network problems and server scams since early 2001. In the United States there is an increasing concern from agencies such as the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA).
Cyberextortion
Cyberextortion occurs when a website, e-mail server, or computer system is subjected to or threatened with attacks by malicious hackers, often through denial-of-service attacks. Cyber extortionists demand money in return for promising to stop the attacks and provide "protection". According to the FBI, cyber extortionists are increasingly attacking corporate websites and networks, crippling their ability to operate, and demanding payments to restore their service. More than 20 cases are reported each month to the FBI, and many go unreported in order to keep the victim's name out of the public domain. Perpetrators often use a distributed denial-of-service attack. However, other cyberextortion techniques exist, such as doxing and bug poaching. An example of cyberextortion was the Sony Hack of 2014.
Ransomware
Ransomware is a type of malware used in cyberextortion to restrict access to files, sometimes threatening permanent data erasure unless a ransom is paid. Ransomware is a global issue, with more than 300 million attacks worldwide in 2021. According to the 2022 Unit 42 Ransomware Threat Report, in 2021 the average ransom demand in cases handled by Norton climbed 144 percent to $2.2 million, and there was an 85 percent increase in the number of victims who had their personal information shown on dark web information dumps. A loss of nearly $400 million in 2021 and 2022 is just one of the statistics showing the impact of ransomware attacks on everyday people.
Cybersex trafficking
Cybersex trafficking is the transportation of victims for such purposes as coerced prostitution or the live streaming of coerced sexual acts or rape on webcam. Victims are abducted, threatened, or deceived and transferred to "cybersex dens". The dens can be in any location where the cybersex traffickers have a computer, tablet, or phone with an internet connection. Perpetrators use social media networks, video conferences, dating pages, online chat rooms, apps, dark web sites, and other platforms. They use online payment systems and cryptocurrencies to hide their identities. Millions of reports of cybersex incidents are sent to authorities annually. New legislation and police procedures are needed to combat this type of cybercrime.
There are an estimated 6.3 million victims of cybersex trafficking, according to a recent report by the International Labour Organization. This number includes about 1.7 million child victims. An example of cybersex trafficking is the 2018–2020 Nth room case in South Korea.
Cyberwarfare
According to the U.S. Department of Defense, cyberspace has emerged as an arena for national-security threats through several recent events of geostrategic importance, including the attack on Estonia's infrastructure in 2007, allegedly by Russian hackers. In August 2008, Russia again allegedly conducted cyberattacks against Georgia. Fearing that such attacks may become a normal part of future warfare among nation-states, military commanders see a need to develop cyberspace operations.
Computers as a tool
When an individual is the target of cybercrime, the computer is often the tool rather than the target. These crimes, which typically exploit human weaknesses, usually do not require much technical expertise. These are the types of crimes which have existed for centuries in the offline world. Criminals have simply been given a tool that increases their pool of potential victims and makes them all the harder to trace and apprehend.
Crimes that use computer networks or devices to advance other ends include:
Fraud and identity theft (although this increasingly uses malware, hacking or phishing, making it an example of "computer as target" as well as "computer as tool")
Information warfare
Phishing scams
Spam
Propagation of illegal, obscene, or offensive content, including harassment and threats
The unsolicited sending of bulk email for commercial purposes (spam) is unlawful in some jurisdictions.
Phishing is mostly propagated via email. Phishing emails may contain links to other websites that are affected by malware. Or they may contain links to fake online banking or other websites used to steal private account information.
Obscene or offensive content
The content of websites and other electronic communications may be distasteful, obscene, or offensive for a variety of reasons. In some instances, it may be illegal. What content is unlawful varies greatly between countries, and even within nations. It is a sensitive area in which the courts can become involved in arbitrating between groups with strong beliefs.
One area of internet pornography that has been the target of the strongest efforts at curtailment is child pornography, which is illegal in most jurisdictions in the world.
Ad-fraud
Ad-frauds are particularly popular among cybercriminals, as such frauds are lucrative and unlikely to be prosecuted. Jean-Loup Richet, a professor at the Sorbonne Business School, classified the large variety of ad-frauds committed by cybercriminals into three categories: identity fraud, attribution fraud, and ad-fraud services.
Identity fraud aims to impersonate real users and inflate audience numbers. The techniques used for identity fraud include traffic from bots (coming from a hosting company, a data center, or compromised devices); cookie stuffing; falsification of user characteristics, such as location and browser type; fake social traffic (misleading users on social networks into visiting the advertised website); and fake social media accounts that make a bot appear legitimate.
Attribution fraud impersonates the activities of real users, such as clicks and conversions. Many ad-fraud techniques belong to this category: the use of hijacked and malware-infected devices as part of a botnet; click farms (companies where low-wage employees are paid to click or complete conversions); incentivized browsing; video placement abuse (delivered in display banner slots); hidden ads (which will never be viewed by real users); domain spoofing (ads served on a fake website); and clickjacking, in which the user is forced to click on an ad.
Ad-fraud services include all online infrastructure and hosting services that might be needed to undertake identity or attribution fraud. Services can involve the creation of spam websites (fake networks of websites that provide artificial backlinks); link building services; hosting services; or fake and scam pages impersonating a famous brand.
Online harassment
Whereas content may be offensive in a non-specific way, harassment directs obscenities and derogatory comments at specific individuals, often focusing on gender, race, religion, nationality, or sexual orientation.
Committing a crime using a computer can lead to an enhanced sentence. For example, in the case of United States v. Neil Scott Kramer, the defendant was given an enhanced sentence according to the U.S. Sentencing Guidelines Manual §2G1.3(b)(3) for his use of a cell phone to "persuade, induce, entice, coerce, or facilitate the travel of, the minor to engage in prohibited sexual conduct." Kramer appealed the sentence on the grounds that there was insufficient evidence to convict him under this statute because his charge included persuading through a computer device and his cellular phone technically is not a computer. Although Kramer tried to argue this point, the U.S. Sentencing Guidelines Manual states that the term "computer" means "an electronic, magnetic, optical, electrochemical, or other high-speed data processing device performing logical, arithmetic, or storage functions, and includes any data storage facility or communications facility directly related to or operating in conjunction with such device."
In the United States, at least 41 states have passed laws and regulations that regard extreme online harassment as a criminal act. These acts can also be prosecuted on the federal level, because of US Code 18 Section 2261A, which states that using computers to threaten or harass can lead to a sentence of up to 20 years.
Several countries besides the US have also created laws to combat online harassment. In China, a country with over 20 percent of the world's internet users, in response to the Human Flesh Search Engine bullying incident, the Legislative Affairs Office of the State Council passed a strict law against cyberbullying. The United Kingdom passed the Malicious Communications Act, which states that sending messages or letters electronically that the government deems "indecent or grossly offensive" and/or language intended to cause "distress and anxiety" can lead to a prison sentence of six months and a potentially large fine. Australia, while not directly addressing the issue of harassment, includes most forms of online harassment under the Criminal Code Act of 1995. Using telecommunication to send threats, harass, or cause offense is a direct violation of this act.
Although freedom of speech is protected by law in most democratic societies, it does not include all types of speech. Spoken or written threats can be criminalized because they harm or intimidate. This applies to online or network-related threats.
Cyberbullying has increased drastically with the growing popularity of online social networking. As of January 2020, 44 percent of adult internet users in the United States had "personally experienced online harassment". Online harassment of children often has negative and even life-threatening effects. According to a 2021 survey, 41 percent of children develop social anxiety, 37 percent develop depression, and 26 percent have suicidal thoughts.
The United Arab Emirates was found to have purchased the NSO Group's mobile spyware Pegasus for mass surveillance and a campaign of harassment of prominent activists and journalists, including Ahmed Mansoor, Princess Latifa, Princess Haya, and others. Ghada Owais was one of the many high-profile female journalists and activists who were targeted. She filed a lawsuit against UAE ruler Mohamed bin Zayed Al Nahyan along with other defendants, accusing them of sharing her photos online.
Drug trafficking
Darknet markets are used to buy and sell recreational drugs online. Some drug traffickers use encrypted messaging tools to communicate with drug mules or potential customers. The dark web site Silk Road, which started operations in 2011, was the first major online marketplace for drugs. It was permanently shut down in October 2013 by the FBI and Europol. After Silk Road 2.0 went down, Silk Road 3 Reloaded emerged. However, it was just an older marketplace named Diabolus Market that used the Silk Road name in order to get more exposure from the Silk Road brand's earlier success.
Darknet markets have seen a rise in traffic in recent years for many reasons, such as the anonymity of purchases and the buyer-review systems that many markets offer. There are many ways in which darknet markets can financially drain individuals. Vendors and customers alike go to great lengths to keep their identities a secret while online. Commonly used tools for hiding their online presence include virtual private networks (VPNs), Tails, and the Tor Browser. Darknet markets entice customers by making them feel comfortable. Although people can easily gain access to a Tor browser, actually gaining access to an illicit market is not as simple as typing it into a search engine, as one would with Google. Darknet markets have special links that change frequently, ending in .onion as opposed to the typical .com, .net, and .org domain extensions. To add to privacy, the most prevalent currency on these markets is Bitcoin, which gives transactions a degree of anonymity.
A problem that marketplace users sometimes face is exit scamming: a vendor with a high rating acts as if they are selling on the market and has users pay for products they never receive. The vendor then closes their account after receiving money from multiple buyers and never sends what was paid for. The vendors, all of whom are involved in illegal activities, have no reason not to engage in exit scamming when they no longer want to be a vendor. In 2019, an entire market known as Wall Street Market allegedly exit scammed, stealing $30 million in bitcoin.
The FBI has cracked down on these markets. In July 2017, the FBI seized one of the biggest markets, commonly called Alphabay, which re-opened in August 2021 under the control of DeSnake, one of the original administrators. Investigators pose as buyers and order products from darknet vendors in the hope that the vendors leave a trail the investigators can follow. In one case an investigator posed as a firearms seller, and for six months people purchased from them and provided home addresses. The FBI was able to make over a dozen arrests during this six-month investigation. Another crackdown targeted vendors selling fentanyl and opiates. With thousands of people dying each year due to drug overdose, investigators have made internet drug sales a priority. Many vendors do not realize the extra criminal charges that go along with selling drugs online, such as money laundering and illegal use of the mail. In 2019, a vendor was sentenced to 10 years in prison after selling cocaine and methamphetamine under the name JetSetLife. But despite the large amount of time investigators spend tracking down people, in 2018 only 65 suspects who bought and sold illegal goods on some of the biggest markets were identified. Meanwhile, thousands of transactions take place daily on these markets.
Emerging trends in cybercrime
Through rapid technological advances, the tactics of cybercriminals are ever evolving, with artificial intelligence (AI) being used and exploited for criminal activity. These trends highlight the dynamic nature of cybercrime and emphasize the need for evolving countermeasures to combat future online threats. AI has been used to replicate voices in order to impersonate people, fraudulently obtain money, and commit other financial crimes. The dark web is seeing an increase in artificial chatbots specifically designed to aid hackers and assist with various phishing techniques. Cybercriminals can now use AI deepfakes to pose as individuals who may be connected to or have authority over the victim of the attack. Personal data will be more accessible than ever in the future, with almost everything having a history that can be accessed on black markets, fueling issues such as identity theft, financial fraud, and targeted advertisements.
Notable incidents
One of the highest-profile banking computer crimes occurred over the course of three years beginning in 1970. The chief teller at the Park Avenue branch of New York's Union Dime Savings Bank embezzled over $1.5 million from hundreds of accounts.
In 2014, the Sony Pictures Entertainment hack not only exposed sensitive company data but also led to extortion demands, marking one of the most publicized corporate cyberattacks to date.
A hacking group called MOD (Masters of Deception) allegedly stole passwords and technical data from Pacific Bell, Nynex, and other telephone companies as well as several big credit agencies and two major universities. The damage caused was extensive; one company, Southwestern Bell, suffered losses of $370,000.
In 1983, a 19-year-old UCLA student used his PC to break into a Defense Department International Communications system.
Between 1995 and 1998, the Newscorp satellite pay-to-view encrypted SKY-TV service was hacked several times during an ongoing technological arms race between a pan-European hacking group and Newscorp. The original motivation of the hackers was to watch Star Trek reruns in Germany, something Newscorp did not have the copyright permission to allow.
On 26 March 1999, the Melissa worm infected a document on a victim's computer, then automatically emailed that document and a copy of the virus to other people.
In February 2000, an individual going by the alias of MafiaBoy began a series of denial-of-service attacks against high-profile websites, including Yahoo!, Dell, Inc., E*TRADE, eBay, and CNN. About 50 computers at Stanford University, along with computers at the University of California at Santa Barbara, were among the zombie computers sending pings in the distributed denial-of-service attacks. On 3 August 2000, Canadian federal prosecutors charged MafiaBoy with 54 counts of illegal access to computers.
The Stuxnet worm corrupted SCADA microprocessors, particularly the types used in Siemens centrifuge controllers.
The Russian Business Network (RBN) was registered as an internet site in 2006. Initially, much of its activity was legitimate. But apparently the founders soon discovered that it was more profitable to host illegitimate activities and to offer its services to criminals. The RBN has been described by VeriSign as "the baddest of the bad". It provides web hosting services and internet access to all kinds of criminal and objectionable activities that earn up to $150 million in one year. It specializes in personal identity theft for resale. It is the originator of MPack and an alleged operator of the now defunct Storm botnet.
On 2 March 2010, Spanish investigators arrested three men suspected of infecting over 13 million computers around the world. The botnet of infected computers included PCs inside more than half of the Fortune 1000 companies and more than 40 major banks, according to investigators.
In August 2010, the US Department of Homeland Security shut down the international pedophile ring Dreamboard. The website had approximately 600 members and may have distributed up to 123 terabytes of child pornography (roughly equivalent to 16,000 DVDs). To date this is the single largest US prosecution of an international child pornography ring; 52 arrests were made worldwide.
In January 2012, Zappos.com experienced a security breach compromising the credit card numbers, personal information, and billing and shipping addresses of as many as 24 million customers.
In June 2012, LinkedIn and eHarmony were attacked, and 65 million password hashes were compromised. Thirty thousand passwords were cracked, and 1.5 million eHarmony passwords were posted online.
In December 2012, the Wells Fargo website experienced a denial-of-service attack that potentially compromised 70 million customers and 8.5 million active viewers. Other banks thought to be compromised included Bank of America, J. P. Morgan, U.S. Bank, and PNC Financial Services.
On 23 April 2013, the Twitter account of the Associated Press was hacked. The hacker posted a hoax tweet about fictitious attacks on the White House that they claimed had left then-President Obama injured. The hoax tweet resulted in a brief plunge of 130 points in the Dow Jones Industrial Average, the removal of $136 billion from the S&P 500 index, and the temporary suspension of AP's Twitter account. The Dow Jones Industrial Average later recovered its session losses.
In May 2017, 74 countries logged a ransomware cybercrime called "WannaCry".
Israeli spyware, found to be in operation in at least 46 nation-states, reportedly provided illicit access to camera sensors, microphone sensors, phonebook contacts, all internet-enabled apps, and metadata of mobile telephones running Android and iOS. Journalists, royalty, and government officials were among the targets. Earlier accusations that Israeli weapons companies were meddling in international telephony and smartphones were eclipsed by the 2018 Pegasus spyware revelations.
In December 2019, US intelligence officials and The New York Times revealed that ToTok, a messaging application widely used in the United Arab Emirates, is a spying tool for the UAE. An investigation revealed that the Emirati government was attempting to track every conversation, movement, relationship, appointment, sound, and image of those who installed the app on their phones.
Combating computer crime
Because cybercriminals use the internet for cross-border attacks and crimes, prosecuting them has been difficult. The number of vulnerabilities that a cybercriminal could exploit has also increased over the years: from 2008 to 2014 alone, vulnerabilities across online devices increased by 17.75%. The internet's expansive reach magnifies the damage inflicted, since many methods of cybercrime can reach large numbers of people. The availability of virtual spaces has allowed cybercrime to become an everyday occurrence. In 2018, the Internet Crime Complaint Center received 351,937 complaints of cybercrime, which led to $2.7 billion lost.
Investigation
In a criminal investigation, a computer can be a source of evidence (see digital forensics). Even when a computer is not directly used for criminal purposes, it may contain records of value to criminal investigators in the form of a logfile. In many countries, Internet Service Providers are required by law to keep their logfiles for a predetermined amount of time.
There are many ways for cybercrime to take place, and investigations tend to start with an IP address trace; however, that does not necessarily enable detectives to solve a case. Different types of high-tech crime may also include elements of low-tech crime, and vice versa, making cybercrime investigators an indispensable part of modern law enforcement. Methods of cybercrime detective work are dynamic and constantly improving, whether in closed police units or in the framework of international cooperation.
In the United States, the FBI and the Department of Homeland Security (DHS) are government agencies that combat cybercrime. The FBI has trained agents and analysts in cybercrime placed in their field offices and headquarters. In the DHS, the Secret Service has a Cyber Intelligence Section that works to target financial cybercrimes. They combat international cybercrime and work to protect institutions such as banks from intrusions and information breaches. Based in Alabama, the Secret Service and the Alabama Office of Prosecution Services work together to train professionals in law enforcement at the National Computer Forensic Institute. The NCFI provides "state and local members of the law enforcement community with training in cyber incident response, investigation, and forensic examination."
Investigating cybercrime within the United States and globally often requires partnerships. Within the United States, cybercrime may be investigated by law enforcement agencies, the Department of Homeland Security, and other federal agencies. As the world becomes more dependent on technology, cyberattacks and cybercrime will continue to expand, because threat actors will keep exploiting weaknesses in protection and existing vulnerabilities to achieve their goals, often data theft or exfiltration. To combat cybercrime, the United States Secret Service maintains an Electronic Crimes Task Force, which extends beyond the United States and helps to locate threat actors around the world who commit cyber-related crimes within the United States. The Secret Service is also responsible for the National Computer Forensic Institute, which allows law enforcement and court personnel to receive training and information on how to combat cybercrime. The United States Immigration and Customs Enforcement is responsible for the Cyber Crimes Center (C3), which provides cybercrime-related services for federal, state, local, and international agencies. Finally, the United States also maintains resources on Law Enforcement Cyber Incident Reporting to help local and state agencies understand how, when, and what should be reported to the federal government as a cyber incident.
Because cybercriminals commonly use encryption and other techniques to hide their identity and location, it can be difficult to trace a perpetrator after a crime is committed, so prevention measures are crucial.
Prevention
The Department of Homeland Security also instituted the Continuous Diagnostics and Mitigation (CDM) Program. The CDM Program monitors and secures government networks by tracking network risks and informing system personnel so that they can take action. In an attempt to catch intrusions before the damage is done, the DHS created the Enhanced Cybersecurity Services (ECS). The Cyber Security and Infrastructure Security Agency approves the private partners that provide intrusion detection and prevention services through the ECS.
Cybersecurity professionals have been skeptical of prevention-focused strategies. The mode of use of cybersecurity products has also been called into question. Shuman Ghosemajumder has argued that individual companies using a combination of products for security is not a scalable approach and has advocated for the use of cybersecurity technology primarily at the platform level.
On a personal level, there are some strategies available to defend against cybercrime:
Keep your software and operating system updated to benefit from the latest security patches.
Use anti-virus software that can detect and remove malicious threats.
Use strong passwords with a variety of characters that are not easy to guess (a minimal generation sketch follows this list).
Refrain from opening attachments in spam emails.
Do not click on links in scam emails.
Do not give out personal information over the internet unless you can verify that the destination is safe.
Contact companies directly about suspicious requests for your information.
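As a minimal illustration of the strong-password item above (a sketch, not an official recommendation: the 16-character length and the character set are assumptions chosen for the example), a password can be generated with Python's standard secrets module:

```python
# Minimal sketch: generating a strong random password with Python's
# standard "secrets" module. The length and character set are
# illustrative choices, not mandated values.
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())  # different on every run
```

The key design point is to use a cryptographically secure source of randomness (secrets) rather than a general-purpose random number generator.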
Legislation
Because of weak laws, cybercriminals operating from developing countries can often evade detection and prosecution. In countries such as the Philippines, laws against cybercrime are weak or sometimes nonexistent. Cybercriminals can then strike from across international borders and remain undetected. Even when identified, these criminals can typically avoid being extradited to a country such as the US that has laws that allow for prosecution. For this reason, agencies such as the FBI have used deception and subterfuge to catch criminals. For example, two Russian hackers had been evading the FBI for some time. The FBI set up a fake computing company based in Seattle, Washington. They proceeded to lure the two Russian men into the United States by offering them work with this company. Upon completion of the interview, the suspects were arrested. Clever tricks like that are sometimes a necessary part of catching cybercriminals when weak laws and limited international cooperation make it impossible otherwise.
The first cyber-related law in the United States was the Privacy Act of 1974, which applied only to federal agencies and required them to ensure the privacy and protection of personally identifiable information (PII). Since 1974, other laws and regulations have been drafted and implemented in the United States, but there is still a gap in responding to current cyber-related crime. The most recent cyber-related law, according to NIST, is the NIST Small Business Cybersecurity Act of 2018, which provides guidelines to small businesses to ensure that cybersecurity risks are identified and addressed accurately.
During Barack Obama's presidency, three cybersecurity-related bills were signed into law in December 2014: the Federal Information Security Modernization Act of 2014, the National Cybersecurity Protection Act of 2014, and the Cybersecurity Enhancement Act of 2014. The Federal Information Security Modernization Act of 2014 was an update of an older version of the act and focused on the practices federal agencies must follow relating to cybersecurity. The National Cybersecurity Protection Act of 2014 aimed to increase information sharing between the federal government and the private sector to improve cybersecurity across industries. Finally, the Cybersecurity Enhancement Act of 2014 relates to cybersecurity research and education.
In April 2015, then-President Barack Obama released an executive order that allows the US to freeze the assets of convicted cybercriminals and block their economic activity within the United States.
The European Union adopted cybercrime directive 2013/40/EU, which was elaborated upon in the Council of Europe's Convention on Cybercrime.
It is not only the US and the European Union that have been introducing measures against cybercrime. On 31 May 2017, China announced that its new cybersecurity law was taking effect.
In Australia, legislation to combat cybercrime includes the Criminal Code Act 1995, the Telecommunications Act 1997, and the Enhancing Online Safety Act 2015.
Penalties
Penalties for computer-related crimes in New York State can range from a fine and a short period of jail time for a Class A misdemeanor, such as unauthorized use of a computer, up to 3 to 15 years in prison for a Class C felony, such as computer tampering in the first degree.
However, some former cybercriminals have been hired as information security experts by private companies due to their inside knowledge of computer crime, a phenomenon which theoretically could create perverse incentives. A possible counter to this is for courts to ban convicted hackers from using the internet or computers, even after they have been released from prison, though as computers and the internet become more and more central to everyday life, this type of punishment becomes more and more draconian. Nuanced approaches have been developed that manage cyber offenders' behavior without resorting to total computer or internet bans. These approaches involve restricting individuals to specific devices which are subject to monitoring or searches by probation or parole officers.
Awareness
Cybercrime is becoming more of a threat in our society. According to Accenture's State of Cybersecurity report, security attacks increased 31% from 2020 to 2021, and the number of attacks per company increased from 206 to 270. Given this rising threat, raising awareness of measures to protect information, and of the tactics criminals use to steal that information, is paramount. However, despite cybercrime becoming a mounting problem, many people are not aware of the severity of the issue, which may be attributed to a lack of experience and knowledge of technological matters. There are 1.5 million cyberattacks annually, which means there are over 4,000 attacks a day, 170 attacks every hour, or nearly three attacks every minute, with studies showing that only 16 percent of victims had asked the people who were carrying out the attacks to stop. Comparitech's 2023 study reported that cybercrime victims have reached 71 million annually and that a cyberattack occurs every 39 seconds. Anybody who uses the internet for any reason can be a victim, which is why it is important to be aware of how to stay protected while online.
Intelligence
As cybercrime proliferated, a professional ecosystem evolved to support individuals and groups seeking to profit from cybercrime activities. The ecosystem has become quite specialized, and includes malware developers, botnet operators, professional cybercrime groups, groups specializing in the sale of stolen content, and so forth. A few of the leading cybersecurity companies have the skills and resources to follow the activities of these individuals and groups. A wide variety of information that can be used for defensive purposes is available from these sources, for example, technical indicators such as hashes of infected files and malicious IPs/URLs, as well as strategic information profiling the goals and techniques of the profiled groups. Much of it is freely available, but consistent, ongoing access typically requires a subscription. Some in the corporate sector see a crucial role for artificial intelligence in the future development of cybersecurity.
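As an illustration of how such technical indicators might be consumed defensively (a sketch: the file path and the hash value in the indicator set are hypothetical placeholders, not real threat data), a file can be checked against a list of known-bad SHA-256 hashes:

```python
# Illustrative sketch: checking a local file against a threat-intelligence
# indicator list of SHA-256 hashes. The hash value and file path below are
# hypothetical placeholders, not real indicators.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_bad(path: Path) -> bool:
    return sha256_of(path) in KNOWN_BAD_SHA256

if __name__ == "__main__":
    suspect = Path("downloaded_attachment.bin")  # hypothetical file
    if suspect.exists():
        print("match" if is_known_bad(suspect) else "no match")
```

Real deployments would pull indicator feeds from a subscribed source and typically check URLs and IP addresses as well, but the matching logic stays this simple.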
Interpol's Cyber Fusion Center began a collaboration with key cybersecurity players to distribute information on the latest online scams, cyber threats, and risks to internet users. Since 2017, reports on social engineering frauds, ransomware, phishing, and other attacks have been distributed to security agencies in over 150 countries.
Spread of cybercrime
The increasing prevalence of cybercrime has resulted in more attention to computer crime detection and prosecution.
Hacking has become less complex as hacking communities disseminate their knowledge through the internet. Blogs and social networks have contributed substantially to information sharing, so that beginners can benefit from older hackers' knowledge and advice.
Furthermore, hacking is cheaper than ever. Before the cloud computing era, in order to spam or scam, one needed a variety of resources, such as a dedicated server; skills in server management, network configuration, and network maintenance; and knowledge of internet service provider standards. By comparison, a software-as-a-service mail provider offers a scalable, inexpensive bulk e-mail-sending service for marketing purposes that can easily be set up for spam. Cloud computing can help cybercriminals leverage their attacks, whether brute-forcing a password, improving the reach of a botnet, or facilitating a spamming campaign.
Agencies
ASEAN
Australian High Tech Crime Centre
Cyber Crime Investigation Cell, a wing of Mumbai Police, India
Cyber Crime Unit (Hellenic Police), established in Greece in 2004
EUROPOL
INTERPOL
National Cyber Crime Unit, in the United Kingdom
National Security Agency, in the United States
National Special Crime Unit, in Denmark.
National White Collar Crime Center, in the United States
Cyber Terror Response Center - Korea National Police Agency
Cyber Police Department - Japan National Police Agency
Siber Suçlarla Mücadele ("combating cybercrime") - Turkish cybercrime agency
See also
References
Cyber Crime. (n.d.). [Folder]. Federal Bureau of Investigation. Retrieved April 24, 2024, from https://www.fbi.gov/investigate/cyber
Herrero, J., Torres, A., Vivas, P., & Urueña, A. (2022). Smartphone Addiction, Social Support, and Cybercrime Victimization: A Discrete Survival and Growth Mixture Model: Psychosocial Intervention. Psychosocial Intervention, 31(1), 59–66. https://doi.org/10.5093/pi2022a3
Further reading
Balkin, J., Grimmelmann, J., Katz, E., Kozlovski, N., Wagman, S. & Zarsky, T. (2006) (eds) Cybercrime: Digital Cops in a Networked Environment, New York University Press, New York.
Bowker, Art (2012) "The Cybercrime Handbook for Community Corrections: Managing Risk in the 21st Century" Charles C. Thomas Publishers, Ltd. Springfield.
Brenner, S. (2007) Law in an Era of Smart Technology, Oxford: Oxford University Press
Broadhurst, R., and Chang, Lennon Y.C. (2013) "Cybercrime in Asia: trends and challenges", in B. Hebenton, SY Shou, & J. Liu (eds), Asian Handbook of Criminology (pp. 49–64). New York: Springer
Chang, L.Y. C. (2012) Cybercrime in the Greater China Region: Regulatory Responses and Crime Prevention across the Taiwan Strait. Cheltenham: Edward Elgar.
Chang, Lennon Y.C., & Grabosky, P. (2014) "Cybercrime and establishing a secure cyber world", in M. Gill (ed) Handbook of Security (pp. 321–339). NY: Palgrave.
Csonka P. (2000) Internet Crime; the Draft council of Europe convention on cyber-crime: A response to the challenge of crime in the age of the internet? Computer Law & Security Report Vol.16 no.5.
Easttom, C. (2010) Computer Crime Investigation and the Law
Fafinski, S. (2009) Computer Misuse: Response, regulation and the law Cullompton: Willan
Glenny, M. DarkMarket : cyberthieves, cybercops, and you, New York, NY : Alfred A. Knopf, 2011.
Grabosky, P. (2006) Electronic Crime, New Jersey: Prentice Hall
Halder, D., & Jaishankar, K. (2016). Cyber Crimes against Women in India. New Delhi: SAGE Publishing.
Jaishankar, K. (Ed.) (2011). Cyber Criminology: Exploring Internet Crimes and Criminal behavior. Boca Raton, FL, US: CRC Press, Taylor, and Francis Group.
McQuade, S. (2006) Understanding and Managing Cybercrime, Boston: Allyn & Bacon.
McQuade, S. (ed) (2009) The Encyclopedia of Cybercrime, Westport, CT: Greenwood Press.
Parker D (1983) Fighting Computer Crime, U.S.: Charles Scribner's Sons.
Pattavina, A. (ed) Information Technology and the Criminal Justice System, Thousand Oaks, CA: Sage.
Richet, J.L. (2013) From Young Hackers to Crackers, International Journal of Technology and Human Interaction (IJTHI), 9(3), 53–62.
Robertson, J. (2 March 2010). Authorities bust 3 in infection of 13m computers. Retrieved 26 March 2010, from Boston News: Boston.com
Rolón, D. N. Control, vigilancia y respuesta penal en el ciberespacio, Latin American's New Security Thinking, Clacso, 2014, pp. 167/182
Walden, I. (2007) Computer Crimes and Digital Investigations, Oxford: Oxford University Press.
Wall, D.S. (2007) Cybercrimes: The transformation of crime in the information age, Cambridge: Polity.
Williams, M. (2006) Virtually Criminal: Crime, Deviance and Regulation Online, Routledge, London.
Yar, M. (2006) Cybercrime and Society, London: Sage.
External links
International Journal of Cyber Criminology
Common types of cyber attacks
Countering ransomware attacks
Government resources
Cybercrime.gov from the United States Department of Justice
National Institute of Justice Electronic Crime Program from the United States Department of Justice
FBI Cyber Investigators home page
US Secret Service Computer Fraud
Australian High Tech Crime Centre
UK National Cyber Crime Unit from the National Crime Agency
Crime by type
Computer security
Organized crime activity
Harassment and bullying | Cybercrime | [
"Biology"
] | 8,411 | [
"Harassment and bullying",
"Behavior",
"Aggression"
] |
341,658 | https://en.wikipedia.org/wiki/Apathy | Apathy, also referred to as indifference, is a lack of feeling, emotion, interest, or concern about something. It is a state of indifference, or the suppression of emotions such as concern, excitement, motivation, or passion. An apathetic individual has an absence of interest in or concern about emotional, social, spiritual, philosophical, virtual, or physical life and the world. Apathy can also be defined as a person's lack of goal orientation. Apathy is the mildest of the disorders of diminished motivation, with abulia more severe and akinetic mutism more extreme than both.
The apathetic may lack a sense of purpose, worth, or meaning in their life. People with severe apathy tend to have a lower quality of life and are at a higher risk for mortality and early institutionalization. They may also exhibit insensibility or sluggishness. In positive psychology, apathy is described as a result of the individuals' feeling they do not possess the level of skill required to confront a challenge (i.e. "flow"). It may also be a result of perceiving no challenge at all (e.g., the challenge is irrelevant to them, or conversely, they have learned helplessness). Apathy is usually felt only in the short term, but sometimes it becomes a long-term or even lifelong state, often leading to deeper social and psychological issues.
Apathy should be distinguished from reduced affect display, which refers to reduced emotional expression but not necessarily reduced emotion.
Pathological apathy, characterized by extreme forms of apathy, is now known to occur in many different brain disorders, including neurodegenerative conditions often associated with dementia such as Alzheimer's disease, Parkinson's disease, and psychiatric disorders such as schizophrenia. Although many patients with pathological apathy also have depression, several studies have shown that the two syndromes are dissociable: apathy can occur independent of depression and vice versa.
Etymology
Although the word apathy was first used in 1594 and is derived from the Greek apatheia, from apathēs ("without feeling", from a- "without, not" and pathos "emotion"), it is important not to confuse the two terms. Also meaning "absence of passion", "apathy", or "insensibility" in Greek, the term apatheia was used by the Stoics to signify a (desirable) state of indifference toward events and things that lie outside one's control (that is, according to their philosophy, all things exterior, one being only responsible for one's own representations and judgments). In contrast to apathy, apatheia is considered a virtue, especially in Orthodox monasticism. In the Philokalia the word dispassion is used for apatheia, so as not to confuse it with apathy.
History and other views
Christians have historically condemned apathy as a deficiency of love and devotion to God and his works. This interpretation of apathy is also referred to as Sloth and is listed among the Seven Deadly Sins. Clemens Alexandrinus used the term to draw philosophers who aspired after virtue toward gnostic Christianity.
The modern concept of apathy became more well known after World War I, when it was one of the various forms of "shell shock", now better known as post-traumatic stress disorder (PTSD). Soldiers who lived in the trenches amidst the bombing and machine gun fire, and who saw the battlefields strewn with dead and maimed comrades, developed a sense of disconnected numbness and indifference to normal social interaction when they returned from combat.
In 1950, US novelist John Dos Passos wrote: "Apathy is one of the characteristic responses of any living organism when it is subjected to stimuli too intense or too complicated to cope with. The cure for apathy is comprehension."
Social origin
There may be other factors contributing to a person's apathy.
Apathy has been socially viewed as worse than things such as hate or anger. Not caring whatsoever, in the eyes of some, is even worse than having distaste for something. Author Leo Buscaglia is quoted as saying "I have a very strong feeling that the opposite of love is not hate; it's apathy. It's not giving a damn." Helen Keller stated that apathy is the "worst of them all" when it comes to the various evils in the world. French social commentator and political thinker Charles de Montesquieu stated that "the tyranny of a prince in an oligarchy is not so dangerous to the public welfare as the apathy of a citizen in the democracy." As can be seen by these quotes and various others, the social implications of apathy are great. Many people believe that not caring at all can be worse for society than individuals who are overpowering or hateful.
In the school system
Apathy in students, especially those in high school, is a growing problem. It causes teachers to lower standards in order to try to engage their students. Apathy in schools is most easily recognized by students being unmotivated or, quite commonly, being motivated by outside factors. For example, when asked about their motivation for doing well in school, fifty percent of students cited outside sources such as "college acceptance" or "good grades". On the contrary, only fourteen percent cited "gaining an understanding of content knowledge or learning subject material" as their motivation to do well in school. As a result of these outside sources, and not a genuine desire for knowledge, students often do the minimum amount of work necessary to get by in their classes. This then leads to average grades and test grades but no real grasping of knowledge. Many students cited that "assignments/content was irrelevant or meaningless" and that this was the cause of their apathetic attitudes toward their schooling, leading to teacher and parent frustration. Other causes of apathy in students include situations within their home life, media influences, peer influences, school struggles and failures. Some of the signs of apathetic students include declining grades, skipping classes, routine illness, and behavioral changes both in school and at home. In order to combat this, teachers have to be aware that students have different motivation profiles; i.e. they are motivated by different factors or stimuli.
Bystander
Also known as the bystander effect, bystander apathy occurs when, during an emergency, those standing by do nothing to help but instead stand by and watch. Sometimes this can be caused by one bystander observing other bystanders and imitating their behavior. If other people are not acting in a way that makes the situation seem like an emergency that needs attention, other bystanders will often act in the same way. Diffusion of responsibility can also be to blame for bystander apathy: the more people that are around in an emergency situation, the more likely individuals are to think that someone else will help, so they do not need to. This theory was popularized by social psychologists in response to the 1964 Kitty Genovese murder. The murder took place in New York, and the victim, Genovese, was stabbed to death as bystanders reportedly stood by and did nothing to stop the situation or even call the police. Latané and Darley are the two psychologists who did research on this theory. They performed different experiments that placed people into situations where they had the opportunity to intervene or do nothing. The individuals in the experiment were either by themselves, with a stranger or strangers, with a friend, or with a confederate. The experiments ultimately led them to the conclusion that there are many social and situational factors behind whether a person will react in an emergency situation or simply remain apathetic to what is occurring.
Measurement
Several different questionnaires and clinical interview instruments have been used to measure pathological apathy or, more recently, apathy in healthy people.
Apathy Evaluation Scale
Developed by Robert Marin in 1991, the Apathy Evaluation Scale (AES) was the first method developed to measure apathy in clinical populations. The scale can be completed either by the individual or by someone else on their behalf; the three versions of the test are self, informant (such as a family member), and clinician. The scale is based around questionnaires that ask about topics including interest, motivation, socialization, and how the individual spends their time. The individual or informant answers on a scale of "not at all", "slightly", "somewhat", or "a lot". Each item on the evaluation is created with positive or negative syntax and deals with cognition, behavior, and emotion. Each item is then scored and, based on the score, the individual's level of apathy can be evaluated.
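A rough sketch of how Likert-style items such as these can be scored follows; the 1–4 numeric mapping, the reverse-scoring of negatively worded items, and the example answers are assumptions made for illustration only, not the published AES scoring rules.

```python
# Illustrative scoring sketch for a Likert-style apathy questionnaire.
# The 1-4 mapping, reverse-scoring rule, and example items are assumptions
# for demonstration; they are not the published AES scoring manual.
RESPONSE_SCORES = {"not at all": 1, "slightly": 2, "somewhat": 3, "a lot": 4}

def score_item(response: str, positively_worded: bool) -> int:
    """Score one item, reverse-scoring negatively worded statements."""
    raw = RESPONSE_SCORES[response.lower()]
    return raw if positively_worded else 5 - raw

def total_score(items):
    """items: iterable of (response, positively_worded) pairs."""
    return sum(score_item(resp, pos) for resp, pos in items)

if __name__ == "__main__":
    answers = [("somewhat", True), ("not at all", False), ("a lot", True)]
    print(total_score(answers))  # 3 + 4 + 4 = 11
```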
Apathy Motivation Index
The Apathy Motivation Index (AMI) was developed to measure different dimensions of apathy in healthy people. Factor analysis identified three distinct axes of apathy: behavioural, social and emotional. The AMI has since been used to examine apathy in patients with Parkinson's disease who, overall, showed evidence of behavioural and social apathy, but not emotional apathy. Patients with Alzheimer's disease, Parkinson's disease, subjective cognitive impairment and limbic encephalitis have also been assessed using the AMI, and their self-reports of apathy were compared with those of caregivers using the AMI caregiver scale.
Dimensional Apathy Scale
The Dimensional Apathy Scale (DAS) is a multidimensional apathy instrument for measuring subtypes of apathy in different clinical populations and healthy adults. It was developed using factor analysis and quantifies Executive apathy (lack of motivation for planning, organising and attention), Emotional apathy (emotional indifference, neutrality, flatness or blunting) and Initiation apathy (lack of motivation for self-generation of thought/action). There is a self-rated version of the DAS and an informant/carer-rated version, and a brief clinical version has also been developed. The DAS has been validated for use in stroke, Huntington's disease, motor neurone disease, multiple sclerosis, dementia, Parkinson's disease and schizophrenia, and has been shown to differentiate profiles of apathy subtypes between these conditions.
Medical aspects
Depression
Mental health journalist and author John McManamy argues that although psychiatrists do not explicitly deal with the condition of apathy, it is a psychological problem for some depressed people, in which they get a sense that "nothing matters", the "lack of will to go on and the inability to care about the consequences". He describes depressed people who "...cannot seem to make [themselves] do anything", who "can't complete anything", and who do not "feel any excitement about seeing loved ones". He acknowledges that the Diagnostic and Statistical Manual of Mental Disorders does not discuss apathy.
In a Journal of Neuropsychiatry and Clinical Neurosciences article from 1991, Robert Marin, MD, claimed that pathological apathy occurs due to brain damage or neuropsychiatric illnesses such as Alzheimer's, Parkinson's, Huntington's disease, or stroke. Marin argues that apathy is a syndrome associated with many different brain disorders. This has now been shown to be the case across a range of neurological and psychiatric conditions.
A review article by Robert van Reekum, MD, et al. from the University of Toronto in the Journal of Neuropsychiatry (2005) claimed that an obvious relationship between depression and apathy exists in some populations. However, although many patients with depression also have apathy, several studies have shown that apathy can occur independently of depression, and vice versa.
Apathy can be associated with depression, a manifestation of negative disorders in schizophrenia, or a symptom of various somatic and neurological disorders. Sometimes apathy and depression are viewed as the same thing, but actually take different forms depending on someone's mental condition.
Alzheimer's disease
Depending upon how it has been measured, apathy affects 19–88 percent of individuals with Alzheimer's disease (a mean prevalence of 49% across different studies). It is a neuropsychiatric symptom associated with functional impairment. Brain imaging studies have demonstrated changes in the anterior cingulate cortex, orbitofrontal cortex, dorsolateral prefrontal cortex and ventral striatum in Alzheimer's patients with apathy. Cholinesterase inhibitors, used as the first line of treatment for the cognitive symptoms associated with dementia, have also shown some modest benefit for behavior disturbances such as apathy. The effects of donepezil, galantamine and rivastigmine have all been assessed but, overall, the findings have been inconsistent, and it is estimated that apathy in about 60% of Alzheimer's patients does not respond to treatment with these drugs. Methylphenidate, a dopamine and noradrenaline reuptake blocker, has received increasing interest for the treatment of apathy. Management of apathetic symptoms using methylphenidate has shown promise in randomized placebo-controlled trials of Alzheimer's patients, and a phase III multi-centered randomized placebo-controlled trial of methylphenidate for the treatment of apathy has reported positive effects.
Parkinson's disease
Overall, ~40% of Parkinson's disease patients suffer from apathy, with prevalence rates varying from 16 to 62%, depending on the study. Apathy is increasingly recognized to be an important non-motor symptom in Parkinson's disease. It has a significant negative impact on quality of life. In some patients, apathy can be improved by dopaminergic medication. There is also some evidence for a positive effect of cholinesterase inhibitors such as Rivastigmine on apathy. Diminished sensitivity to reward may be a key component of the syndrome in Parkinson's disease.
Frontotemporal dementia
Pathological apathy is considered to be one of the diagnostic features of behavioural variant frontotemporal dementia, occurring in the majority of people with this condition. Both hypersensitivity to effort as well as blunting of sensitivity to reward may be components of behavioural apathy in frontotemporal dementia.
Anxiety
While apathy and anxiety may appear to be separate, and different, states of being, there are many ways that severe anxiety can cause apathy. First, the emotional fatigue that so often accompanies severe anxiety leads to one's emotions being worn out, thus leading to apathy. Second, the low serotonin levels associated with anxiety often lead to less passion and interest in the activities in one's life, which can be seen as apathy. Third, negative thinking and distractions associated with anxiety can ultimately lead to a decrease in one's overall happiness which can then lead to an apathetic outlook about one's life. Finally, the difficulty enjoying activities that individuals with anxiety often face can lead to them doing these activities much less often and can give them a sense of apathy about their lives. Even behavioral apathy may be found in individuals with anxiety in the form of them not wanting to make efforts to treat their anxiety.
Other
Often, apathy is felt after witnessing horrific acts, such as the killing or maiming of people during a war, e.g. posttraumatic stress disorder. It is also known to be a distinct psychiatric syndrome that is associated with many conditions, more prominently recognized in the elderly, some of which are: CADASIL syndrome, depression, Alzheimer's disease, Chagas disease, Creutzfeldt–Jakob disease, dementia (and dementias such as Alzheimer's disease, vascular dementia, and frontotemporal dementia), Korsakoff's syndrome, excessive vitamin D, hypothyroidism, hyperthyroidism, general fatigue, Huntington's disease, Pick's disease, progressive supranuclear palsy (PSP), brain damage, schizophrenia, schizoid personality disorder, bipolar disorder, autism spectrum disorders, ADHD, and others. Some medications and the heavy use of drugs such as opiates may bring apathy as a side effect.
See also
Acedia
Callous and unemotional traits
Compassion fatigue
Detachment (philosophy)
Political apathy
Reduced affect display
Notes
References
External links
The Roots of Apathy – Essay By David O. Solmitz
Apathy – McMan's Depression and Bipolar Web, by John McManamy
Problem behavior
Emotions
Narcissism
Psychological attitude
Disorders of diminished motivation
Symptoms or signs involving mood or affect | Apathy | [
"Biology"
] | 3,421 | [
"Behavior",
"Problem behavior",
"Human behavior"
] |
341,668 | https://en.wikipedia.org/wiki/30%20%28number%29 | 30 (thirty) is the natural number following 29 and preceding 31.
In mathematics
30 is an even, composite, pronic number. With 2, 3, and 5 as its prime factors, it is a regular number and the first sphenic number, the smallest of the form 2 × 3 × r, where r is a prime greater than 3. It has an aliquot sum of 42 and lies within an aliquot sequence of thirteen composite numbers (30, 42, 54, 66, 78, 90, 144, 259, 45, 33, 15, 9, 4, 3, 1, 0) descending to the prime 3 in the 3-aliquot tree; this is the longest aliquot sequence of any number from 1 to 30.
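A short sketch of the arithmetic behind this claim: the aliquot sum of a number is the sum of its proper divisors, and iterating it from 30 reproduces the sequence quoted above (the step cap is an arbitrary safety limit added for the example).

```python
# Sketch: computing the aliquot sum and the aliquot sequence starting at 30.
def aliquot_sum(n: int) -> int:
    """Sum of the proper divisors of n (divisors excluding n itself)."""
    return sum(d for d in range(1, n) if n % d == 0)

def aliquot_sequence(n: int, max_steps: int = 30):
    """Iterate the aliquot sum until 0 is reached or max_steps expires."""
    seq = [n]
    while seq[-1] > 0 and len(seq) <= max_steps:
        seq.append(aliquot_sum(seq[-1]))
    return seq

if __name__ == "__main__":
    print(aliquot_sum(30))       # 42
    print(aliquot_sequence(30))  # [30, 42, 54, 66, 78, 90, 144, 259, 45, 33, 15, 9, 4, 3, 1, 0]
```

Running it prints 42 and the sixteen-term sequence ending in 3, 1, 0.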
It is also:
A semiperfect number, since adding some subsets of its divisors (e.g., 5, 10 and 15) equals 30.
A primorial.
A Harshad number in decimal.
Divisible by the number of prime numbers (10) below it.
The largest number such that all coprimes smaller than itself, except for 1, are prime.
The sum of the first four squares, making it a square pyramidal number.
The number of vertices in the Tutte–Coxeter graph.
The measure of the central angle and exterior angle of a dodecagon, which is the Petrie polygon of the 24-cell.
The number of sides of a triacontagon, which in turn is the Petrie polygon of the 120-cell and 600-cell.
The number of edges of a dodecahedron and icosahedron, of vertices of an icosidodecahedron, and of faces of a rhombic triacontahedron.
The sum of the number of elements of a 5-cell: 5 vertices, 10 edges, 10 faces, and 5 cells.
The Coxeter number of E8.
A largely composite number, as it has 8 divisors and no smaller number has more than 8 divisors.
Furthermore,
By Sylow's theorem, in a group G with |G| = pⁿm, where the prime p does not divide m, G has a subgroup of order pⁿ. 30 is the only number less than 60 that is neither a prime nor of the aforementioned form; therefore, 30 is the only candidate for the order of a simple group less than 60 for which other methods are needed to specifically rule out simplicity.
The SI prefix for 10³⁰ is quetta- (Q), and for 10⁻³⁰ (i.e., the reciprocal of 10³⁰) quecto- (q). These are the largest and smallest numbers to receive an SI prefix to date.
In other fields
Thirty is:
Used (as –30–) to indicate the end of a newspaper (or broadcast) story, a copy editor's typographical notation
The number of days in the months April, June, September and November (and in unusual circumstances February—see February 30). Although the number of days in a month varies, 30 is commonly used to estimate the length of a month.
In years of marriage, the pearl wedding anniversary
History and literature
Age 30 is when Jewish priests traditionally start their service (according to Numbers 4:3).
One of the rallying cries of the 1960s student/youth protest movement was the slogan, "Don't trust anyone over thirty".
In The Myth of Sisyphus the French existentialist Albert Camus comments that the age of thirty is a crucial period in the life of a man, for at that age he gains a new awareness of the meaning of time.
References
Integers | 30 (number) | [
"Mathematics"
] | 732 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
341,671 | https://en.wikipedia.org/wiki/40%20%28number%29 | 40 (forty) is the natural number following 39 and preceding 41.
Though the word is related to four (4), the spelling forty replaced fourty during the 17th century and is now the standard form.
Mathematics
40 is an abundant number.
Swiss mathematician Leonhard Euler noted 40 prime numbers generated by the quadratic polynomial n² + n + 41, with integer values n = 0, 1, ..., 39. These forty prime numbers are the same prime numbers that are generated using the polynomial x² − x + 41 with values of x from 1 through 40, and are also known in this context as Euler's "lucky" numbers.
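A quick numerical check of both claims (a sketch; the simple trial-division primality test is sufficient at this size):

```python
# Sketch: verifying that Euler's polynomial n^2 + n + 41 is prime for
# n = 0..39, and that x^2 - x + 41 for x = 1..40 yields the same primes.
def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

first = [n * n + n + 41 for n in range(40)]      # n = 0..39
second = [x * x - x + 41 for x in range(1, 41)]  # x = 1..40
assert all(is_prime(v) for v in first)
assert first == second
print(len(set(first)), "distinct primes, from", min(first), "to", max(first))
```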
Forty is the only integer whose English name has its letters in alphabetical order.
In religion
The number 40 is found in many traditions without any universal explanation for its use. In Jewish, Christian, Islamic, and other Middle Eastern traditions it is taken to represent a large, approximate number, similar to "umpteen".
Sumerian
Enki ( /ˈɛŋki/) or Enkil (Sumerian: dEN.KI(G)𒂗𒆠) is a god in Sumerian mythology, later known as Ea in Akkadian and Babylonian mythology. He was originally patron god of the city of Eridu, but later the influence of his cult spread throughout Mesopotamia and to the Canaanites, Hittites and Hurrians. He was the deity of crafts (gašam); mischief; water, seawater, lake water (a, aba, ab), intelligence (gestú, literally "ear") and creation (Nudimmud: nu, likeness, dim mud, make bear). He was associated with the southern band of constellations called stars of Ea, but also with the constellation AŠ-IKU, the Field (Square of Pegasus). Beginning around the second millennium BCE, he was sometimes referred to in writing by the numeric ideogram for "40", occasionally referred to as his "sacred number".
Judaism
In the Hebrew Bible, forty is often used for time periods, forty days or forty years, which separate "two distinct epochs".
Rain fell for "forty days and forty nights" during the Flood (Genesis 7:4).
Noah waited for forty days after the tops of mountains were seen after the flood, before releasing a raven (Genesis 8:5–7).
Spies were sent by Moses to explore the land of Canaan (promised to the children of Israel) for "forty days" (Numbers 13:2, 25).
The Hebrew people lived in the lands outside of the promised land for "forty years". This period of years represents the time it takes for a new generation to arise (Numbers 32:13).
Several early Hebrew leaders and kings are said to have ruled for "forty years", that is, a generation. Examples include Eli (1 Samuel 4:18), Saul (Acts 13:21), David (2 Samuel 5:4), and Solomon (1 Kings 11:42).
Goliath challenged the Israelites twice a day for forty days before David defeated him (1 Samuel 17:16).
Moses spent three consecutive periods of "forty days and forty nights" on Mount Sinai:
He went up on the seventh day of Sivan, after God gave the Torah to the Jewish people, in order to learn the Torah from God, and came down on the seventeenth day of Tammuz, when he saw the Jews worshiping the Golden Calf and broke the tablets (Deuteronomy 9:11).
He went up on the eighteenth day of Tammuz to beg forgiveness for the people's sin and came down without God's atonement on the twenty-ninth day of Av (Deuteronomy 9:25).
He went up on the first day of Elul and came down on the tenth day of Tishrei, the first Yom Kippur, with God's atonement (Deuteronomy 10:10).
A mikvah consists of 40 se'ah (approximately ) of water
The prophet Elijah had to walk 40 days and 40 nights before arriving at mount Horeb (1 Kings 19:8).
40 lashes is one of the punishments meted out by the Sanhedrin (Deuteronomy 25:3), though in actual practice only 39 lashes were administered.
(Numbers 14:33–34) alludes to the same with ties to the prophecy in The Book of Daniel. "For forty years—one year for each of the forty days you explored the land—you will suffer for your sins and know what it is like to have Me against you."
One of the prerequisites for a man to study Kabbalah is that he is forty years old.
"The registering of these men was carried on cruelly, zealously, assiduously, from the rising of the sun to its going down, and was not brought to an end in forty days" (3 Maccabees 4:15).
Jonah warns Nineveh that "Forty days more, and Nineveh shall be overthrown." (Jonah 3:4)
Islam
Prophet Muhammad was forty years old when he first received the revelation delivered by the archangel Gabriel. The Quran states that a person reaches the age of maturity at 40.
Hinduism
In Hinduism, some popular religious prayers consist of forty shlokas or dohas (couplets, stanzas). The most common being the Hanuman Chalisa (chaalis is the Hindi term for 40).
In the Hindu system, some popular fasting periods last 40 days; such a period is called a Mandala Kalam (kalam means a period, so Mandala Kalam means a period of 40 days). For example, devotees of Swami Ayyappa, a Hindu god very popular in Kerala, India (Sabarimala Swami Ayyappan), traditionally observe a strict fast of at least 40 days and then visit the temple with their offering, called "Kaanikka", on the 41st day or on a convenient day after the fast is complete. (Female devotees of a certain biological age group, who could not perform the continuous 40-day austerities, were not permitted to enter the god's temple until September 2018.)
In other fields
Forty is also:
Kyrgyzstan, a country in Central Asia, takes its name from the Kyrgyz word meaning "Land of forty tribes". The number 40 is significant in Kyrgyz traditions and appears in many areas of Kyrgyz culture.
In Tamil literary tradition, 40 (nāṟpatu: நாற்பது) and 400 (nāṉūṟu: நானூறு) have a special significance. Many short works of the post-Sangam period have 40 poems. Some well-known works with 40 poems are naṉṉeṟi, iṉṉā nāṟpatu, kaḷavaḻi nāṟpatu, iṉiyavai nāṟpatu, kār nāṟpatu.
the number of years of marriage celebrated by the ruby wedding anniversary
Forty-shilling freeholders, a name given to those who qualified for the franchise (the right to vote) based on their interest in land and/or property with an annual rental value of 40 shillings. Introduced in England in 1430, the qualification applied there and in many associated territories, in various forms, up to 1918.
References
Further reading
External links
Integers | 40 (number) | [
"Mathematics"
] | 1,508 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
341,671 | https://en.wikipedia.org/wiki/Square%20root%20of%202 | The square root of 2 (approximately 1.4142) is the positive real number that, when multiplied by itself or squared, equals the number 2. It may be written in mathematics as √2 or 2^(1/2). It is an algebraic number, and therefore not a transcendental number. Technically, it should be called the principal square root of 2, to distinguish it from the negative number with the same property.
Geometrically, the square root of 2 is the length of a diagonal across a square with sides of one unit of length; this follows from the Pythagorean theorem. It was probably the first number known to be irrational. The fraction 99/70 (≈ 1.4142857) is sometimes used as a good rational approximation with a reasonably small denominator.
Sequence in the On-Line Encyclopedia of Integer Sequences consists of the digits in the decimal expansion of the square root of 2, here truncated to 65 decimal places:
History
The Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) gives an approximation of √2 in four sexagesimal figures, 1;24,51,10, which is accurate to about six decimal digits, and is the closest possible three-place sexagesimal representation of √2, representing a margin of error of only −0.000042%: 1 + 24/60 + 51/60^2 + 10/60^3 = 1.41421296...
Another early approximation is given in ancient Indian mathematical texts, the Sulbasutras (c. 800–200 BC), as follows: Increase the length [of the side] by its third and this third by its own fourth less the thirty-fourth part of that fourth. That is, 1 + 1/3 + 1/(3·4) − 1/(3·4·34) = 577/408 ≈ 1.4142157.
This approximation, diverging from the actual value of √2 by approximately +0.00015%, is the seventh in a sequence of increasingly accurate approximations based on the sequence of Pell numbers, which can be derived from the continued fraction expansion of √2. Despite having a smaller denominator, it is only slightly less accurate than the Babylonian approximation.
Pythagoreans discovered that the diagonal of a square is incommensurable with its side, or in modern language, that the square root of two is irrational. Little is known with certainty about the time or circumstances of this discovery, but the name of Hippasus of Metapontum is often mentioned. For a while, the Pythagoreans treated the discovery that the square root of two is irrational as an official secret, and, according to legend, Hippasus was murdered for divulging it, although there is little substantial historical evidence for this story. The square root of two is occasionally called Pythagoras's number or Pythagoras's constant.
Ancient Roman architecture
In ancient Roman architecture, Vitruvius describes the use of the square root of 2 progression or ad quadratum technique. It consists basically in a geometric, rather than arithmetic, method to double a square, in which the diagonal of the original square is equal to the side of the resulting square. Vitruvius attributes the idea to Plato. The system was employed to build pavements by creating a square tangent to the corners of the original square at 45 degrees to it. The proportion was also used to design atria by giving them a length equal to a diagonal taken from a square whose sides are equivalent to the intended atrium's width.
Decimal value
Computation algorithms
There are many algorithms for approximating √2 as a ratio of integers or as a decimal. The most common algorithm for this, which is used as a basis in many computers and calculators, is the Babylonian method for computing square roots, an example of Newton's method for computing roots of arbitrary functions. It goes as follows:
First, pick a guess, a_0 > 0; the value of the guess affects only how many iterations are required to reach an approximation of a certain accuracy. Then, using that guess, iterate through the following recursive computation: a_(n+1) = (a_n + 2/a_n) / 2.
Each iteration improves the approximation, roughly doubling the number of correct digits. Starting with a_0 = 1, the subsequent iterations yield 3/2 = 1.5, 17/12 ≈ 1.4167, 577/408 ≈ 1.4142157, and 665857/470832 ≈ 1.4142135623747.
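As a concrete illustration, here is a minimal Python sketch of the iteration just described (an illustrative example, not part of the original text), using exact fractions so that the iterates 3/2, 17/12, 577/408, ... are visible:

from fractions import Fraction

def babylonian_sqrt2(iterations, a0=Fraction(1)):
    # Babylonian/Newton iteration a_(n+1) = (a_n + 2/a_n) / 2 for sqrt(2).
    a = a0
    for _ in range(iterations):
        a = (a + 2 / a) / 2
        print(a, float(a))
    return a

babylonian_sqrt2(4)
# 3/2           (1.5)
# 17/12         (approx. 1.4166667)
# 577/408       (approx. 1.4142157)
# 665857/470832 (approx. 1.4142135624)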
Rational approximations
A simple rational approximation 99/70 (≈ 1.4142857) is sometimes used. Despite having a denominator of only 70, it differs from the correct value by less than 1/10,000 (approx. +7.2 × 10^−5).
The next two better rational approximations are 140/99 (≈ 1.4141414...) with a marginally smaller error (approx. −7.2 × 10^−5), and 239/169 (≈ 1.4142012) with an error of approx. −1.2 × 10^−5.
The rational approximation of the square root of two derived from four iterations of the Babylonian method after starting with a_0 = 1 (665857/470832) is too large by about 1.6 × 10^−12; its square is ≈ 2.0000000000045.
Records in computation
In 1997, the value of was calculated to 137,438,953,444 decimal places by Yasumasa Kanada's team. In February 2006, the record for the calculation of was eclipsed with the use of a home computer. Shigeru Kondo calculated one trillion decimal places in 2010. Other mathematical constants whose decimal expansions have been calculated to similarly high precision include , , and the golden ratio. Such computations provide empirical evidence of whether these numbers are normal.
This is a table of recent records in calculating the digits of .
Proofs of irrationality
Proof by infinite descent
One proof of the number's irrationality is the following proof by infinite descent. It is also a proof of a negation by refutation: it proves the statement " is not rational" by assuming that it is rational and then deriving a falsehood.
1. Assume that √2 is a rational number, meaning that there exists a pair of integers whose ratio is exactly √2.
2. If the two integers have a common factor, it can be eliminated using the Euclidean algorithm.
3. Then √2 can be written as an irreducible fraction a/b such that a and b are coprime integers (having no common factor), which additionally means that at least one of a or b must be odd.
4. It follows that a^2/b^2 = 2 and a^2 = 2b^2. ((a/b)^2 = a^2/b^2, and a^2 and b^2 are integers.)
5. Therefore, a^2 is even because it is equal to 2b^2. (2b^2 is necessarily even because it is 2 times another whole number.)
6. It follows that a must be even (as squares of odd integers are never even).
7. Because a is even, there exists an integer k that fulfills a = 2k.
8. Substituting 2k from step 7 for a in the second equation of step 4: 2b^2 = (2k)^2 = 4k^2, which is equivalent to b^2 = 2k^2.
9. Because 2k^2 is divisible by two and therefore even, and because 2k^2 = b^2, it follows that b^2 is also even, which means that b is even.
10. By steps 6 and 9, a and b are both even, which contradicts step 3 (that a/b is irreducible).
11. Since we have derived a falsehood, the assumption (1) that √2 is a rational number must be false. This means that √2 is not a rational number; that is to say, √2 is irrational.
This proof was hinted at by Aristotle, in his Analytica Priora, §I.23. It appeared first as a full proof in Euclid's Elements, as proposition 117 of Book X. However, since the early 19th century, historians have agreed that this proof is an interpolation and not attributable to Euclid.
Proof using reciprocals
Assume by way of contradiction that √2 were rational. Then √2 + 1 may be written as an irreducible fraction q/p in lowest terms, with coprime positive integers q > p. Since (√2 − 1)(√2 + 1) = 1, it follows that √2 − 1 can be expressed as the irreducible fraction p/q. However, since √2 + 1 and √2 − 1 differ by an integer, the denominators of their irreducible fraction representations must be the same, i.e. q = p. This gives the desired contradiction.
Proof by unique factorization
As with the proof by infinite descent, we obtain a^2 = 2b^2. Being the same quantity, each side has the same prime factorization by the fundamental theorem of arithmetic, and in particular, would have to have the factor 2 occur the same number of times. However, the factor 2 appears an odd number of times on the right, but an even number of times on the left—a contradiction.
Application of the rational root theorem
The irrationality of √2 also follows from the rational root theorem, which states that a rational root of a polynomial, if it exists, must be the quotient of a factor of the constant term and a factor of the leading coefficient. In the case of p(x) = x^2 − 2, the only possible rational roots are ±1 and ±2. As √2 is not equal to ±1 or ±2, it follows that √2 is irrational. This application also invokes the integer root theorem, a stronger version of the rational root theorem for the case when p(x) is a monic polynomial with integer coefficients; for such a polynomial, all roots are necessarily integers (which √2 is not, as 2 is not a perfect square) or irrational.
The rational root theorem (or integer root theorem) may be used to show that any square root of any natural number that is not a perfect square is irrational. For other proofs that the square root of any non-square natural number is irrational, see Quadratic irrational number or Infinite descent.
Geometric proofs
A simple proof is attributed to Stanley Tennenbaum when he was a student in the early 1950s. Assume that √2 = a/b, where a and b are coprime positive integers. Then a and b are the smallest positive integers for which a^2 = 2b^2. Now consider two squares with sides a and b, and place two copies of the smaller square inside the larger one as shown in Figure 1. The area of the square overlap region in the centre must equal the sum of the areas of the two uncovered squares. Hence there exist positive integers p = 2b − a and q = a − b such that p^2 = 2q^2. Since it can be seen geometrically that p < a and q < b, this contradicts the original assumption.
Tom M. Apostol made another geometric reductio ad absurdum argument showing that is irrational. It is also an example of proof by infinite descent. It makes use of classic compass and straightedge construction, proving the theorem by a method similar to that employed by ancient Greek geometers. It is essentially the same algebraic proof as in the previous paragraph, viewed geometrically in another way.
Let be a right isosceles triangle with hypotenuse length and legs as shown in Figure 2. By the Pythagorean theorem, . Suppose and are integers. Let be a ratio given in its lowest terms.
Draw the arcs and with centre . Join . It follows that , and and coincide. Therefore, the triangles and are congruent by SAS.
Because is a right angle and is half a right angle, is also a right isosceles triangle. Hence implies . By symmetry, , and is also a right isosceles triangle. It also follows that .
Hence, there is an even smaller right isosceles triangle, with hypotenuse length and legs . These values are integers even smaller than and and in the same ratio, contradicting the hypothesis that is in lowest terms. Therefore, and cannot be both integers; hence, is irrational.
Constructive proof
While the proofs by infinite descent are constructively valid when "irrational" is defined to mean "not rational", we can obtain a constructively stronger statement by using a positive definition of "irrational" as "quantifiably apart from every rational". Let and be positive integers such that (as satisfies these bounds). Now and cannot be equal, since the first has an odd number of factors 2 whereas the second has an even number of factors 2. Thus . Multiplying the absolute difference by in the numerator and denominator, we get
the latter inequality being true because it is assumed that , giving (otherwise the quantitative apartness can be trivially established). This gives a lower bound of for the difference , yielding a direct proof of irrationality in its constructively stronger form, not relying on the law of excluded middle. This proof constructively exhibits an explicit discrepancy between and any rational.
Proof by Pythagorean triples
This proof uses the following property of primitive Pythagorean triples:
If a, b, and c are coprime positive integers such that a^2 + b^2 = c^2, then c is never even.
This lemma can be used to show that two identical perfect squares can never be added to produce another perfect square.
Suppose the contrary that √2 is rational. Therefore, √2 = a/b,
where a and b are integers and gcd(a, b) = 1.
Squaring both sides, 2 = a^2/b^2, that is, b^2 + b^2 = a^2.
Here, (b, b, a) is a primitive Pythagorean triple, and from the lemma a is never even. However, this contradicts the equation b^2 + b^2 = a^2, which implies that a must be even.
Multiplicative inverse
The multiplicative inverse (reciprocal) of the square root of two is a widely used constant, with the decimal value 0.7071067811865475...
It is often encountered in geometry and trigonometry because the unit vector, which makes a 45° angle with the axes in a plane, has the coordinates (√2/2, √2/2).
Each coordinate satisfies
Properties
One interesting property of √2 is that 1/(√2 − 1) = √2 + 1,
since (√2 + 1)(√2 − 1) = 2 − 1 = 1.
This is related to the property of silver ratios.
can also be expressed in terms of copies of the imaginary unit using only the square root and arithmetic operations, if the square root symbol is interpreted suitably for the complex numbers and :
is also the only real number other than 1 whose infinite tetrate (i.e., infinite exponential tower) is equal to its square. In other words: if for , and for , the limit of as will be called (if this limit exists) . Then is the only number for which . Or symbolically:
appears in Viète's formula for ,
which is related to the formula
Similar in appearance but with a finite number of terms, appears in various trigonometric constants:
It is not known whether is a normal number, which is a stronger property than irrationality, but statistical analyses of its binary expansion are consistent with the hypothesis that it is normal to base two.
Representations
Series and product
The identity , along with the infinite product representations for the sine and cosine, leads to products such as
and
or equivalently,
The number can also be expressed by taking the Taylor series of a trigonometric function. For example, the series for gives
The Taylor series of with and using the double factorial gives
The convergence of this series can be accelerated with an Euler transform, producing
It is not known whether can be represented with a BBP-type formula. BBP-type formulas are known for and , however.
The number can be represented by an infinite series of Egyptian fractions, with denominators defined by 2n th terms of a Fibonacci-like recurrence relation a(n) = 34a(n−1) − a(n−2), a(0) = 0, a(1) = 6.
Continued fraction
The square root of two has the following continued fraction representation: √2 = 1 + 1/(2 + 1/(2 + 1/(2 + ...))), written compactly as [1; 2, 2, 2, ...].
The convergents formed by truncating this representation form a sequence of fractions that approximate the square root of two to increasing accuracy, and that are described by the Pell numbers (i.e., the denominators 1, 2, 5, 12, 29, 70, ... are the Pell numbers). The first convergents are 1/1, 3/2, 7/5, 17/12, 41/29, 99/70, and the convergent following p/q is (p + 2q)/(p + q). The convergent p/q differs from √2 by almost exactly 1/(2√2·q^2), which follows from |p/q − √2| = |p^2 − 2q^2| / (q^2 (p/q + √2)) = 1 / (q^2 (p/q + √2)), since |p^2 − 2q^2| = 1 for these convergents.
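A short Python sketch (an illustrative example, not part of the original text) that generates these convergents by repeatedly appending the partial quotient 2:

from fractions import Fraction
from math import sqrt

def sqrt2_convergents(count):
    # Convergents of sqrt(2) from its continued fraction [1; 2, 2, 2, ...].
    p_prev, q_prev = 1, 0    # fictitious (-1)-th convergent
    p, q = 1, 1              # 0th convergent: 1/1
    result = [Fraction(p, q)]
    for _ in range(count - 1):
        # Standard continued-fraction recurrence with partial quotient 2.
        p, p_prev = 2 * p + p_prev, p
        q, q_prev = 2 * q + q_prev, q
        result.append(Fraction(p, q))
    return result

for c in sqrt2_convergents(7):
    print(c, abs(float(c) - sqrt(2)))
# 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, with steadily shrinking error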
Nested square
The following nested square expressions converge to
Applications
Paper size
In 1786, German physics professor Georg Christoph Lichtenberg found that any sheet of paper whose long edge is times longer than its short edge could be folded in half and aligned with its shorter side to produce a sheet with exactly the same proportions as the original. This ratio of lengths of the longer over the shorter side guarantees that cutting a sheet in half along a line results in the smaller sheets having the same (approximate) ratio as the original sheet. When Germany standardised paper sizes at the beginning of the 20th century, they used Lichtenberg's ratio to create the "A" series of paper sizes. Today, the (approximate) aspect ratio of paper sizes under ISO 216 (A4, A0, etc.) is 1:.
Proof:
Let x = shorter length and y = longer length of the sides of a sheet of paper, with y/x = √2,
as required by ISO 216.
Let R be the analogous ratio of the halved sheet. Then R = x/(y/2) = 2x/y = 2/√2 = √2 = y/x, so the halved sheet has the same aspect ratio as the original.
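A tiny numerical check of this property (an illustrative sketch, not part of the original text), using the nominal A-series dimensions in millimetres as rounded starting values:

import math

def halve(width_mm, height_mm):
    # Halve a sheet across its long side; return (short, long) of the result.
    short, long_side = sorted((width_mm, height_mm))
    return tuple(sorted((long_side / 2, short)))

sheet = (841, 1189)   # approximate A0 size in millimetres
for name in ("A0", "A1", "A2", "A3", "A4"):
    short, long_side = sorted(sheet)
    print(name, sheet, round(long_side / short, 4))   # ratio stays close to 1.4142
    sheet = halve(*sheet)
print(round(math.sqrt(2), 4))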
Physical sciences
There are some interesting properties involving the square root of 2 in the physical sciences:
The square root of two is the frequency ratio of a tritone interval in twelve-tone equal temperament music.
The square root of two forms the relationship of f-stops in photographic lenses, which in turn means that the ratio of areas between two successive apertures is 2.
The celestial latitude (declination) of the Sun during a planet's astronomical cross-quarter day points equals the tilt of the planet's axis divided by .
In the brain there are grid cells, discovered in 2005 by a group led by May-Britt and Edvard Moser. "The grid cells were found in the cortical area located right next to the hippocampus [...] At one end of this cortical area the mesh size is small and at the other it is very large. However, the increase in mesh size is not left to chance, but increases by the squareroot of two from one area to the next."
See also
List of mathematical constants
Square root of 3,
Square root of 5,
Gelfond–Schneider constant,
Silver ratio,
Notes
References
External links
The Square Root of Two to 5 million digits by Jerry Bonnell and Robert J. Nemiroff. May, 1994.
Square root of 2 is irrational, a collection of proofs
Search Engine 2 billion searchable digits of , and
Quadratic irrational numbers
Mathematical constants
Pythagorean theorem
Articles containing proofs | Square root of 2 | [
"Mathematics"
] | 3,526 | [
"Euclidean plane geometry",
"Mathematical objects",
"Equations",
"nan",
"Pythagorean theorem",
"Articles containing proofs",
"Planes (geometry)",
"Mathematical constants",
"Numbers"
] |
341,703 | https://en.wikipedia.org/wiki/Bertram%20Brockhouse | Bertram Neville Brockhouse, (July 15, 1918 – October 13, 2003) was a Canadian physicist. He was awarded the Nobel Prize in Physics (1994, shared with Clifford Shull) "for pioneering contributions to the development of neutron scattering techniques for studies of condensed matter", in particular "for the development of neutron spectroscopy".
Education and early life
Brockhouse was born in Lethbridge, Alberta, to a family of English descent. He was a graduate of the University of British Columbia (BA, 1947) and the University of Toronto (MA, 1948; Ph.D, 1950).
Career and research
From 1950 to 1962, Brockhouse carried out research at Atomic Energy of Canada's Chalk River Nuclear Laboratory. Here he was joined by P. K. Iyengar, who is treated as the father of India's nuclear program.
In 1962, he became a professor at McMaster University in Canada, where he remained until his retirement in 1984.
Brockhouse died on October 13, 2003, in Hamilton, Ontario, aged 85.
Awards and honours
Brockhouse was elected a Fellow of the Royal Society (FRS) in 1965. In 1982, Brockhouse was made an Officer of the Order of Canada and was promoted to Companion in 1995.
Brockhouse shared the 1994 Nobel Prize in Physics with American Clifford Shull of MIT for developing neutron scattering techniques for studying condensed matter.
In October 2005, as part of the 75th anniversary of McMaster University's establishment in Hamilton, Ontario, a street on the university campus (University Avenue) was renamed to Brockhouse Way in honour of Brockhouse. The town of Deep River, Ontario, has also named a street in his honour.
The Nobel Prize that Bertram Brockhouse won (shared with Clifford Shull) in 1994 was awarded after the longest-ever waiting time (counting from the time when the award-winning research had been carried out).
In 1999 the Division of Condensed Matter and Materials Physics (DCMMP) and the Canadian Association of Physicists (CAP) created a medal in honour of Brockhouse. The Brockhouse Medal is awarded annually to recognize and encourage outstanding experimental or theoretical contributions to condensed matter and materials physics. An eligible candidate must have performed their research primarily at a Canadian institution.
References
External links
Bertram Brockhouse, the Triple-axis Spectrometer, and Neutron Spectroscopy , from the Office of Scientific and Technical Information, United States Department of Energy
including the Nobel Lecture, December 8, 1994 Slow Neutron Spectroscopy and the Grand Atlas of the Physical World
1918 births
2003 deaths
Scientists from Lethbridge
Spectroscopists
Canadian nuclear physicists
University of Toronto alumni
University of British Columbia Faculty of Science alumni
Academic staff of McMaster University
Nobel laureates in Physics
Canadian Nobel laureates
Fellows of the Royal Society of Canada
Fellows of the American Physical Society
Companions of the Order of Canada
Canadian fellows of the Royal Society
Oliver E. Buckley Condensed Matter Prize winners
20th-century Canadian scientists
Members of the Royal Swedish Academy of Sciences | Bertram Brockhouse | [
"Physics",
"Chemistry"
] | 623 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |
341,745 | https://en.wikipedia.org/wiki/Signal%20corps | A signal corps is a military branch, responsible for military communications (signals). Many countries maintain a signal corps, which is typically subordinate to a country's army.
Military communication usually consists of radio, telephone, and digital communications.
Asia
Rejimen Semboyan Diraja, Malaysian Royal Signals Regiment
Indian Army Corps of Signals, raised in 1911.
Pakistan Army Corps of Signals, raised in 1947.
Singapore Armed Forces Signals Formation
Sri Lanka Signals Corps
Israeli C4I Corps
Korps Perhubungan TNI AD (Indonesian Army Signal Corps)
Armed Forces of the Philippines Signal Corps
Signal Department, Royal Thai Army
Australia
Royal Australian Corps of Signals
Royal New Zealand Corps of Signals
Europe
Arma delle Trasmissioni, corps of Italian Army founded in 1953, see List of units of the Italian Army.
Royal Corps of Signals, founded in the United Kingdom (under the name Telegraph Battalion Royal Engineers) in 1884.
Communications and Information Services Corps (CIS), the signals corps of Ireland's Defence Forces.
Communication and Information Systems Groups (CIS) of the Belgian Armed Forces, formerly the Transmission Troops
Signal Brigade, a unit of the Serbian Armed Forces.
Telegrafregimentet, Royal Danish Signal Regiment.
Sambandsbataljonen in the Brigade Nord of the Norwegian Army
Regiment Verbindingstroepen, a regiment of the Royal Netherlands Army.
Fernmeldetruppe of the Bundeswehr; previously, the Signal Corps of the Wehrmacht and Waffen SS.
Signal Communications Troops of Russia.
Signal Corps (French Army).
Viestirykmentti, Signal Regiment of the Finnish Army.
Swedish Army Signal Troops.
North America
Royal Canadian Corps of Signals, formed in 1903 as the Canadian Signalling Corps
United States Army Signal Corps, founded in 1860 by Major Albert J. Myer
See also
Military communications
Telegraph troops
Combat support occupations | Signal corps | [
"Engineering"
] | 373 | [
"Military communications",
"Telecommunications engineering"
] |
341,782 | https://en.wikipedia.org/wiki/Scarsdale%20diet | The Scarsdale diet, a high-protein low-carbohydrate fad diet designed for weight loss, created in the 1970s by Herman Tarnower and named for the town in New York where he practiced cardiology, is described in the book The Complete Scarsdale Medical Diet Plus Dr. Tarnower's Lifetime Keep-Slim Program. Tarnower wrote the book together with self-help author Samm Sinclair Baker.
Overview
The diet is similar to the Atkins Diet and Stillman diet in calling for a high-protein, low-carbohydrate diet, but also emphasizes the importance of fruits and vegetables. The diet restricts certain foods but allows an unrestricted amount of animal protein, especially eggs, fish, lean meats and poultry. For Sunday meals, the diet recommends "plenty of steak" with tomatoes, celery or brussels sprouts. The Scarsdale diet is low-calorie, restricted to 1,000 calories per day, and lasts between seven and fourteen days.
The book was originally published in 1978 and received an unexpected boost in popular sales when its author, Herman Tarnower, was murdered in 1980 by his jilted lover Jean Harris. During her trial, Harris' lawyer argued that she had been the book's "primary author".
Health risks
Medical experts have listed the Scarsdale diet as an example of a fad diet, as it carries potential health risks and does not instill the kind of healthy eating habits required for sustainable weight loss. It is unbalanced because of the high amount of meat consumed. The diet's high fat ratio may increase the risk of heart disease. People following the diet can lose much weight at first, but this loss is generally not sustained any better than with normal calorie restriction.
Nutritionist Elaine B. Feldman has commented that high-protein low-carbohydrate diets such as the Atkins and Scarsdale diets are nutritionally deficient, produce diuresis and are "clearly unphysiologic and may be hazardous". The Scarsdale diet was criticized by Henry Buchwald and colleagues for "serious nutritional deficiencies". Negative effects of the diet include constipation, nausea, weakness and bad breath due to ketosis. The diet has also been criticized for being deficient in vitamin A and riboflavin.
See also
List of fad diets
References
Fad diets
High-protein diets
Low-carbohydrate diets | Scarsdale diet | [
"Chemistry"
] | 509 | [
"Carbohydrates",
"Low-carbohydrate diets"
] |
341,818 | https://en.wikipedia.org/wiki/Lane%E2%80%93Emden%20equation | In astrophysics, the Lane–Emden equation is a dimensionless form of Poisson's equation for the gravitational potential of a Newtonian self-gravitating, spherically symmetric, polytropic fluid. It is named after astrophysicists Jonathan Homer Lane and Robert Emden. The equation reads
(1/ξ^2) d/dξ (ξ^2 dθ/dξ) + θ^n = 0,
where ξ is a dimensionless radius and θ is related to the density, and thus the pressure, by ρ = ρ_c θ^n for central density ρ_c. The index n is the polytropic index that appears in the polytropic equation of state,
P = K ρ^(1 + 1/n),
where P and ρ are the pressure and density, respectively, and K is a constant of proportionality. The standard boundary conditions are θ(0) = 1 and θ′(0) = 0. Solutions thus describe the run of pressure and density with radius and are known as polytropes of index n. If an isothermal fluid (polytropic index tends to infinity) is used instead of a polytropic fluid, one obtains the Emden–Chandrasekhar equation.
Applications
Physically, hydrostatic equilibrium connects the gradient of the potential, the density, and the gradient of the pressure, whereas Poisson's equation connects the potential with the density. Thus, if we have a further equation that dictates how the pressure and density vary with respect to one another, we can reach a solution. The particular choice of a polytropic gas as given above makes the mathematical statement of the problem particularly succinct and leads to the Lane–Emden equation. The equation is a useful approximation for self-gravitating spheres of plasma such as stars, but typically it is a rather limiting assumption.
Derivation
From hydrostatic equilibrium
Consider a self-gravitating, spherically symmetric fluid in hydrostatic equilibrium. Mass is conserved and thus described by the continuity equation
dm/dr = 4π r^2 ρ,
where m(r) is a function of r. The equation of hydrostatic equilibrium is
(1/ρ) dP/dr = −G m / r^2,
where m is also a function of r. Differentiating again gives
d/dr ((1/ρ) dP/dr) = 2 G m / r^3 − 4πGρ,
where the continuity equation has been used to replace the mass gradient. Multiplying both sides by r^2 and collecting the derivatives of P on the left, one can write
r^2 d/dr ((1/ρ) dP/dr) + 2r (1/ρ) dP/dr = d/dr ((r^2/ρ) dP/dr) = −4πG r^2 ρ.
Dividing both sides by r^2 yields, in some sense, a dimensional form of the desired equation. If, in addition, we substitute for the polytropic equation of state with P = K ρ_c^(1 + 1/n) θ^(n + 1) and ρ = ρ_c θ^n, we have
(1/r^2) d/dr (r^2 K ρ_c^(1/n) (n + 1) dθ/dr) = −4πG ρ_c θ^n.
Gathering the constants and substituting r = αξ, where
α^2 = (n + 1) K ρ_c^(1/n − 1) / (4πG),
we have the Lane–Emden equation,
(1/ξ^2) d/dξ (ξ^2 dθ/dξ) + θ^n = 0.
From Poisson's equation
Equivalently, one can start with Poisson's equation,
One can replace the gradient of the potential using the hydrostatic equilibrium, via
which again yields the dimensional form of the Lane–Emden equation.
Exact solutions
For a given value of the polytropic index n, denote the solution to the Lane–Emden equation as θ_n(ξ). In general, the Lane–Emden equation must be solved numerically to find θ_n. There are exact, analytic solutions for certain values of n, in particular n = 0, 1 and 5. For n between 0 and 5, the solutions are continuous and finite in extent, with the radius of the star given by R = α ξ_1, where θ_n(ξ_1) = 0.
For a given solution θ_n, the density profile is given by ρ = ρ_c θ_n^n.
The total mass M of the model star can be found by integrating the density over radius, from 0 to R.
The pressure can be found using the polytropic equation of state, P = K ρ^(1 + 1/n), i.e. P = K ρ_c^(1 + 1/n) θ_n^(n + 1).
Finally, if the gas is ideal, the equation of state is , where is the Boltzmann constant and the mean molecular weight. The temperature profile is then given by
In spherically symmetric cases, the Lane–Emden equation is integrable for only three values of the polytropic index .
For n = 0
If n = 0, the equation becomes
(1/ξ^2) d/dξ (ξ^2 dθ/dξ) = −1.
Re-arranging and integrating once gives
ξ^2 dθ/dξ = C_1 − ξ^3/3.
Dividing both sides by ξ^2 and integrating again gives
θ(ξ) = C_0 − C_1/ξ − ξ^2/6.
The boundary conditions θ(0) = 1 and θ′(0) = 0 imply that the constants of integration are C_0 = 1 and C_1 = 0. Therefore,
θ(ξ) = 1 − ξ^2/6.
For n = 1
When n = 1, the equation can be expanded in the form
d^2θ/dξ^2 + (2/ξ) dθ/dξ + θ = 0.
One assumes a power series solution:
θ(ξ) = Σ_(k=0..∞) a_k ξ^k.
This leads to a recursive relationship for the expansion coefficients:
a_(k+2) = −a_k / ((k + 2)(k + 3)).
This relation can be solved leading to the general solution:
θ(ξ) = a_0 sin(ξ)/ξ + a_1 cos(ξ)/ξ.
The boundary condition for a physical polytrope demands that θ(ξ) → 1 as ξ → 0.
This requires that a_0 = 1 and a_1 = 0, thus leading to the solution:
θ(ξ) = sin(ξ)/ξ.
For n = 2
This exact solution was found by accident when searching for zero values of the related Tolman–Oppenheimer–Volkoff (TOV) equation.
We consider a series expansion around
with initial values and .
Plugging this into the Lane-Emden equation, we can show that all odd coefficients of the series vanish .
Furthermore, we obtain a recursive relationship between the even coefficients of the series.
It was proven that this series converges at least for but numerical results showed good agreement for much larger values.
For n = 5
We start from with the Lane–Emden equation:
Rewriting for produces:
Differentiating with respect to leads to:
Reduced, we come by:
Therefore, the Lane–Emden equation has the solution
θ(ξ) = 1 / √(1 + ξ^2/3)
when n = 5. This solution is finite in mass but infinite in radial extent, and therefore the complete polytrope does not represent a physical solution. Chandrasekhar believed for a long time that finding other solutions for n = 5 "is complicated and involves elliptic integrals".
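The closed-form solutions quoted above can be checked symbolically. The following Python sketch (an illustrative check using SymPy, not part of the original text) substitutes each solution back into the Lane–Emden equation and simplifies:

import sympy as sp

xi = sp.symbols('xi', positive=True)
exact_solutions = {
    0: 1 - xi**2 / 6,
    1: sp.sin(xi) / xi,
    5: (1 + xi**2 / 3) ** sp.Rational(-1, 2),
}
for n, theta in exact_solutions.items():
    # Left-hand side of the Lane-Emden equation; it should simplify to zero.
    lhs = sp.diff(xi**2 * sp.diff(theta, xi), xi) / xi**2 + theta**n
    print(n, sp.simplify(lhs))   # prints 0 for n = 0, 1 and 5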
Srivastava's solution
In 1962, Sambhunath Srivastava found an explicit solution when . His solution is given by
and from this solution, a family of solutions can be obtained using homology transformation. Since this solution does not satisfy the conditions at the origin (in fact, it is oscillatory with amplitudes growing indefinitely as the origin is approached), this solution can be used in composite stellar models.
Analytic solutions
In applications, the main role is played by analytic solutions that are expressible by a convergent power series expanded around some initial point. Typically the expansion point is ξ = 0, which is also a singular point (fixed singularity) of the equation, with the initial data provided at the centre of the star. One can prove that the equation has a convergent power series/analytic solution around the origin of the form
θ(ξ) = 1 − ξ^2/6 + (n/120) ξ^4 − ...
The radius of convergence of this series is limited due to the existence of two singularities on the imaginary axis in the complex plane. These singularities are located symmetrically with respect to the origin. Their positions change when we change the equation parameters and the initial condition, and therefore they are called movable singularities, following the classification of the singularities of non-linear ordinary differential equations in the complex plane by Paul Painlevé. A similar structure of singularities appears in other non-linear equations that result from the reduction of the Laplace operator in spherical symmetry, e.g., the Isothermal Sphere equation.
Analytic solutions can be extended along the real line by analytic continuation procedure resulting in the full profile of the star or molecular cloud cores. Two analytic solutions with the overlapping circles of convergence can also be matched on the overlap to the larger domain solution, which is a commonly used method of construction of profiles of required properties.
The series solution is also used in the numerical integration of the equation. It is used to shift the initial data for analytic solution slightly away from the origin since at the origin the numerical methods fail due to the singularity of the equation.
Numerical solutions
In general, solutions are found by numerical integration. Many standard methods require that the problem is formulated as a system of first-order ordinary differential equations. For example,
dθ/dξ = −m/ξ^2,
dm/dξ = ξ^2 θ^n.
Here, m(ξ) is interpreted as the dimensionless mass, defined by m(ξ) = ∫_0^ξ θ^n ξ′^2 dξ′. The relevant initial conditions are θ(0) = 1 and m(0) = 0. The first equation represents hydrostatic equilibrium and the second represents mass conservation.
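For illustration, the following Python sketch (an illustrative example, not part of the original text) integrates this first-order system with a hand-rolled fourth-order Runge–Kutta step and stops at the first zero of θ, which gives ξ_1; the starting point is shifted slightly off ξ = 0 using the series expansion, as described above. For n = 1 the result should approach the exact value ξ_1 = π.

import math

def lane_emden_xi1(n, h=1e-4):
    # Integrate dtheta/dxi = -m/xi^2, dm/dxi = xi^2 * theta^n with RK4
    # and return the first zero xi_1 of theta.
    def rhs(xi, theta, m):
        return (-m / xi**2, xi**2 * max(theta, 0.0)**n)

    # Start slightly off the singular point xi = 0 using the series
    # theta ~ 1 - xi^2/6, m ~ xi^3/3.
    xi = 1e-3
    theta = 1.0 - xi**2 / 6.0
    m = xi**3 / 3.0
    while theta > 0.0:
        k1 = rhs(xi, theta, m)
        k2 = rhs(xi + h / 2, theta + h / 2 * k1[0], m + h / 2 * k1[1])
        k3 = rhs(xi + h / 2, theta + h / 2 * k2[0], m + h / 2 * k2[1])
        k4 = rhs(xi + h, theta + h * k3[0], m + h * k3[1])
        theta += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        m += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        xi += h
    return xi

print(lane_emden_xi1(1), math.pi)   # ~3.1416, close to the exact value pi
print(lane_emden_xi1(3))            # ~6.897 for the astrophysically common n = 3 polytrope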
Homologous variables
Homology-invariant equation
It is known that if is a solution of the Lane–Emden equation, then so is . Solutions that are related in this way are called homologous; the process that transforms them is homology. If one chooses variables that are invariant to homology, then we can reduce the order of the Lane–Emden equation by one.
A variety of such variables exist. A suitable choice is
and
We can differentiate the logarithms of these variables with respect to , which gives
and
Finally, we can divide these two equations to eliminate the dependence on , which leaves
This is now a single first-order equation.
Topology of the homology-invariant equation
The homology-invariant equation can be regarded as the autonomous pair of equations
and
The behaviour of solutions to these equations can be determined by linear stability analysis. The critical points of the equation (where ) and the eigenvalues and eigenvectors of the Jacobian matrix are tabulated below.
See also
Emden–Chandrasekhar equation
Chandrasekhar's white dwarf equation
References
Further reading
External links
Astrophysics
Ordinary differential equations | Lane–Emden equation | [
"Physics",
"Astronomy"
] | 1,708 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
341,821 | https://en.wikipedia.org/wiki/Spectrolite | Spectrolite is an uncommon variety of labradorite feldspar.
Colors
Spectrolite exhibits a richer range of colors than other labradorites, such as those from Canada or Madagascar (which show mostly tones of blue-grey-green), and high labradorescence. Due to the unique colors of the material mined in Finland, spectrolite has become a brand name for material mined only there. Sometimes spectrolite is incorrectly used to describe labradorite whenever a richer display of colors is present, regardless of locality: for example, labradorite from Madagascar showing the spectrolite play of colors has sometimes been described as spectrolite.
History
Finnish geologist Aarne Laitakari (1890–1975) described the peculiar stone and sought its origin for years when his son Pekka discovered a deposit at Ylämaa in south-eastern Finland, while building the Salpa Line fortifications there in 1940.
The quarrying of spectrolite began after the Second World War and became a significant local industry. In 1973, the first workshop in Ylämaa began cutting and polishing spectrolite for jewels. After that, a gem center was established in Ylämaa with training for gem-cutting accompanied by an annual Gem and Mineral Show initiated by Esko Hämäläinen, mayor of Ylämaa municipality.
References
Lahti, Seppo I. (1989). "The origin of interference colours in spectrolite (iridescent labradorite)". Geologi 41.
Feldspar
Gemstones | Spectrolite | [
"Physics"
] | 291 | [
"Materials",
"Gemstones",
"Matter"
] |
341,918 | https://en.wikipedia.org/wiki/Parthenocarpy | In botany and horticulture, parthenocarpy is the natural or artificially induced production of fruit without fertilisation of ovules, which makes the fruit seedless. The phenomenon has been observed since ancient times but was first scientifically described by German botanist Fritz Noll in 1902.
Stenospermocarpy may also produce apparently seedless fruit, but the seeds are actually aborted while they are still small. Parthenocarpy (or stenospermocarpy) occasionally occurs as a mutation in nature; if it affects every flower, the plant can no longer sexually reproduce but might be able to propagate by apomixis or by vegetative means. Examples of this include many citrus varieties that undergo nucellar embryony for reproduction, instead of solely sexual reproduction, and can yield seedless fruits.
Ecological importance
Parthenocarpy of some fruits on a plant may be of value. Up to 20% of the fruits of wild parsnip are parthenocarpic. The seedless wild parsnip fruit are preferred by certain herbivores and so serve as a "decoy defense" against seed predation. Utah juniper has a similar defense against bird feeding. The ability to produce seedless fruit when pollination is unsuccessful may be an advantage to a plant because it provides food for the plant's seed dispersers. Without a fruit crop, the seed dispersing animals may starve or migrate.
In some plants, pollination or another stimulation is required for parthenocarpy, termed stimulative parthenocarpy. Plants that do not require pollination or other stimulation to produce parthenocarpic fruit have vegetative parthenocarpy. Seedless cucumbers are an example of vegetative parthenocarpy; seedless watermelons are an example of stenospermocarpy, as their seeds are aborted while still immature.
Plants that moved from one area of the world to another may not always be accompanied by their pollinating partner, and the lack of pollinators has spurred human cultivation of parthenocarpic varieties.
Commercial importance
Seedlessness is seen as a desirable trait in edible fruit with hard seeds such as banana, pineapple, orange and grapefruit. Parthenocarpy is also desirable in fruit crops that may be difficult to pollinate or fertilize, such as fig, tomato and summer squash. In dioecious species, such as persimmon, parthenocarpy increases fruit production because staminate trees do not need to be planted to provide pollen. Parthenocarpy is undesirable in nut crops, such as pistachio, for which the seed is the edible part. Horticulturists have selected and propagated parthenocarpic cultivars of many plants, including banana, fig, cactus pear (Opuntia), breadfruit and eggplant. Some plants, such as pineapple, produce seedless fruits when a single cultivar is grown because they are self-infertile. Some cucumbers produce seedless fruit if pollinators are excluded. Seedless watermelon plants are actually grown from seeds. The seeds are produced by crossing a diploid parent with a tetraploid parent to produce triploid seeds. It has been suggested that parthenocarpy could explain the difference in the yields in active compounds of the genus Cannabis.
Some parthenocarpic cultivars are of ancient origin. The oldest known cultivated plant is a parthenocarpic fig that was first grown at least 11,200 years ago.
In some climates, normally-seeded pear cultivars produce mainly seedless fruit for lack of pollination.
When sprayed on flowers, any of the plant hormones gibberellin, auxin and cytokinin could stimulate the development of parthenocarpic fruit. That is termed artificial parthenocarpy. Plant hormones are seldom used commercially to produce parthenocarpic fruit. Home gardeners sometimes spray their tomatoes with an auxin to assure fruit production.
Some parthenocarpic cultivars have been developed as genetically modified organisms.
Misconceptions
Most commercial seedless grape cultivars, such as 'Thompson Seedless', are seedless not because of parthenocarpy but because of stenospermocarpy.
Parthenocarpy is sometimes claimed to be the equivalent of parthenogenesis in animals. That is incorrect because parthenogenesis is a method of asexual reproduction, with embryo formation without fertilization, and parthenocarpy involves fruit formation, without seed formation. The plant equivalent of parthenogenesis is apomixis.
References
External links
Plant morphology
Plant reproduction | Parthenocarpy | [
"Biology"
] | 995 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction",
"Plant morphology"
] |
342,038 | https://en.wikipedia.org/wiki/List%20of%20Lie%20groups%20topics | This is a list of Lie group topics, by Wikipedia page.
Examples
See Table of Lie groups for a list
General linear group, special linear group
SL2(R)
SL2(C)
Unitary group, special unitary group
SU(2)
SU(3)
Orthogonal group, special orthogonal group
Rotation group SO(3)
SO(8)
Generalized orthogonal group, generalized special orthogonal group
The special unitary group SU(1,1) is the unit sphere in the ring of coquaternions. It is the group of hyperbolic motions of the Poincaré disk model of the Hyperbolic plane.
Lorentz group
Spinor group
Symplectic group
Exceptional groups
G2
F4
E6
E7
E8
Affine group
Euclidean group
Poincaré group
Heisenberg group
Lie algebras
Commutator
Jacobi identity
Universal enveloping algebra
Baker-Campbell-Hausdorff formula
Casimir invariant
Killing form
Kac–Moody algebra
Affine Lie algebra
Loop algebra
Graded Lie algebra
Foundational results
One-parameter group, One-parameter subgroup
Matrix exponential
Infinitesimal transformation
Lie's third theorem
Maurer–Cartan form
Cartan's theorem
Cartan's criterion
Local Lie group
Formal group law
Hilbert's fifth problem
Hilbert-Smith conjecture
Lie group decompositions
Real form (Lie theory)
Complex Lie group
Complexification (Lie group)
Semisimple theory
Simple Lie group
Compact Lie group, Compact real form
Semisimple Lie algebra
Root system
Simply laced group
ADE classification
Maximal torus
Weyl group
Dynkin diagram
Weyl character formula
Representation theory
Representation of a Lie group
Representation of a Lie algebra
Adjoint representation of a Lie group
Adjoint representation of a Lie algebra
Unitary representation
Weight (representation theory)
Peter–Weyl theorem
Borel–Weil theorem
Kirillov character formula
Representation theory of SU(2)
Representation theory of SL2(R)
Applications
Physical theories
Pauli matrices
Gell-Mann matrices
Poisson bracket
Noether's theorem
Wigner's classification
Gauge theory
Grand unification theory
Supergroup
Lie superalgebra
Twistor theory
Anyon
Witt algebra
Virasoro algebra
Geometry
Erlangen programme
Homogeneous space
Principal homogeneous space
Invariant theory
Lie derivative
Darboux derivative
Lie groupoid
Lie algebroid
Discrete groups
Lattice (group)
Lattice (discrete subgroup)
Frieze group
Wallpaper group
Space group
Crystallographic group
Fuchsian group
Modular group
Congruence subgroup
Kleinian group
Discrete Heisenberg group
Clifford–Klein form
Algebraic groups
Borel subgroup
Arithmetic group
Special functions
Dunkl operator
Automorphic forms
Modular form
Langlands program
People
Sophus Lie (1842 – 1899)
Wilhelm Killing (1847 – 1923)
Élie Cartan (1869 – 1951)
Hermann Weyl (1885 – 1955)
Harish-Chandra (1923 – 1983)
Lajos Pukánszky (1928 – 1996)
Bertram Kostant (1928 – 2017)
Lie groups
Lie algebras
Lie groups
Lie groups | List of Lie groups topics | [
"Mathematics"
] | 591 | [
"Lie groups",
"Mathematical structures",
"Algebraic structures",
"nan"
] |
342,058 | https://en.wikipedia.org/wiki/Gimbal%20lock | Gimbal lock is the loss of one degree of freedom in a multi-dimensional mechanism at certain alignments of the axes. In a three-dimensional three-gimbal mechanism, gimbal lock occurs when the axes of two of the gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space.
The term gimbal-lock can be misleading in the sense that none of the individual gimbals are actually restrained. All three gimbals can still rotate freely about their respective axes of suspension. Nevertheless, because of the parallel orientation of two of the gimbals' axes, there is no gimbal available to accommodate rotation about one axis, leaving the suspended object effectively locked (i.e. unable to rotate) around that axis.
The problem can be generalized to other contexts, where a coordinate system loses definition of one of its variables at certain values of the other variables.
Gimbals
A gimbal is a ring that is suspended so it can rotate about an axis. Gimbals are typically nested one within another to accommodate rotation about multiple axes.
They appear in gyroscopes and in inertial measurement units to allow the inner gimbal's orientation to remain fixed while the outer gimbal suspension assumes any orientation. In compasses and flywheel energy storage mechanisms they allow objects to remain upright. They are used to orient thrusters on rockets.
Some coordinate systems in mathematics behave as if they were real gimbals used to measure the angles, notably Euler angles.
For cases of three or fewer nested gimbals, gimbal lock inevitably occurs at some point in the system due to properties of covering spaces.
In engineering
While only two specific orientations produce exact gimbal lock, practical mechanical gimbals encounter difficulties near those orientations. When a set of gimbals is close to the locked configuration, small rotations of the gimbal platform require large motions of the surrounding gimbals. Although the ratio is infinite only at the point of gimbal lock, the practical speed and acceleration limits of the gimbals—due to inertia (resulting from the mass of each gimbal ring), bearing friction, the flow resistance of air or other fluid surrounding the gimbals (if they are not in a vacuum), and other physical and engineering factors—limit the motion of the platform close to that point.
In two dimensions
Gimbal lock can occur in gimbal systems with two degrees of freedom such as a theodolite with rotations about an azimuth (horizontal angle) and elevation (vertical angle). These two-dimensional systems can gimbal lock at zenith and nadir, because at those points azimuth is not well-defined, and rotation in the azimuth direction does not change the direction the theodolite is pointing.
Consider tracking a helicopter flying towards the theodolite from the horizon. The theodolite is a telescope mounted on a tripod so that it can move in azimuth and elevation to track the helicopter. The helicopter flies towards the theodolite and is tracked by the telescope in elevation and azimuth. The helicopter flies immediately above the tripod (i.e. it is at zenith) when it changes direction and flies at 90 degrees to its previous course. The telescope cannot track this maneuver without a discontinuous jump in one or both of the gimbal orientations. There is no continuous motion that allows it to follow the target. It is in gimbal lock. So there is an infinity of directions around zenith for which the telescope cannot continuously track all movements of a target. Note that even if the helicopter does not pass through zenith, but only near zenith, so that gimbal lock does not occur, the system must still move exceptionally rapidly to track it, as it rapidly passes from one bearing to the other. The closer to zenith the nearest point is, the faster this must be done, and if it actually goes through zenith, the limit of these "increasingly rapid" movements becomes infinitely fast, namely discontinuous.
To recover from gimbal lock the user has to go around the zenith – explicitly: reduce the elevation, change the azimuth to match the azimuth of the target, then change the elevation to match the target.
Mathematically, this corresponds to the fact that spherical coordinates do not define a coordinate chart on the sphere at zenith and nadir. Alternatively, the corresponding map T2→S2 from the torus T2 to the sphere S2 (given by the point with given azimuth and elevation) is not a covering map at these points.
In three dimensions
Consider a case of a level-sensing platform on an aircraft flying due north with its three gimbal axes mutually perpendicular (i.e., roll, pitch and yaw angles each zero). If the aircraft pitches up 90 degrees, the aircraft and platform's yaw axis gimbal becomes parallel to the roll axis gimbal, and changes about yaw can no longer be compensated for.
Solutions
This problem may be overcome by use of a fourth gimbal, actively driven by a motor so as to maintain a large angle between roll and yaw gimbal axes. Another solution is to rotate one or more of the gimbals to an arbitrary position when gimbal lock is detected and thus reset the device.
Modern practice is to avoid the use of gimbals entirely. In the context of inertial navigation systems, that can be done by mounting the inertial sensors directly to the body of the vehicle (this is called a strapdown system) and integrating sensed rotation and acceleration digitally using quaternion methods to derive vehicle orientation and velocity. Another way to replace gimbals is to use fluid bearings or a flotation chamber.
On Apollo 11
A well-known gimbal lock incident happened in the Apollo 11 Moon mission. On this spacecraft, a set of gimbals was used on an inertial measurement unit (IMU). The engineers were aware of the gimbal lock problem but had declined to use a fourth gimbal. Some of the reasoning behind this decision is apparent from the following quote:
They preferred an alternate solution using an indicator that would be triggered when the platform came near 85 degrees of pitch.
Rather than try to drive the gimbals faster than they could go, the system simply gave up and froze the platform. From this point, the spacecraft would have to be manually moved away from the gimbal lock position, and the platform would have to be manually realigned using the stars as a reference.
After the Lunar Module had landed, Mike Collins aboard the Command Module joked "How about sending me a fourth gimbal for Christmas?"
Robotics
In robotics, gimbal lock is commonly referred to as "wrist flip", due to the use of a "triple-roll wrist" in robotic arms, where three axes of the wrist, controlling yaw, pitch, and roll, all pass through a common point.
An example of a wrist flip, also called a wrist singularity, is when the path through which the robot is traveling causes the first and third axes of the robot's wrist to line up. The second wrist axis then attempts to spin 180° in zero time to maintain the orientation of the end effector. The result of a singularity can be quite dramatic and can have adverse effects on the robot arm, the end effector, and the process.
The importance of avoiding singularities in robotics has led the American National Standard for Industrial Robots and Robot Systems – Safety Requirements to define it as "a condition caused by the collinear alignment of two or more robot axes resulting in unpredictable robot motion and velocities".
In applied mathematics
The problem of gimbal lock appears when one uses Euler angles in applied mathematics; developers of 3D computer programs, such as 3D modeling, embedded navigation systems, and video games must take care to avoid it.
In formal language, gimbal lock occurs because the map from Euler angles to rotations (topologically, from the 3-torus T3 to the real projective space RP3, which is the same as the space of rotations for three-dimensional rigid bodies, formally named SO(3)) is not a local homeomorphism at every point, and thus at some points the rank (degrees of freedom) must drop below 3, at which point gimbal lock occurs. Euler angles provide a means for giving a numerical description of any rotation in three-dimensional space using three numbers, but not only is this description not unique, but there are some points where not every change in the target space (rotations) can be realized by a change in the source space (Euler angles). This is a topological constraint – there is no covering map from the 3-torus to the 3-dimensional real projective space; the only (non-trivial) covering map is from the 3-sphere, as in the use of quaternions.
To make a comparison, all the translations can be described using three numbers x, y, and z, as the succession of three consecutive linear movements along three perpendicular axes x, y and z. The same holds true for rotations: all the rotations can be described using three numbers α, β, and γ, as the succession of three rotational movements around three axes that are perpendicular one to the next. This similarity between linear coordinates and angular coordinates makes Euler angles very intuitive, but unfortunately they suffer from the gimbal lock problem.
Loss of a degree of freedom with Euler angles
A rotation in 3D space can be represented numerically with matrices in several ways. One of these representations is as a product of three elementary rotations about the x, y and z axes, R = X(α)·Y(β)·Z(γ), with (each matrix written here row by row)
X(α) = [[1, 0, 0], [0, cos α, −sin α], [0, sin α, cos α]],
Y(β) = [[cos β, 0, sin β], [0, 1, 0], [−sin β, 0, cos β]],
Z(γ) = [[cos γ, −sin γ, 0], [sin γ, cos γ, 0], [0, 0, 1]].
An example worth examining happens when β = π/2. Knowing that cos(π/2) = 0 and sin(π/2) = 1, the above expression becomes equal to
R = X(α) · [[0, 0, 1], [0, 1, 0], [−1, 0, 0]] · Z(γ).
Carrying out matrix multiplication:
R = [[0, 0, 1], [cos α sin γ + sin α cos γ, cos α cos γ − sin α sin γ, 0], [sin α sin γ − cos α cos γ, sin α cos γ + cos α sin γ, 0]].
And finally using the trigonometry formulas for the sine and cosine of a sum:
R = [[0, 0, 1], [sin(α + γ), cos(α + γ), 0], [−cos(α + γ), sin(α + γ), 0]].
Changing the values of α and γ in the above matrix has the same effects: the rotation angle α + γ changes, but the rotation axis remains in the Z direction: the last column and the first row in the matrix won't change. The only solution for α and γ to recover different roles is to change β.
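This loss of a degree of freedom is easy to demonstrate numerically. The following Python/NumPy sketch (an illustration under the X-Y-Z rotation-matrix convention used above, not part of the original text) shows that once β = 90°, rotation matrices built from different pairs (α, γ) with the same sum α + γ are identical:

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_xyz(a, b, g):
    return rot_x(a) @ rot_y(b) @ rot_z(g)

beta = np.pi / 2   # gimbal-lock configuration
R1 = euler_xyz(np.radians(10), beta, np.radians(50))
R2 = euler_xyz(np.radians(35), beta, np.radians(25))   # same alpha + gamma = 60 degrees
print(np.allclose(R1, R2))   # True: only alpha + gamma matters at beta = 90 degrees

R3 = euler_xyz(np.radians(10), np.radians(80), np.radians(50))
R4 = euler_xyz(np.radians(35), np.radians(80), np.radians(25))
print(np.allclose(R3, R4))   # False: away from beta = 90 degrees the angles act independently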
It is possible to imagine an airplane rotated by the above-mentioned Euler angles using the X-Y-Z convention. In this case, the first angle α is the pitch. Yaw is then set to β = 90° and the final rotation, by γ, is again the airplane's pitch. Because of gimbal lock, it has lost one of the degrees of freedom - in this case the ability to roll.
It is also possible to choose another convention for representing a rotation with a matrix using Euler angles than the X-Y-Z convention above, and also choose other variation intervals for the angles, but in the end there is always at least one value for which a degree of freedom is lost.
The gimbal lock problem does not make Euler angles "invalid" (they always serve as a well-defined coordinate system), but it makes them unsuited for some practical applications.
Alternate orientation representation
The cause of gimbal lock is the representation of orientation in calculations as three axial rotations based on Euler angles. A potential solution therefore is to represent the orientation in some other way. This could be as a rotation matrix, a quaternion (see quaternions and spatial rotation), or a similar orientation representation that treats the orientation as a value rather than three separate and related values. Given such a representation, the user stores the orientation as a value. To quantify angular changes produced by a transformation, the orientation change is expressed as a delta angle/axis rotation. The resulting orientation must be re-normalized to prevent the accumulation of floating-point error in successive transformations. For matrices, re-normalizing the result requires converting the matrix into its nearest orthonormal representation. For quaternions, re-normalization requires performing quaternion normalization.
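As an illustration of the quaternion approach (a minimal sketch, not the API of any particular library), the orientation can be stored as a unit quaternion, updated by multiplying with a small delta rotation, and re-normalized so floating-point error does not accumulate:

import numpy as np

def quat_from_axis_angle(axis, angle):
    # Unit quaternion (w, x, y, z) for a rotation of 'angle' radians about 'axis'.
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    half = angle / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def quat_multiply(q, r):
    # Hamilton product of two quaternions given as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def normalize(q):
    return q / np.linalg.norm(q)

orientation = np.array([1.0, 0.0, 0.0, 0.0])                 # identity orientation
delta = quat_from_axis_angle([0, 0, 1], np.radians(1.0))     # 1 degree about z per step
for _ in range(90):
    orientation = normalize(quat_multiply(delta, orientation))
print(orientation)   # approx. [cos 45°, 0, 0, sin 45°], i.e. a 90 degree rotation about z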
See also
Aircraft principal axes
Grid north (equivalent navigational problem on polar expeditions)
Keyhole problem – Problems tracking near a gimbal axis under rate and acceleration limits
References
External links
Gimbal Lock - Explained at YouTube
Rotation in three dimensions
Angle
Gyroscopes
Spaceflight concepts
3D computer graphics | Gimbal lock | [
"Physics"
] | 2,519 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Wikipedia categories named after physical quantities",
"Angle"
] |
342,078 | https://en.wikipedia.org/wiki/Gimbal | A gimbal is a pivoted support that permits rotation of an object about an axis. A set of three gimbals, one mounted on the other with orthogonal pivot axes, may be used to allow an object mounted on the innermost gimbal to remain independent of the rotation of its support. For example, on a ship, the gyroscopes, shipboard compasses, stoves, and even drink holders typically use gimbals to keep them upright with respect to the horizon despite the ship's pitching and rolling.
The gimbal suspension used for mounting compasses and the like is sometimes called a Cardan suspension after Italian mathematician and physicist Gerolamo Cardano (1501–1576) who described it in detail. However, Cardano did not invent the gimbal, nor did he claim to. The device has been known since antiquity, first described in the 3rd c. BC by Philo of Byzantium, although some modern authors support the view that it may not have a single identifiable inventor.
History
The gimbal was first described by the Greek inventor Philo of Byzantium (280–220 BC). Philo described an eight-sided ink pot with an opening on each side, which can be turned so that while any face is on top, a pen can be dipped and inked — yet the ink never runs out through the holes of the other sides. This was done by the suspension of the inkwell at the center, which was mounted on a series of concentric metal rings so that it remained stationary no matter which way the pot is turned.
In Ancient China, the Han dynasty (202 BC – 220 AD) inventor and mechanical engineer Ding Huan created a gimbal incense burner around 180 AD. There is a hint in the writing of the earlier Sima Xiangru (179–117 BC) that the gimbal existed in China since the 2nd century BC. There is mention during the Liang dynasty (502–557) that gimbals were used for hinges of doors and windows, while an artisan once presented a portable warming stove to Empress Wu Zetian (r. 690–705) which employed gimbals. Extant specimens of Chinese gimbals used for incense burners date to the early Tang dynasty (618–907), and were part of the silver-smithing tradition in China.
The authenticity of Philo's description of a cardan suspension has been doubted by some authors on the ground that the part of Philo's Pneumatica which describes the use of the gimbal survived only in an Arabic translation of the early 9th century. Thus, as late as 1965, the sinologist Joseph Needham suspected Arab interpolation. However, Carra de Vaux, author of the French translation which still provides the basis for modern scholars, regards the Pneumatics as essentially genuine. The historian of technology George Sarton (1959) also asserts that it is safe to assume the Arabic version is a faithful copying of Philo's original, and credits Philon explicitly with the invention. So does his colleague Michael Lewis (2001). In fact, research by the latter scholar (1997) demonstrates that the Arab copy contains sequences of Greek letters which fell out of use after the 1st century, thereby strengthening the case that it is a faithful copy of the Hellenistic original, a view recently also shared by the classicist Andrew Wilson (2002).
The ancient Roman author Athenaeus Mechanicus, writing during the reign of Augustus (30 BC–14 AD), described the military use of a gimbal-like mechanism, calling it "little ape" (pithêkion). When preparing to attack coastal towns from the sea-side, military engineers used to yoke merchant-ships together to take the siege machines up to the walls. But to prevent the shipborne machinery from rolling around the deck in heavy seas, Athenaeus advises that "you must fix the pithêkion on the platform attached to the merchant-ships in the middle, so that the machine stays upright in any angle".
After antiquity, gimbals remained widely known in the Near East. In the Latin West, reference to the device appeared again in the 9th-century recipe book called the Little Key of Painting (mappae clavicula). The French inventor Villard de Honnecourt depicts a set of gimbals in his sketchbook. In the early modern period, dry compasses were suspended in gimbals.
Applications
Inertial navigation
In inertial navigation, as applied to ships and submarines, a minimum of three gimbals are needed to allow an inertial navigation system (stable table) to remain fixed in inertial space, compensating for changes in the ship's yaw, pitch, and roll. In this application, the inertial measurement unit (IMU) is equipped with three orthogonally mounted gyros to sense rotation about all axes in three-dimensional space. The gyro outputs are kept to a null through drive motors on each gimbal axis, to maintain the orientation of the IMU. To accomplish this, the gyro error signals are passed through "resolvers" mounted on the three gimbals, roll, pitch and yaw. These resolvers perform an automatic matrix transformation according to each gimbal angle, so that the required torques are delivered to the appropriate gimbal axis. The yaw torques must be resolved by roll and pitch transformations. The gimbal angle is never measured.
Similar sensing platforms are used on aircraft.
In inertial navigation systems, gimbal lock may occur when vehicle rotation causes two of the three gimbal rings to align with their pivot axes in a single plane. When this occurs, it is no longer possible to maintain the sensing platform's orientation.
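The effect of that alignment on the mathematics can be seen in the standard yaw-pitch-roll kinematic equations, which convert body angular rates into Euler-angle rates and contain a division by the cosine of the pitch angle. The sketch below (Python; the body rates are arbitrary illustrative numbers) shows the roll- and yaw-rate terms growing without bound as pitch approaches 90°, which is the same loss of a usable degree of freedom that a physical three-gimbal platform experiences.

```python
import numpy as np

def euler_rates(phi, theta, p, q, r):
    """Map body angular rates (p, q, r) to roll/pitch/yaw angle rates
    for the common aerospace yaw-pitch-roll (Z-Y-X) Euler sequence."""
    phi_dot   = p + (q * np.sin(phi) + r * np.cos(phi)) * np.tan(theta)
    theta_dot = q * np.cos(phi) - r * np.sin(phi)
    psi_dot   = (q * np.sin(phi) + r * np.cos(phi)) / np.cos(theta)
    return phi_dot, theta_dot, psi_dot

p, q, r = 0.0, 0.1, 0.1          # modest body rates in rad/s, illustrative only
for pitch_deg in (0, 45, 80, 89, 89.9):
    rates = euler_rates(phi=0.2, theta=np.radians(pitch_deg), p=p, q=q, r=r)
    print(pitch_deg, [round(x, 3) for x in rates])

# As pitch approaches 90 degrees the roll- and yaw-rate terms blow up:
# the representation, not the vehicle, is what breaks down.
```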
Rocket engines
In spacecraft propulsion, rocket engines are generally mounted on a pair of gimbals to allow a single engine to vector thrust about both the pitch and yaw axes; or sometimes just one axis is provided per engine. To control roll, twin engines with differential pitch or yaw control signals are used to provide torque about the vehicle's roll axis.
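A rough numerical illustration of this arrangement is sketched below (Python; the thrust, moment arm, and offsets are invented round numbers, not data for any real vehicle). A single gimballed engine produces pitch and yaw moments proportional to the sine of its deflection angles, while a pair of engines mounted off the roll axis and deflected in opposite directions produces a roll couple.

```python
import math

THRUST = 1_000_000.0   # N per engine (illustrative assumption)
ARM    = 15.0          # m from gimbal to vehicle centre of mass (assumption)
OFFSET = 2.0           # m lateral offset of each engine from the roll axis (assumption)

def pitch_yaw_moments(pitch_deg, yaw_deg):
    # Lateral force components from gimballing one engine,
    # multiplied by the moment arm to the centre of mass.
    fy = THRUST * math.sin(math.radians(yaw_deg))
    fz = THRUST * math.sin(math.radians(pitch_deg))
    return fz * ARM, fy * ARM          # pitch moment, yaw moment (N*m)

def roll_moment(differential_deg):
    # Two engines offset from the roll axis, deflected in opposite
    # directions, produce a couple about the roll axis.
    f = THRUST * math.sin(math.radians(differential_deg))
    return 2 * f * OFFSET              # N*m

print(pitch_yaw_moments(1.0, 0.5))     # small gimbal angles already give large moments
print(roll_moment(0.5))
```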
Photography and imaging
Gimbals are also used to mount everything from small camera lenses to large photographic telescopes.
In portable photography equipment, single-axis gimbal heads are used in order to allow a balanced movement for camera and lenses. This proves useful in wildlife photography as well as in any other case where very long and heavy telephoto lenses are adopted: a gimbal head rotates a lens around its center of gravity, thus allowing for easy and smooth manipulation while tracking moving subjects.
Very large gimbal mounts, in the form of 2- or 3-axis altitude-altitude mounts, are used in satellite photography for tracking purposes.
Gyrostabilized gimbals which house multiple sensors are also used for airborne surveillance applications including airborne law enforcement, pipe and power line inspection, mapping, and ISR (intelligence, surveillance, and reconnaissance). Sensors include thermal imaging, daylight, low light cameras as well as laser range finder, and illuminators.
Gimbal systems are also used in scientific optics equipment. For example, they are used to rotate a material sample about an axis in order to study the angular dependence of its optical properties.
Film and video
Handheld 3-axis gimbals are used in stabilization systems designed to give the camera operator the independence of handheld shooting without camera vibration or shake. There are two versions of such stabilization systems: mechanical and motorized.
Mechanical gimbals have a sled, which includes the top stage where the camera is attached; the post, which in most models can be extended; and the monitor and batteries at the bottom to counterbalance the camera weight. This is how the Steadicam stays upright: by simply making the bottom slightly heavier than the top, pivoting at the gimbal. This leaves the center of gravity of the whole rig, however heavy it may be, exactly at the operator's fingertip, allowing deft and fine control of the whole system with the lightest of touches on the gimbal.
Powered by three brushless motors, motorized gimbals have the ability to keep the camera level on all axes as the camera operator moves the camera. An inertial measurement unit (IMU) responds to movement and utilizes its three separate motors to stabilize the camera. With the guidance of algorithms, the stabilizer is able to distinguish deliberate movement, such as pans and tracking shots, from unwanted shake. This allows the camera to seem as if it is floating through the air, an effect achieved by a Steadicam in the past. Gimbals can be mounted to cars and other vehicles such as drones, where vibrations or other unexpected movements would make tripods or other camera mounts unacceptable. An example which is popular in the live TV broadcast industry is the Newton 3-axis camera gimbal.
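A heavily reduced, single-axis sketch of what such a controller does is shown below (Python; the gains, loop rate, and filter constant are invented values, and real products use full three-axis sensor fusion and far more sophisticated control). Slow handle motion leaks into the setpoint and is followed as a deliberate pan, while fast shake is cancelled by the motor torque command.

```python
class AxisStabilizer:
    """Single-axis sketch of a motorized gimbal control loop (illustrative only)."""

    def __init__(self, kp=40.0, kd=2.0, dt=0.002, alpha=0.001):
        self.kp, self.kd, self.dt = kp, kd, dt   # invented gains, assumed 500 Hz loop
        self.alpha = alpha                       # low-pass factor for "deliberate" motion
        self.setpoint = 0.0
        self.prev_error = 0.0

    def step(self, imu_angle, handle_angle):
        # Slow handle motion leaks into the setpoint and is followed (a pan);
        # fast shake does not, and is therefore cancelled by the motor.
        self.setpoint += self.alpha * (handle_angle - self.setpoint)
        error = self.setpoint - imu_angle
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative   # motor torque command

stab = AxisStabilizer()
print(stab.step(imu_angle=0.2, handle_angle=0.5))  # torque pushing the camera back toward the setpoint
```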
Marine chronometers
The rate of a mechanical marine chronometer is sensitive to its orientation. Because of this, chronometers were normally mounted on gimbals, in order to isolate them from the rocking motions of a ship at sea.
Gimbal lock
Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space.
The word lock is misleading: no gimbal is restrained. All three gimbals can still rotate freely about their respective axes of suspension. Nevertheless, because of the parallel orientation of two of the gimbals' axes there is no gimbal available to accommodate rotation about one axis.
See also
Canfield joint
Heligimbal
Universal joint
Cardan shaft
Keyhole problem
Trunnion
References
External links
Ancient Roman technology
Chinese inventions
Greek inventions
Gyroscopes
Hellenistic engineering
Mechanisms (engineering) | Gimbal | [
"Engineering"
] | 2,083 | [
"Mechanical engineering",
"Mechanisms (engineering)"
] |
342,086 | https://en.wikipedia.org/wiki/War%20of%20the%20currents | The war of the currents was a series of events surrounding the introduction of competing electric power transmission systems in the late 1880s and early 1890s. It grew out of two lighting systems developed in the late 1870s and early 1880s; arc lamp street lighting running on high-voltage alternating current (AC), and large-scale low-voltage direct current (DC) indoor incandescent lighting being marketed by Thomas Edison's company. In 1886, the Edison system was faced with new competition: an alternating current system initially introduced by George Westinghouse's company that used transformers to step down from a high voltage so AC could be used for indoor lighting. Using high voltage allowed an AC system to transmit power over longer distances from more efficient large central generating stations. As the use of AC spread rapidly with other companies deploying their own systems, the Edison Electric Light Company claimed in early 1888 that high voltages used in an alternating current system were hazardous, and that the design was inferior to, and infringed on the patents behind, their direct current system.
In the spring of 1888, a media furor arose over electrical fatalities caused by pole-mounted high-voltage AC lines, attributed to the greed and callousness of the arc lighting companies that operated them. In June of that year Harold P. Brown, a New York electrical engineer, claimed the AC-based lighting companies were putting the public at risk using high-voltage systems installed in a slipshod manner. Brown also claimed that alternating current was more dangerous than direct current and tried to prove this by publicly killing animals with both currents, with technical assistance from Edison Electric. The Edison company and Brown colluded further in their parallel goals to limit the use of AC with attempts to push through legislation to severely limit AC installations and voltages. Both also colluded with Westinghouse's chief AC rival, the Thomson-Houston Electric Company, to make sure the first electric chair was powered by a Westinghouse AC generator.
By the early 1890s, the war was winding down. Further deaths caused by AC lines in New York City forced electric companies to fix safety problems. Thomas Edison no longer controlled Edison Electric, and subsidiary companies were beginning to add AC to the systems they were building. Mergers reduced competition between companies, including the merger of Edison Electric with their largest competitor, Thomson-Houston, forming General Electric in 1892. Edison Electric's merger with their chief alternating current rival brought an end to the war of the currents and created a new company that now controlled three quarters of the US electrical business. Westinghouse won the bid to supply electrical power for the World's Columbian Exposition in 1893 and won the major part of the contract to build Niagara Falls hydroelectric project later that year (partially splitting the contract with General Electric). DC commercial power distribution systems declined rapidly in numbers throughout the 20th century; the last DC utility in New York City was shut down in 2007.
Background
The war of the currents grew out of the development of two lighting systems; arc lighting running on alternating current and incandescent lighting running on direct current. Both were supplanting gas lighting systems, with arc lighting taking over large area/street lighting, and incandescent lighting replacing gas for business and residential indoor lighting.
Arc lighting
By the late 1870s, arc lamp systems were beginning to be installed in cities, powered by central generating plants. Arc lighting was capable of lighting streets, factory yards, or the interior of large buildings. Arc lamp systems used high voltages (above 3,000 volts) to supply current to multiple series-connected lamps, and some ran better on alternating current.
1880 saw the installation of large-scale arc lighting systems in several US cities including a central station set up by the Brush Electric Company in December 1880 to supply a length of Broadway in New York City with a 3,500–volt demonstration arc lighting system. The disadvantages of arc lighting were: it was maintenance intensive, buzzed, flickered, constituted a fire hazard, was really only suitable for outdoor lighting, and, at the high voltages used, was dangerous to work with.
Edison's direct current company
In 1878 inventor Thomas Edison saw a market for a system that could bring electric lighting directly into a customer's business or home, a niche not served by arc lighting systems. By 1882 the investor-owned utility Edison Illuminating Company was established in New York City. Edison designed his utility to compete with the then established gas lighting utilities, basing it on a relatively low 110-volt direct current supply to power a high resistance incandescent lamp he had invented for the system. Edison direct current systems would be sold to cities throughout the United States, making it a standard with Edison controlling all technical development and holding all the key patents. Direct current worked well with incandescent lamps, which were the principal load of the day. Direct-current systems could be directly used with storage batteries, providing valuable load-leveling and backup power during interruptions of generator operation. Direct-current generators could be easily paralleled, allowing economical operation by using smaller machines during periods of light load and improving reliability. Edison had invented a meter to allow customers to be billed for energy proportional to consumption, but this meter worked only with direct current. Direct current also worked well with electric motors, an advantage DC held throughout the 1880s. The primary drawback of the Edison direct current system was that it ran at 110 volts from generation to its final destination, giving it a relatively short useful transmission range: to keep the size of the expensive copper conductors down, generating plants had to be situated in the middle of population centers and could only supply customers less than a mile from the plant.
Westinghouse and alternating current
In 1884 Pittsburgh, Pennsylvania inventor and entrepreneur George Westinghouse entered the electric lighting business when he started to develop a DC system and hired William Stanley, Jr. to work on it. In 1885 he read an article in UK technical journal Engineering that described alternating current systems under development. By that time alternating current had gained a key advantage over direct current with the development of transformers that allowed the voltage to be "stepped up" to much higher transmission voltages and then dropped down to a lower end user voltage for business and residential use. The high voltages allowed a central generating station to supply a large area over much longer circuits. Westinghouse saw this as a way to build a truly competitive system instead of simply building another barely competitive DC lighting system using patents just different enough to get around the Edison patents. The Edison DC system of centralized DC plants with their short transmission range also meant there was a patchwork of un-supplied customers between Edison's plants that Westinghouse could easily supply with AC power.
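A rough back-of-the-envelope comparison (all figures below are illustrative assumptions, not historical data) shows why stepping up the voltage mattered so much: for a fixed amount of delivered power, the line current falls in proportion to the voltage, and resistive loss in the copper falls with the square of the current.

```python
# Illustrative comparison of line losses for the same delivered power and the
# same copper conductors, at Edison-style 110 V versus a stepped-up AC voltage.
POWER = 100_000.0      # W delivered to customers (assumed)
LINE_RESISTANCE = 0.5  # ohms round-trip (assumed)

for volts in (110.0, 2200.0):             # 2200 V ~ a 20:1 step-up (assumed)
    current = POWER / volts                # A flowing in the line
    loss = current**2 * LINE_RESISTANCE    # W dissipated in the copper (I^2 R)
    print(f"{volts:>6.0f} V: {current:7.1f} A, {loss / 1000.0:7.1f} kW lost in the line")

# 110 V:  about 909 A and roughly 413 kW of loss -> hopeless without enormous conductors
# 2200 V: about  45 A and roughly   1 kW of loss -> the same copper now works over distance
```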
In 1885 Westinghouse purchased the US patents rights to a transformer developed by French engineer Lucien Gaulard (financed by British engineer John Dixon Gibbs). He imported several of these "Gaulard–Gibbs" transformers as well as Siemens AC generators to begin experimenting with an AC-based lighting system in Pittsburgh. That same year William Stanley used the Gaulard-Gibbs design and designs from the Hungarian Ganz company's Z.B.D. transformer to develop the first practical transformer. The Westinghouse Electric Company was formed at the beginning of 1886.
In March 1886 Stanley, with Westinghouse's backing, installed the first multiple-voltage AC power system, a demonstration incandescent lighting system, in Great Barrington, Massachusetts. Expanded to the point where it could light 23 businesses along main street with very little power loss over 4000 feet, the system used transformers to step 500 AC volts at the street down to 100 volts to power incandescent lamps at each location. By fall of 1886 Westinghouse, Stanley, and Oliver B. Shallenberger had built the first commercial AC power system in the US in Buffalo, New York.
The spread of AC
By the end of 1887 Westinghouse had 68 alternating current power stations to Edison's 121 DC-based stations. To make matters worse for Edison, the Thomson-Houston Electric Company of Lynn, Massachusetts (another competitor offering AC- and DC-based systems) had built 22 power stations. Thomson-Houston was expanding their business while trying to avoid patent conflicts with Westinghouse, arranging deals such as coming to agreements over lighting company territory, paying a royalty to use the Stanley AC transformer patent, and allowing Westinghouse to use their Sawyer-Man incandescent bulb patent. Besides Thomson-Houston and Brush there were other competitors at the time, including the United States Illuminating Company and the Waterhouse Electric Light Company. All of the companies had their own electric power systems, arc lighting systems, and even incandescent lamp designs for domestic lighting, leading to constant lawsuits and patent battles between themselves and with Edison.
Safety concerns
Elihu Thomson of Thomson-Houston was concerned about AC safety and put a great deal of effort into developing a lightning arrestor for high-tension power lines as well as a magnetic blowout switch that could shut the system down in a power surge, a safety feature the Westinghouse system did not have. Thomson also worried about what would happen with the equipment after they sold it, assuming customers would follow a risky practice of installing as many lights and generators as they could get away with. He also thought the idea of using AC lighting in residential homes was too dangerous and had the company hold back on that type of installation until a safer transformer could be developed.
Due to the hazards presented by high voltage electrical lines, most European cities and the city of Chicago in the US required them to be buried underground. The City of New York did not require burying and had little in the way of regulation, so by the end of 1887 the mishmash of overhead wires for telephone, telegraph, fire and burglar alarm systems in Manhattan was now mixed with haphazardly strung AC lighting system wires carrying up to 6,000 volts. Insulation on power lines was rudimentary, with one electrician referring to it as having as much value "as a molasses covered rag", and exposure to the elements was eroding it over time. A third of the wires were simply abandoned by defunct companies and slowly deteriorating, causing damage to, and shorting out, the other lines. The wires were also an eyesore, and New Yorkers were further annoyed when a large March 1888 snowstorm (the Great Blizzard of 1888) tore down a large number of the lines, cutting off utilities in the city. This spurred on the idea of having these lines moved underground but it was stopped by a court injunction obtained by Western Union. Legislation to give all the utilities 90 days to move their lines into underground conduits supplied by the city was slowly making its way through the government but that was also being fought in court by the United States Illuminating Company, who claimed their AC lines were perfectly safe.
Edison's anti-AC stance
As AC systems continued to spread into territories covered by DC systems, with the companies seeming to impinge on Edison patents including incandescent lighting, things got worse for the company. The price of copper was rising, adding to the expense of Edison's low voltage DC system, which required much heavier copper wires than higher voltage AC systems. Thomas Edison's own colleagues and engineers were trying to get him to consider AC. Edison's sales force was continually losing bids in municipalities that opted for cheaper AC systems and Edison Electric Illuminating Company president Edward Hibberd Johnson pointed out that if the company stuck with an all DC system it would not be able to do business in small towns and even mid-sized cities. Edison Electric had a patent option on the ZBD transformer, and a 1886 confidential in-house report by electrical engineer Frank Sprague had recommended that the company go AC, but Thomas Edison was against the idea.
After Westinghouse installed his first large scale system, Edison wrote in a November 1886 private letter to Edward Johnson, "Just as certain as death Westinghouse will kill a customer within six months after he puts in a system of any size, He has got a new thing and it will require a great deal of experimenting to get it working practically." Edison seemed to hold a view that the very high voltage used in AC systems was too dangerous and that it would take many years to develop a safe and workable system. Safety and avoiding the bad press of killing a customer had been one of the goals in designing his DC system and he worried that a death caused by a mis-installed AC system could hold back the use of electricity in general. Edison's understanding of how AC systems worked seemed to be extensive. He noted what he saw as inefficiencies and that, combined with the capital costs in trying to finance very large generating plants, led him to believe there would be very little cost savings in an AC venture. Edison was also of the opinion that DC was a superior system (a fact that he was sure the public would come to recognize) and inferior AC technology was being used by other companies as a way to get around his DC patents.
In February 1888 Edison Electric president Edward Johnson published an 84-page pamphlet titled "A Warning from the Edison Electric Light Company" and sent it to newspapers and to companies that had purchased or were planning to purchase electrical equipment from Edison competitors, including Westinghouse and Thomson-Houston, stating that the competitors were infringing on Edison's incandescent light and other electrical patents.
Execution by electricity
As arc lighting systems spread, so did stories of how the high voltages involved were killing people, usually unwary linemen, a strange new phenomenon that seemed to instantaneously strike a victim dead. One such story in 1881 of a drunken dock worker dying after he grabbed a large electric dynamo led Buffalo, New York dentist Alfred P. Southwick to seek some application for the curious phenomenon. He worked with local physician George E. Fell and the Buffalo ASPCA, electrocuting hundreds of stray dogs, to come up with a method to euthanize animals via electricity. Southwick's 1882 and 1883 articles on how electrocution could be a replacement for hanging, using a restraint similar to a dental chair (an electric chair) caught the attention of New York State politicians who, following a series of botched hangings, were desperately seeking an alternative. An 1886 commission appointed by New York governor David B. Hill, which included Southwick, recommended in 1888 that executions be carried out by electricity using the electric chair.
There were early indications that this new form of execution would become mixed up with the war of currents. As part of their fact-finding, the commission sent out surveys to hundreds of experts on law and medicine, seeking their opinions, as well as contacting electrical experts, including Elihu Thomson and Thomas Edison. In late 1887, when death penalty commission member Southwick contacted Edison, the inventor stated he was against capital punishment and wanted nothing to do with the matter. After further prompting, Edison hit out at his chief electric power competitor, George Westinghouse, in what may have been the opening salvo in the war of currents, stating in a December 1887 letter to Southwick that it would be best to use current generated by "'alternating machines,' manufactured principally in this country by Geo. Westinghouse". Soon after the execution by electricity bill passed in June 1888, Edison was asked by a New York government official what means would be the best way to implement the state's new form of execution. "Hire out your criminals as linemen to the New York electric lighting companies" was Edison's tongue-in-cheek answer.
Anti-AC backlash
As the number of deaths attributed to high voltage lighting around the country continued to mount, a cluster of deaths in New York City in the spring of 1888 related to AC arc lighting set off a media frenzy against the "deadly arc-lighting current" and the seemingly callous lighting companies that used it. These deaths included a 15-year-old boy killed on April 15 by a broken telegraph line that had been energized with alternating current from a United States Illuminating Company line; a clerk killed two weeks later by an AC line; and a Brush Electric Company lineman killed in May by the AC line he was cutting. The press in New York seemed to switch overnight from stories about electric lights vs gas lighting to "death by wire" incidents, with each new report seeming to fan public resentment against high voltage AC and the dangerously tangled overhead electrical wires in the city.
Harold Brown's crusade
At this point an electrical engineer named Harold P. Brown, who at that time seemed to have no connection to the Edison company, sent a June 5, 1888 letter to the editor of the New York Post claiming the root of the problem was the alternating current (AC) system being used. Brown argued that the AC system was inherently dangerous and "damnable" and asked why the "public must submit to constant danger from sudden death" just so utilities could use a cheaper AC system.
At the beginning of attacks on AC, Westinghouse, in a June 7, 1888 letter, tried to defuse the situation. He invited Edison to visit him in Pittsburgh and said "I believe there has been a systemic attempt on the part of some people to do a great deal of mischief and create as great a difference as possible between the Edison Company and The Westinghouse Electric Co., when there ought to be an entirely different condition of affairs". Edison thanked him but said "My laboratory work consumes the whole of my time".
On June 8, Brown was lobbying in person before the New York Board of Electrical Control, asking that his letter to the paper be read into the meeting's record and demanding severe regulations on AC including limiting voltage to 300 volts, a level that would make AC next to useless for transmission. There were many rebuttals to Brown's claims in the newspapers and letters to the board, with people pointing out he was showing no scientific evidence that AC was more dangerous than DC. Westinghouse pointed out in letters to various newspapers the number of fires caused by DC equipment and suggested that Brown was obviously being controlled by Edison, something Brown continually denied.
A July edition of The Electrical Journal covered Brown's appearance before the New York Board of Electrical Control and the debate in technical societies over the merits of DC and AC.
At a July meeting of the Board of Electrical Control, Brown's criticisms of AC and even his knowledge of electricity were challenged by other electrical engineers, some of whom worked for Westinghouse. At this meeting, supporters of AC provided anecdotal stories from electricians on how they had survived shocks from AC at voltages up to 1,000 volts and argued that DC was the more dangerous of the two.
Brown's demonstrations
Brown, determined to prove alternating current was more dangerous than direct current, at some point contacted Thomas Edison to see if he could make use of equipment to conduct experiments. Edison immediately offered to assist Brown in his crusade against AC companies. Before long, Brown was loaned space and equipment at Edison's West Orange, New Jersey laboratory, as well as laboratory assistant Arthur Kennelly.
Brown paid local children to collect stray dogs off the street for his experiments with direct and alternating current. After much experimentation killing a series of dogs, Brown held a public demonstration on July 30 in a lecture room at Columbia College. With many participants shouting for the demonstration to stop and others walking out, Brown subjected a caged dog to several shocks with increasing levels of direct current up to 1,000 volts, which the dog survived. Brown then applied 330 volts of alternating current which killed the dog. Four days later he held a second demonstration to answer critics' claims that the DC probably weakened the dog before it died. In this second demonstration, three dogs were killed in quick succession with 300 volts of AC. Brown wrote to a colleague that he was sure this demonstration would get the New York Board of Electrical Control to limit AC installations to 300 volts. Brown's campaign to restrict AC to 300 volts was unsuccessful but legislation did come close to passing in Ohio and Virginia.
Collusion with Edison
What brought Brown to the forefront of the debate over AC, and what his motives were, remain unclear, but historians note there grew to be some form of collusion between the Edison company and Brown. Edison records seem to show it was Edison Electric Light treasurer Francis S. Hastings who came up with the idea of using Brown and several New York physicians to attack Westinghouse and the other AC companies in retaliation for what Hastings thought were unscrupulous bids by Westinghouse for lighting contracts in Denver and Minneapolis. Hastings brought Brown and Edison together and was in continual contact with Brown. Edison Electric seemed to be footing the bill for some of Brown's publications on the dangers of AC. In addition, Thomas Edison himself sent a letter to the city government of Scranton, Pennsylvania recommending Brown as an expert on the dangers of AC. Some of this collusion was exposed in letters stolen from Brown's office and published in August 1889.
Patents and mergers
During this period Westinghouse continued to pour money and engineering resources into the goal of building a completely integrated AC system. To gain control of the Sawyer-Man lamp patents he bought Consolidated Electric Light in 1887. He bought the Waterhouse Electric Light Company in 1888 and the United States Illuminating Company in 1890, giving Westinghouse their own arc lighting systems as well as control over all the major incandescent lamp patents not controlled by Edison. In April 1888 Westinghouse engineer Oliver B. Shallenberger developed an induction meter that used a rotating magnetic field for measuring alternating current, giving the company a way to calculate how much electricity a customer used. In July 1888 Westinghouse paid a substantial amount to license Nikola Tesla's US patents for a poly-phase AC induction motor and obtained a patent option on Galileo Ferraris' induction motor design. Although the acquisition of a feasible AC motor gave Westinghouse a key patent in building a completely integrated AC system, the general shortage of cash the company was going through by 1890 meant development had to be put on hold for a while. The difficulty of obtaining funding for such a capital-intensive business was becoming a serious problem for the company, and 1890 saw the first of several attempts by investor J. P. Morgan to take over Westinghouse Electric.
Thomson-Houston was continuing to expand, buying seven smaller electric companies including a purchase of the Brush Electric Company in 1889. By 1890 Thomson-Houston controlled the majority of the arc lighting systems in the US and a collection of its own US AC patents. Several of the business deals between Thomson-Houston and Westinghouse fell apart and in April 1888 a judge rolled back part of Westinghouse's original Gaulard Gibbs patent, stating it only covered transformers linked in series.
With the help of the financier Henry Villard the Edison group of companies also went through a series of mergers: Edison Lamp Company, a lamp manufacturer in East Newark, New Jersey; Edison Machine Works, a manufacturer of dynamos and large electric motors in Schenectady, New York; Bergmann & Company, a manufacturer of electric lighting fixtures, sockets, and other electric lighting devices; and Edison Electric Light Company, the patent-holding company and the financial arm backed by J.P. Morgan and the Vanderbilt family for Edison's lighting experiments, merged. The new company, Edison General Electric Company, was formed in January 1889 with the help of Drexel, Morgan & Co. and Grosvenor Lowrey with Villard as president. It later included the Sprague Electric Railway & Motor Company.
The peak of the war
Through the fall of 1888 a battle of words with Brown specifically attacking Westinghouse continued to escalate. In November George Westinghouse challenged Brown's assertion in the pages of the Electrical Engineer that the Westinghouse AC systems had caused 30 deaths. The magazine investigated the claim and found at most only two of the deaths could be attributed to Westinghouse installations.
Associating AC and Westinghouse with the electric chair
Although New York had a criminal procedure code that specified electrocution via an electric chair, it did not spell out the type of electricity, the amount of current, or its method of supply, since these were still relative unknowns. The New York Medico-Legal Society, an informal society composed of doctors and lawyers, was given the task of working out the details and in late 1888 through early 1889 conducted a series of animal experiments on voltage amounts, electrode design and placement, and skin conductivity. During this time they sought the advice of Harold Brown as a consultant. This ended up expanding the war of currents into the development of the chair and the general debate over capital punishment in the US.
After the Medico-Legal Society formed their committee in September 1888 chairman Frederick Peterson, who had been an assistant at Brown's July 1888 public electrocution of dogs with AC at Columbia College, had the results of those experiments submitted to the committee. The claims that AC was more deadly than DC and was the best current to use was questioned, with some committee members pointing out that Brown's experiments were not scientifically carried out and were on animals smaller than a human being. At their November meeting the committee recommended 3,000 volts although the type of electricity, direct current or alternating current, was not determined.
In order to more conclusively prove to the committee that AC was more deadly than DC, Brown contacted Edison Electric Light treasurer Francis S. Hastings to arrange the use of the West Orange laboratory. There on December 5, 1888 Brown set up an experiment with members of the press, members of the Medico-Legal Society, the chairman of the death penalty commission, and Thomas Edison looking on. Brown used alternating current for all of his tests on animals larger than a human, including 4 calves and a lame horse, all dispatched with 750 volts of AC. Based on these results the Medico-Legal Society's December meeting recommended the use of 1,000–1,500 volts of alternating current for executions and newspapers noted the AC used was half the voltage used in the power lines over the streets of American cities.
Westinghouse criticized these tests as a skewed self-serving demonstration designed to be a direct attack on alternating current. On December 13 in a letter to the New York Times, Westinghouse spelled out where Brown's experiments were wrong and claimed again that Brown was being employed by the Edison company. Brown's December 18 letter refuted the claims and Brown even challenged Westinghouse to an electrical duel, with Brown agreeing to be shocked by ever-increasing amounts of DC power if Westinghouse submitted himself to the same amount of increasing AC power, first to quit loses. Westinghouse declined the offer.
In March 1889 when members of the Medico-Legal Society embarked on another series of tests to work out the details of electrode composition and placement they turned to Brown for technical assistance. Edison treasurer Hastings tried unsuccessfully to obtain a Westinghouse AC generator for the test. They ended up using Edison's West Orange laboratory for the animal tests.
Also in March, Superintendent of Prisons Austin Lathrop asked Brown if he could supply the equipment needed for the executions as well as design the electric chair. Brown turned down the job of designing the chair but did agree to fulfill the contract to supply the necessary electrical equipment. The state refused to pay up front, and Brown apparently turned to Edison Electric as well as Thomson-Houston Electric Company for help obtaining the equipment. This became another behind-the-scenes maneuver to acquire Westinghouse AC generators to supply the current, apparently with the help of the Edison company and Westinghouse's chief AC rival, Thomson-Houston. Thomson-Houston arranged to acquire three Westinghouse AC generators by replacing them with new Thomson-Houston AC generators. Thomson-Houston president Charles Coffin had at least two reasons for obtaining the Westinghouse generators; he did not want his company's equipment to be associated with the death penalty, and he wanted to use one to prove a point, paying Brown to set up a public efficiency test to show that Westinghouse's sales claim of manufacturing 50% more efficient generators was false.
That spring Brown published "The Comparative Danger to Life of the Alternating and Continuous Electrical Current" detailing the animal experiments done at Edison's lab and claiming they showed AC was far deadlier than DC. This 61-page professionally printed booklet (possibly paid for by the Edison company) was sent to government officials, newspapers, and businessmen in towns with populations greater than 5,000 inhabitants.
In May 1889 when New York had its first criminal sentenced to be executed in the electric chair, a street merchant named William Kemmler, there was a great deal of discussion in the editorial column of the New York Times as to what to call the then-new form of execution. The term "Westinghoused" was put forward as well as "Gerrycide" (after death penalty commission head Elbridge Gerry), and "Browned". The Times hated the word that was eventually adopted, electrocution, describing it as being pushed forward by "pretentious ignoramuses". One of Edison's lawyers wrote to his colleague expressing an opinion that Edison's preference for dynamort, ampermort and electromort were not good terms but thought Westinghoused was the best choice.
The Kemmler appeal
William Kemmler was sentenced to die in the electric chair around June 24, 1889, but before the sentence could be carried out an appeal was filed on the grounds that it constituted cruel and unusual punishment under the U.S. Constitution. It became obvious to the press and everyone involved that the politically connected (and expensive) lawyer who filed the appeal, William Bourke Cockran, had no connection to the case but did have a connection to the Westinghouse company, which was evidently paying for his services.
During fact-finding hearings held around the state beginning on July 9 in New York City, Cockran used his considerable skills as a cross-examiner and orator to attack Brown, Edison, and their supporters. His strategy was to show that Brown had falsified his test on the killing power of AC and to prove that electricity would not cause certain death and simply lead to torturing the condemned. In cross examination he questioned Brown's lack of credentials in the electrical field and brought up possible collusion between Brown and Edison, which Brown again denied. Many witnesses were called by both sides to give firsthand anecdotal accounts about encounters with electricity and evidence was given by medical professionals on the human body's nervous system and the electrical conductivity of skin. Brown was accused of fudging his tests on animals, hiding the fact that he was using lower current DC and high-current AC. When the hearing convened for a day at Edison's West Orange lab to witness demonstrations of skin resistance to electricity, Brown almost got in a fight with a Westinghouse representative, accusing him of being in the Edison laboratory to conduct industrial espionage. Newspapers noted the often contradictory testimony was raising public doubts about the electrocution law but after Edison took the stand many accepted assurances from the "wizard of Menlo Park" that 1,000 volts of AC would easily kill any man.
After the gathered testimony was submitted and the two sides presented their case, Judge Edwin Day ruled against Kemmler's appeal on October 9, and the US Supreme Court denied Kemmler's appeal on May 23, 1890.
When the chair was first used, on August 6, 1890, the technicians on hand misjudged the voltage needed to kill William Kemmler. After the first jolt of electricity Kemmler was found to be still breathing. The procedure had to be repeated and a reporter on hand described it as "an awful spectacle, far worse than hanging." George Westinghouse commented: "They would have done better using an axe."
Brown's collusion exposed
On August 25, 1889, the New York Sun ran a story exposing Brown.
The story was based on 45 letters stolen from Brown's office that spelled out Brown's collusion with Thomson-Houston and Edison Electric. The majority of the letters were correspondence between Brown and Thomson-Houston on the topic of acquiring the three Westinghouse generators for the state of New York as well as using one of them in an efficiency test. They also showed that Brown had received $5,000 from Edison Electric to purchase the surplus Westinghouse generators from Thomson-Houston. Further Edison involvement was contained in letters from Edison treasurer Hastings asking Brown to send anti-AC pamphlets to all the legislators in the state of Missouri (at the company's expense), Brown requesting that a letter of recommendation from Thomas Edison be sent to Scranton, Pennsylvania, as well as Edison and Arthur Kennelly coaching Brown in his upcoming testimony in the Kemmler appeal trial.
Brown was not slowed down by this revelation and characterized his efforts to expose Westinghouse as the same as going after a grocer who sells poison and calls it sugar.
The "Electric Wire Panic"
1889 saw another round of deaths attributed to alternating current including a lineman in Buffalo, New York, four linemen in New York City, and a New York fruit merchant who was killed when the display he was using came in contact with an overhead line. NYC Mayor Hugh J. Grant, in a meeting with the Board of Electrical Control and the AC electric companies, rejected the claims that the AC lines were perfectly safe saying "we get news of all who touch them through the coroners office". On October 11, 1889, John Feeks, a Western Union lineman, was high up in the tangle of overhead electrical wires working on what were supposed to be low-voltage telegraph lines in a busy Manhattan district. As the lunchtime crowd below looked on he grabbed a nearby line that, unknown to him, had been shorted many blocks away with a high-voltage AC line. The jolt entered through his bare right hand and exited his left steel studded climbing boot. Feeks was killed almost instantly, his body falling into the tangle of wire, sparking, burning, and smoldering for the better part of an hour while a horrified crowd of thousands gathered below. The source of the power that killed Feeks was not determined although United States Illuminating Company lines ran nearby.
Feeks' public death sparked a new round of people fearing the electric lines over their heads in what has been called the "Electric Wire Panic". The blame seemed to settle on Westinghouse since, Westinghouse having bought many of the lighting companies involved, people assumed Feeks' death was the fault of a Westinghouse subsidiary. Newspapers joined into the public outcry following Feeks' death, pointing out men's lives "were cheaper to this monopoly than insulated wires" and calling for the executives of AC companies to be charged with manslaughter. The October 13, 1889, New Orleans Times-Picayune noted "Death does not stop at the door, but comes right into the house, and perhaps as you are closing a door or turning on the gas you are killed." Harold Brown's reputation was rehabilitated almost overnight with newspapers and magazines seeking his opinion and reporters following him around New York City where he measured how much current was leaking from AC power lines.
At the peak of the war of currents, Edison himself joined the public debate for the first time, denouncing AC in a November 1889 article in the North American Review titled "The Dangers of Electric Lighting". Edison put forward the view that burying the high-voltage lines was not a solution, and would simply move the deaths underground and be a "constant menace" that could short with other lines, threatening people's homes and lives. He stated the only way to make AC safe was to limit its voltage and vowed Edison Electric would never adopt AC as long as he was in charge.
George Westinghouse was characterized as a villain trying to defend pole-mounted AC installations that he knew were unsafe, and fumbled his replies to the questions put to him by reporters, attempting to point out all the other things in a large city that were more dangerous than AC. However, his subsequent response, printed in the North American Review, was much improved, highlighting that his AC/transformer system actually used lower household voltages than the Edison DC system. He also pointed out 87 deaths in one year caused by street cars and gas lighting, versus only 5 accidental electrocutions and no in-home deaths attributed to AC current.
The crowd that watched Feeks contained many New York aldermen due to the site of the accident being near the New York government offices, and the horrifying affair galvanized them into the action of passing the law on moving utilities underground. The electric companies involved obtained an injunction preventing their lines from being cut down immediately but shut down most of their lighting until the situation was settled, plunging many New York streets into darkness. The legislation ordering the cutting down of all of the utility lines was finally upheld by the New York Supreme Court in December. The AC lines were cut down, keeping many New York City streets in darkness for the rest of the winter, since little had been done by the overpaid Tammany Hall city supervisors who were supposed to arrange the building of the underground "subways" to house them.
The current war ends
Even with the Westinghouse propaganda losses, the war of currents itself was winding down with direct current on the losing side. This was due in part to Thomas Edison himself leaving the electric power business. Edison was becoming marginalized in his own company, having lost majority control in the 1889 merger that formed Edison General Electric. In 1890, he told president Henry Villard he thought it was time to retire from the lighting business and moved on to an iron ore refining project that preoccupied his time. Edison's dogmatic anti-AC values were no longer controlling the company. By 1889, Edison Electric's own subsidiaries were lobbying to add AC power transmission to their systems, and in October 1890, Edison Machine Works began developing AC-based equipment.
With Thomas Edison no longer involved with Edison General Electric, the war of currents came to a close with a financial merger. Edison president Henry Villard, who had engineered the merger that formed Edison General Electric, was continually working on the idea of merging that company with Thomson-Houston or Westinghouse. He saw a real opportunity in 1891. The market was in a general downturn causing cash shortages for all the companies concerned and Villard was in talks with Thomson-Houston, which was now Edison General Electric's biggest competitor. Thomson-Houston had a habit of saving money on development by buying, or sometimes stealing, patents. Patent conflicts were stymieing the growth of both companies and the idea of saving on some 60 ongoing lawsuits as well as saving on profit losses of trying to undercut each other by selling generating plants below cost pushed forward the idea of this merger in financial circles. Edison hated the idea and tried to hold it off, but Villard thought his company, now winning its incandescent light patent lawsuits in the courts, was in a position to dictate the terms of any merger. As a committee of financiers, which included J.P. Morgan, worked on the deal in early 1892, things went against Villard. In Morgan's view, Thomson-Houston looked on the books to be the stronger of the two companies and engineered a behind the scenes deal announced on April 15, 1892, that put the management of Thomson-Houston in control of the new company, now called General Electric (dropping Edison's name). Thomas Edison was not aware of the deal until the day before it happened.
The fifteen electric companies that existed five years before had merged down to two: General Electric and Westinghouse. The war of currents came to an end, and this merger of the Edison company, along with its lighting patents, and the Thomson-Houston, with its AC patents, created a company that controlled three quarters of the US electrical business. From this point on, General Electric and Westinghouse were both marketing alternating current systems. Edison put on a brave face, noting to the media how his stock had gained value in the deal, but privately he was bitter that his company and all of his patents had been turned over to the competition.
Aftermath
Even though the institutional war of currents had ended in a financial merger, the technical difference between direct and alternating current systems followed a much longer technical merger. Due to innovation in the US and Europe, alternating current's economy of scale with very large generating plants linked to loads via long-distance transmission was slowly being combined with the ability to link it up with all of the existing systems that needed to be supplied. These included single phase AC systems, poly-phase AC systems, low-voltage incandescent lighting, high voltage arc lighting, and existing DC motors in factories and street cars. In the engineered universal system these technological differences were temporarily being bridged via the development of rotary converters and motor–generators that allowed the large number of legacy systems to be connected to the AC grid. These stopgaps were slowly replaced as older systems were retired or upgraded.
In May 1892, Westinghouse Electric managed to underbid General Electric on the contract to electrify the World's Columbian Exposition in Chicago and, although they made no profit, their demonstration of a safe, effective and highly flexible universal alternating current system powering all of the disparate electrical systems at the Exposition led to them winning the bid at the end of that year to build an AC power station at Niagara Falls. General Electric was awarded contracts to build AC transmission lines and transformers in that project and further bids at Niagara were split with GE who were quickly catching up in the AC field due partly to Charles Proteus Steinmetz, a Prussian mathematician who was the first person to fully understand AC power from a solid mathematical standpoint. General Electric hired many talented new engineers to improve its design of transformers, generators, motors and other apparatus.
A three-phase three-wire transmission system had already been deployed in Europe at the International Electro-Technical Exhibition of 1891, where Mikhail Dolivo-Dobrovolsky used this system to transmit electric power over a distance of 176 km with 75% efficiency. In 1891 he also created a three-phase transformer, the short-circuited (squirrel-cage) induction motor and designed the world's first three-phase hydroelectric power plant.
Patent lawsuits were still hampering both companies and bleeding off cash, so in 1896, J. P. Morgan engineered a patent sharing agreement between the two companies that remained in force for 11 years.
In 1897 Edison sold his remaining stock in Edison Electric Illuminating of New York to finance his iron ore refining prototype plant. In 1908, Edison said to George Stanley, son of AC transformer inventor William Stanley, Jr., "Tell your father I was wrong", likely an admission that he had underestimated the developmental potential of alternating current.
Remnant and existent DC systems
Some cities continued to use DC well into the 20th century. For example, central Helsinki had a DC network until the late 1940s, and Stockholm lost its dwindling DC network as late as the 1970s. A mercury-arc valve rectifier station could convert AC to DC where networks were still used. Parts of Boston, Massachusetts, along Beacon Street and Commonwealth Avenue still used 110 volts DC in the 1960s, causing the destruction of many small appliances (typically hair dryers and phonographs) used by Boston University students, who ignored warnings about the electricity supply.
New York City's electric utility company, Consolidated Edison, continued to supply direct current to customers who had adopted it early in the twentieth century, mainly for elevators. The New Yorker Hotel, constructed in 1929, had a large direct-current power plant and did not convert fully to alternating-current service until well into the 1960s. This was the building in which AC pioneer Nikola Tesla spent his last years, and where he died in 1943. New York City's Broadway theaters continued to use DC services until 1975, requiring the use of outmoded manual resistance dimmer boards operated by several stagehands. This practice ended when the musical A Chorus Line introduced computerized lighting control and thyristor (SCR) dimmers to Broadway, and New York theaters were finally converted to AC.
In January 1998, Consolidated Edison started to eliminate DC service. At that time there were 4,600 DC customers. By 2006, there were only 60 customers using DC service, and on November 14, 2007, the last direct-current distribution by Con Edison was shut down. Customers still using DC were provided with on-site AC to DC rectifiers. In 2012, Pacific Gas and Electric Company still provided DC power to some locations in San Francisco, primarily for elevators, supplied by close to 200 rectifiers each providing power for 7–10 customers.
The Central Electricity Generating Board in the UK maintained a 200-volt DC generating station at Bankside Power Station in London until 1981. It exclusively powered DC printing machinery in Fleet Street, then the heart of the UK's newspaper industry. It was decommissioned in 1981, when the newspaper industry moved into the developing Docklands area further down the river and adopted modern AC-powered equipment.
High-voltage direct current (HVDC) systems are used for bulk transmission of energy from distant generating stations, for underwater lines, and for interconnection of separate alternating-current systems.
See also
Format war
History of electric power transmission
History of electronic engineering
Timeline of electrical and electronic engineering
Topsy (elephant) – in popular culture associated with the war of currents
References
Citations
Bibliography
Further reading
External links
1880s in science
1880s in technology
1880s in the United States
1890s in science
1890s in technology
1890s in the United States
Animal rights
Business rivalries
Cruelty to animals
Electric power
Electric power transmission systems in the United States
Electrical safety
Energy development
History of electrical engineering
Ideological rivalry
Nikola Tesla
Thomas Edison
Scientific rivalry | War of the currents | [
"Physics",
"Engineering"
] | 9,388 | [
"Physical quantities",
"Power (physics)",
"Electric power",
"Electrical engineering",
"History of electrical engineering"
] |
342,094 | https://en.wikipedia.org/wiki/List%20of%20gene%20families | This is a list of gene families or gene complexes, i.e. sets of genes which are related ancestrally and often serve similar biological functions. These gene families typically encode functionally related proteins, and sometimes the term gene families is a shorthand for the sets of proteins that the genes encode. They may or may not be physically adjacent on the same chromosome.
Regulatory protein gene families
14-3-3 protein family
Achaete-scute complex (neuroblast formation)
Cyclins
Cyclin dependent kinases
CDK inhibitors
FOX proteins (forkhead box proteins)
Families containing homeobox domains
DLX gene family
Hox gene family
POU family
GATA transcription factor
General transcription factor
Krüppel-type zinc finger (ZNF)
MADS-box gene family
NF-kB
NOTCH2NL
Nuclear receptor
P300-CBP coactivator family
Transcription factors
SOX gene family
Immune system proteins
Immunoglobulin superfamily
Killer-cell immunoglobulin-like receptors
Leukocyte immunoglobulin-like receptors
Major histocompatibility complex (MHC)
NOD-like receptors
Pattern recognition receptor
Toll-like receptors
RIG-I like receptors
Motor proteins
Dynein
Kinesin
Myosin
Signal transducing proteins
Arf family
G-proteins
Janus kinases
MAP Kinase
Non-receptor tyrosine kinase
Olfactory receptor
Peroxiredoxin
Rab family
Rap family
Ras family
Receptor tyrosine kinases
Rho family
Serine/threonine-specific protein kinase
SRC kinase family
Transporters
ABC transporters
Antiporter
Aquaporins
Globin
Major facilitator superfamily
Neurotransmitter transporter
GABA transporter
Glutamate transporter
Glycine transporter
Monoamine transporter
Equilibrative nucleoside transporter family
Papain-like protease
Solute carrier family
Other families
See also
Protein family
Housekeeping gene
Biological classification
Gene families | List of gene families | [
"Biology"
] | 408 | [
"nan"
] |
342,113 | https://en.wikipedia.org/wiki/Archimedes%20number | In viscous fluid dynamics, the Archimedes number (Ar), is a dimensionless number used to determine the motion of fluids due to density differences, named after the ancient Greek scientist and mathematician Archimedes.
It is the ratio of gravitational forces to viscous forces and has the form:

$$\mathrm{Ar} = \frac{g L^3 \rho_\ell \,(\rho - \rho_\ell)}{\mu^2}$$

where:
$g$ is the local external field (for example gravitational acceleration), m/s²,
$L$ is the characteristic length of the body, m,
$(\rho - \rho_\ell)/\rho_\ell$ is the submerged specific gravity,
$\rho_\ell$ is the density of the fluid, kg/m³,
$\rho$ is the density of the body, kg/m³,
$\nu = \mu/\rho_\ell$ is the kinematic viscosity, m²/s,
$\mu$ is the dynamic viscosity, Pa·s.
Uses
The Archimedes number is generally used in design of tubular chemical process reactors. The following are non-exhaustive examples of using the Archimedes number in reactor design.
Packed-bed fluidization design
The Archimedes number is applied often in the engineering of packed beds, which are very common in the chemical processing industry. A packed bed reactor, which is similar to the ideal plug flow reactor model, involves packing a tubular reactor with a solid catalyst, then passing incompressible or compressible fluids through the solid bed. When the solid particles are small, they may be "fluidized", so that they act as if they were a fluid. When fluidizing a packed bed, the pressure of the working fluid is increased until the pressure drop between the bottom of the bed (where fluid enters) and the top of the bed (where fluid leaves) is equal to the weight of the packed solids. At this point, the velocity of the fluid is just not enough to achieve fluidization, and extra pressure is required to overcome the friction of particles with each other and the wall of the reactor, allowing fluidization to occur. This gives a minimum fluidization velocity, $u_{mf}$, that may be estimated by the Wen–Yu correlation:

$$u_{mf} = \frac{\nu}{d_s}\left(\sqrt{33.7^2 + 0.0408\,\mathrm{Ar}} - 33.7\right)$$

where:
$d_s$ is the diameter of the sphere with the same volume as the solid particle, which can often be estimated from the diameter of the particle.
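As a rough illustration, the sketch below evaluates the Archimedes number and the minimum fluidization velocity from the correlation above. The particle and fluid properties (sand in water) are hypothetical illustrative values, not taken from the text.

```python
# A minimal sketch (hypothetical sand-in-water values) of the Archimedes
# number and the Wen-Yu minimum-fluidization estimate given above.
import math

g, d = 9.81, 500e-6            # gravity [m/s^2], particle diameter [m]
rho_p, rho_f = 2650.0, 1000.0  # particle and fluid density [kg/m^3]
mu = 1.0e-3                    # dynamic viscosity [Pa s]
nu = mu / rho_f                # kinematic viscosity [m^2/s]

Ar = g * d**3 * rho_f * (rho_p - rho_f) / mu**2
Re_mf = math.sqrt(33.7**2 + 0.0408 * Ar) - 33.7
u_mf = nu / d * Re_mf
print(f"Ar = {Ar:.0f}, u_mf = {u_mf * 1000:.1f} mm/s")  # ~2023 and ~2.4 mm/s
```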
Bubble column design
Another use is in the estimation of gas holdup in a bubble column. In a bubble column, the gas holdup (the fraction of the column that is gas at a given time) can be estimated by an empirical correlation relating the following quantities:
the gas holdup fraction
the Eötvös number
the Froude number
the diameter of the holes in the column's spargers (perforated discs that emit bubbles)
the column diameter
The parameters of the correlation are found empirically.
Spouted-bed minimum spouting velocity design
A spouted bed is used in drying and coating. It involves spraying a liquid into a bed packed with the solid to be coated. A fluidizing gas fed from the bottom of the bed creates a spout, which causes the solids to circulate around the liquid. Work has been undertaken to model the minimum gas velocity required for spouting in a spouted bed, including the use of artificial neural networks. Testing with such models found that the Archimedes number is a parameter with a very large effect on the minimum spouting velocity.
See also
Viscous fluid dynamics
Convection
Convection (heat transfer)
Dimensionless quantity
Galilei number
Grashof number
Reynolds number
Froude number
Eötvös number
Sherwood number
References
Dimensionless numbers of fluid mechanics
Fluid dynamics | Archimedes number | [
"Chemistry",
"Engineering"
] | 668 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
342,127 | https://en.wikipedia.org/wiki/Anti-gravity | Anti-gravity (also known as non-gravitational field) is the phenomenon of creating a place or object that is free from the force of gravity. It does not refer to either the lack of weight under gravity experienced in free fall or orbit, or to balancing the force of gravity with some other force, such as electromagnetism or aerodynamic lift. Anti-gravity is a recurring concept in science fiction.
"Anti-gravity" is often used to refer to devices that look as if they reverse gravity even though they operate through other means, such as lifters, which fly in the air by moving air with electromagnetic fields.
Historical attempts at understanding gravity
The possibility of creating anti-gravity depends upon a complete understanding and description of gravity and its interactions with other physical theories, such as general relativity and quantum mechanics; however, no quantum theory of gravity has yet been found.
During the summer of 1666, Isaac Newton is said to have observed an apple falling from a tree in his garden, prompting his formulation of the principle of universal gravitation. In 1915, Albert Einstein described the physical interaction between matter and space, in which gravity arises because matter causes a geometric deformation of spacetime, which is otherwise flat. Einstein, both independently and with Walther Mayer, attempted to unify his theory of gravity with electromagnetism, drawing on Theodor Kaluza's five-dimensional extension of general relativity and James Clerk Maxwell's theory of electromagnetism.
Theoretical quantum physicists have postulated the existence of a quantum gravity particle, the graviton. Various theoretical explanations of quantum gravity have been created, including superstring theory, loop quantum gravity, E8 theory and asymptotic safety theory amongst many others.
Probable solutions
In Newton's law of universal gravitation, gravity was an external force transmitted by unknown means. In the 20th century, Newton's model was replaced by general relativity where gravity is not a force but the result of the geometry of spacetime. Under general relativity, anti-gravity is impossible except under contrived circumstances.
Gravity shields
In 1948 businessman Roger Babson (founder of Babson College) formed the Gravity Research Foundation to study ways to reduce the effects of gravity. Their efforts were initially somewhat "crankish", but they held occasional conferences that drew such people as Clarence Birdseye, known for his frozen-food products, and helicopter pioneer Igor Sikorsky. Over time the Foundation turned its attention away from trying to control gravity, to simply better understanding it. The Foundation nearly disappeared after Babson's death in 1967. However, it continues to run an essay award, offering prizes of up to $4,000. As of 2017, it is still administered out of Wellesley, Massachusetts, by George Rideout Jr., son of the foundation's original director. Winners include California astrophysicist George F. Smoot (1993), who later won the 2006 Nobel Prize in Physics, and Gerard 't Hooft (2015) who previously won the 1999 Nobel Prize in Physics.
General relativity research in the 1950s
General relativity was introduced in the 1910s, but development of the theory was greatly slowed by a lack of suitable mathematical tools. It appeared that anti-gravity was outlawed under general relativity.
It is claimed the US Air Force also ran a study effort throughout the 1950s and into the 1960s. Former Lieutenant Colonel Ansel Talbert wrote two series of newspaper articles claiming that most of the major aviation firms had started gravity control propulsion research in the 1950s. However, there is no outside confirmation of these stories, and since they take place in the midst of the policy by press release era, it is not clear how much weight these stories should be given.
It is known that there were serious efforts underway at the Glenn L. Martin Company, who formed the Research Institute for Advanced Study. Major newspapers announced the contract that had been made between theoretical physicist Burkhard Heim and the Glenn L. Martin Company. Another effort in the private sector to master understanding of gravitation was the creation of the Institute for Field Physics, University of North Carolina at Chapel Hill in 1956, by Gravity Research Foundation trustee Agnew H. Bahnson.
Military support for anti-gravity projects was terminated by the Mansfield Amendment of 1973, which restricted Department of Defense spending to only the areas of scientific research with explicit military applications. The Mansfield Amendment was passed specifically to end long-running projects that had no results.
Under general relativity, gravity is the result of following spatial geometry (change in the normal shape of space) caused by local mass-energy. This theory holds that it is the altered shape of space, deformed by massive objects, that causes gravity, which is actually a property of deformed space rather than being a true force. Although the equations cannot normally produce a "negative geometry", it is possible to do so by using "negative mass". The same equations do not, of themselves, rule out the existence of negative mass.
Both general relativity and Newtonian gravity appear to predict that negative mass would produce a repulsive gravitational field. In particular, Sir Hermann Bondi proposed in 1957 that negative gravitational mass, combined with negative inertial mass, would comply with the strong equivalence principle of general relativity theory and the Newtonian laws of conservation of linear momentum and energy. Bondi's proof yielded singularity-free solutions for the relativity equations. In July 1988, Robert L. Forward presented a paper at the AIAA/ASME/SAE/ASEE 24th Joint Propulsion Conference that proposed a Bondi negative gravitational mass propulsion system.
Bondi pointed out that a negative mass will fall toward (and not away from) "normal" matter, since although the gravitational force is repulsive, the negative mass (according to Newton's law, F = ma) responds by accelerating in the direction opposite to the force. Normal mass, on the other hand, will fall away from the negative matter. He noted that two identical masses, one positive and one negative, placed near each other will therefore self-accelerate in the direction of the line between them, with the negative mass chasing after the positive mass. Because the negative mass acquires negative kinetic energy, the total energy of the accelerating masses remains zero. Forward pointed out that the self-acceleration effect is due to the negative inertial mass, and could be induced even without gravitational forces between the particles.
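Bondi's runaway pair is easy to check numerically. The sketch below is a minimal one-dimensional simulation in arbitrary units (G = 1, unit masses, a simple Euler integrator); all values are illustrative assumptions, not from the source.

```python
# Minimal 1-D sketch of Bondi's runaway pair in arbitrary units (G = 1).
# The positive mass leads at x = 1 and the negative mass chases from x = 0.
G = 1.0
m = [1.0, -1.0]
x = [1.0, 0.0]
v = [0.0, 0.0]
dt = 1e-3
for _ in range(5000):
    r = x[0] - x[1]
    f = -G * m[0] * m[1] / r**2   # force on body 0 along +x; repulsive here
    a = [f / m[0], -f / m[1]]     # body 1 feels the opposite force
    for i in (0, 1):
        v[i] += a[i] * dt
        x[i] += v[i] * dt
print(x, v)  # both drift in +x at equal speed; separation stays ~1
```

Both masses end up moving the same way at the same speed, while the total momentum (m0*v0 + m1*v1) and total kinetic energy stay at zero, matching the argument above.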
The Standard Model of particle physics, which describes all currently known forms of matter, does not include negative mass. Although cosmological dark matter may consist of particles outside the Standard Model whose nature is unknown, their mass is ostensibly known – since they were postulated from their gravitational effects on surrounding objects, which implies their mass is positive. The proposed cosmological dark energy, on the other hand, is more complicated, since according to general relativity the effects of both its energy density and its negative pressure contribute to its gravitational effect.
Unique force
Under general relativity any form of energy couples with spacetime to create the geometries that cause gravity. A longstanding question was whether or not these same equations applied to antimatter. The issue was considered solved in 1960 with the development of CPT symmetry, which demonstrated that antimatter follows the same laws of physics as "normal" matter, and therefore has positive energy content and also causes (and reacts to) gravity like normal matter (see gravitational interaction of antimatter).
For much of the last quarter of the 20th century, the physics community was involved in attempts to produce a unified field theory, a single physical theory that explains the four fundamental forces: gravity, electromagnetism, and the strong and weak nuclear forces. Scientists have made progress in unifying the three quantum forces, but gravity has remained "the problem" in every attempt. This has not stopped any number of such attempts from being made, however.
Generally these attempts tried to "quantize gravity" by positing a particle, the graviton, that carried gravity in the same way that photons (light) carry electromagnetism. Simple attempts along this direction all failed, however, leading to more complex examples that attempted to account for these problems. Two of these, supersymmetry and the relativity related supergravity, both required the existence of an extremely weak "fifth force" carried by a graviphoton, which coupled together several "loose ends" in quantum field theory, in an organized manner. As a side effect, both theories also all but required that antimatter be affected by this fifth force in a way similar to anti-gravity, dictating repulsion away from mass. Several experiments were carried out in the 1990s to measure this effect, but none yielded positive results.
In 2013 CERN looked for an antigravity effect in an experiment designed to study the energy levels within antihydrogen. The antigravity measurement was just an "interesting sideshow" and was inconclusive.
Breakthrough Propulsion Physics Program
During the close of the twentieth century NASA provided funding for the Breakthrough Propulsion Physics Program (BPP) from 1996 through 2002. This program studied a number of "far out" designs for space propulsion that were not receiving funding through normal university or commercial channels. Anti-gravity-like concepts were investigated under the name "diametric drive". The work of the BPP program continues in the independent, non-NASA affiliated Tau Zero Foundation.
Empirical claims and commercial efforts
There have been a number of attempts to build anti-gravity devices, and a small number of reports of anti-gravity-like effects in the scientific literature. None of the examples that follow are accepted as reproducible examples of anti-gravity.
Gyroscopic devices
Gyroscopes produce a force when twisted that operates "out of plane" and can appear to lift themselves against gravity. Although this force is well understood to be illusory, even under Newtonian models, it has nevertheless generated numerous claims of anti-gravity devices and any number of patented devices. None of these devices has ever been demonstrated to work under controlled conditions, and they have often become the subject of conspiracy theories as a result.
Another "rotating device" example is shown in a series of patents granted to Henry Wallace between 1968 and 1974. His devices consist of rapidly spinning disks of brass, a material made up largely of elements with a total half-integer nuclear spin. He claimed that by rapidly rotating a disk of such material, the nuclear spin became aligned, and as a result created a "gravitomagnetic" field in a fashion similar to the magnetic field created by the Barnett effect. No independent testing or public demonstration of these devices is known.
In 1989, it was reported that the weight of an object decreases along the axis of a right-spinning gyroscope. A test of this claim a year later yielded null results. At a 1999 AIP conference, a recommendation was made to conduct further tests.
Thomas Townsend Brown's gravitator
In 1921, while still in high school, Thomas Townsend Brown found that a high-voltage Coolidge tube seemed to change mass depending on its orientation on a balance scale. Through the 1920s Brown developed this into devices that combined high voltages with materials with high dielectric constants (essentially large capacitors); he called such a device a "gravitator". Brown made the claim to observers and in the media that his experiments were showing anti-gravity effects. Brown would continue his work and produced a series of high-voltage devices in the following years in attempts to sell his ideas to aircraft companies and the military. He coined the names Biefeld–Brown effect and electrogravitics in conjunction with his devices. Brown tested his asymmetrical capacitor devices in a vacuum, supposedly showing it was not a more down-to-earth electrohydrodynamic effect generated by high voltage ion flow in air.
Electrogravitics is a popular topic in ufology, anti-gravity, free energy, with government conspiracy theorists and related websites, in books and publications with claims that the technology became highly classified in the early 1960s and that it is used to power UFOs and the B-2 bomber. There is also research and videos on the internet purported to show lifter-style capacitor devices working in a vacuum, therefore not receiving propulsion from ion drift or ion wind being generated in air.
Follow-up studies on Brown's work and other claims have been conducted by R. L. Talley in a 1990 US Air Force study, NASA scientist Jonathan Campbell in a 2003 experiment, and Martin Tajmar in a 2004 paper. They have found that no thrust could be observed in a vacuum and that Brown's and other ion lifter devices produce thrust along their axis regardless of the direction of gravity consistent with electrohydrodynamic effects.
Gravitoelectric coupling
In 1992, the Russian researcher Eugene Podkletnov claimed to have discovered, whilst experimenting with superconductors, that a fast rotating superconductor reduces the gravitational effect. Many studies have attempted to reproduce Podkletnov's experiment, always to negative results.
Douglas Torr and Ning Li, of the University of Alabama in Huntsville, proposed how a time-dependent magnetic field could cause the spins of the lattice ions in a superconductor to generate detectable gravitomagnetic and gravitoelectric fields, in a series of papers published between 1991 and 1993. In 1999, Li appeared in Popular Mechanics, claiming to have constructed a working prototype to generate what she described as "AC Gravity." No further evidence of this prototype has been offered.
Douglas Torr and Timir Datta were involved in the development of a "gravity generator" at the University of South Carolina. According to a leaked document from the Office of Technology Transfer at the University of South Carolina and confirmed to Wired reporter Charles Platt in 1998, the device would create a "force beam" in any desired direction and the university planned to patent and license this device. No further information about this university research project or the "Gravity Generator" device was ever made public.
Göde Award
The Institute for Gravity Research of the Göde Scientific Foundation has tried to reproduce many of the different experiments which claim any "anti-gravity" effects. All attempts by this group to observe an anti-gravity effect by reproducing past experiments have been unsuccessful thus far. The foundation has offered a reward of one million euros for a reproducible anti-gravity experiment.
In fiction
The existence of anti-gravity is a common theme in science fiction. The Encyclopedia of Science Fiction lists Francis Godwin's posthumously-published 1638 novel The Man in the Moone, where a "semi-magical" stone has the power to make gravity stronger or weaker, as the earliest variation of the theme. The first story to use anti-gravity for the purpose of space travel, as well as the first to treat the subject from a scientific rather than supernatural angle, was George Tucker's 1827 novel A Voyage to the Moon.
Apergy
Apergy is a term for a fictitious form of anti-gravitational energy first used by Percy Greg in his 1880 sword and planet novel Across the Zodiac. The term was later adopted by other fiction authors such as John Jacob Astor IV in his 1894 science fiction novel A Journey in Other Worlds, and it also appeared outside of explicit fiction writing.
See also
Area 51
Aerodynamic levitation
Artificial gravity
Burkhard Heim
Casimir effect
Clinostat
Electrostatic levitation
Exotic matter
Gravitational interaction of antimatter
Gravitational shielding
Gravitational wave
Ion-propelled aircraft
Heim theory
Magnetic levitation
Nazi UFOs
Optical levitation
Reactionless drive
Tractor beam
References
Bibliography
Further reading
Cady, W. M. (15 September 1952). "Thomas Townsend Brown: Electro-Gravity Device" (File 24–185). Pasadena, CA: Office of Naval Research. Public access to the report was authorized on 1 October 1952.
External links
Responding to Mechanical Antigravity, a NASA paper debunking a wide variety of gyroscopic (and related) devices
Göde Scientific Foundation
KURED Research
General relativity
History of physics
History of science and technology in the United States
Historiography of science
Science fiction themes
Fringe physics | Anti-gravity | [
"Physics",
"Astronomy"
] | 3,322 | [
"Astronomical hypotheses",
"General relativity",
"Anti-gravity",
"Theory of relativity"
] |
342,371 | https://en.wikipedia.org/wiki/Directional%20antenna | A directional antenna or beam antenna is an antenna which radiates or receives greater radio wave power in specific directions. Directional antennas can radiate radio waves in beams, when greater concentration of radiation in a certain direction is desired, or in receiving antennas receive radio waves from one specific direction only. This can increase the power transmitted to receivers in that direction, or reduce interference from unwanted sources. This contrasts with omnidirectional antennas such as dipole antennas which radiate radio waves over a wide angle, or receive from a wide angle.
The extent to which an antenna's angular distribution of radiated power, its radiation pattern, is concentrated in one direction is measured by a parameter called antenna gain. A high-gain antenna (HGA) is a directional antenna with a focused, narrow beam width, permitting more precise targeting of radio signals. Most commonly referred to during space missions, these antennas are also in use all over Earth, most successfully in flat, open areas where there are no mountains to disrupt radio waves.
In contrast, a low-gain antenna (LGA) is an omnidirectional antenna with a broad radio-wave beam width, which allows the signal to propagate reasonably well even in mountainous regions and is thus more reliable regardless of terrain. Low-gain antennas are often used in spacecraft as a backup to the high-gain antenna, which transmits a much narrower beam and is therefore susceptible to loss of signal.
All practical antennas are at least somewhat directional, although usually only the direction in the plane parallel to the earth is considered, and practical antennas can easily be omnidirectional in one plane. The most common directional antenna types are
the Yagi–Uda antenna,
the log-periodic antenna, and
the corner reflector antenna.
These antenna types, or combinations of several single-frequency versions of one type or (rarely) a combination of two different types, are frequently sold commercially as residential TV antennas. Cellular repeaters often make use of external directional antennas to give a far greater signal than can be obtained on a standard cell phone. Satellite television receivers usually use parabolic antennas. For long and medium wavelength frequencies, tower arrays are used in most cases as directional antennas.
Principle of operation
When transmitting, a high-gain antenna allows more of the transmitted power to be sent in the direction of the receiver, increasing the received signal strength. When receiving, a high gain antenna captures more of the signal, again increasing signal strength. Due to reciprocity, these two effects are equal—an antenna that makes a transmitted signal 100 times stronger (compared to an isotropic radiator) will also capture 100 times as much energy as the isotropic antenna when used as a receiving antenna. As a consequence of their directivity, directional antennas also send less (and receive less) signal from directions other than the main beam. This property may avoid interference from other out-of-beam transmitters, and always reduces antenna noise. (Noise comes from every direction, but a desired signal will only come from one approximate direction, so the narrower the antenna's beam, the better the crucial signal-to-noise ratio.)
There are many ways to make a high-gain antenna; the most common are parabolic antennas, helical antennas, Yagi-Uda antennas, and phased arrays of smaller antennas of any kind. Horn antennas can also be constructed with high gain, but are less commonly seen. Still other configurations are possible—the Arecibo Observatory used a combination of a line feed with an enormous spherical reflector (as opposed to a more usual parabolic reflector), to achieve extremely high gains at specific frequencies.
Antenna gain
Antenna gain is often quoted with respect to a hypothetical antenna that radiates equally in all directions, an isotropic radiator. This gain, when measured in decibels, is called dBi. Conservation of energy dictates that high gain antennas must have narrow beams. For example, if a high gain antenna makes a 1 watt transmitter look like a 100 watt transmitter, then the beam can cover at most 1/100 of the sky (otherwise the total amount of energy radiated in all directions would sum to more than the transmitter power, which is not possible). In turn this implies that high-gain antennas must be physically large, since according to the diffraction limit, the narrower the beam desired, the larger the antenna must be (measured in wavelengths).
Antenna gain can also be measured in dBd, which is gain in decibels compared to the maximum intensity direction of a half wave dipole. In the case of Yagi-type aerials this more or less equates to the gain one would expect from the aerial under test minus all its directors and reflector. It is important not to confuse dBi and dBd; the two differ by 2.15 dB, with the dBi figure being higher, since a dipole has 2.15 dB of gain with respect to an isotropic antenna.
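The relations above are simple to compute. Here is a minimal sketch (the helper functions and the 20 dBi example value are illustrative assumptions, not a published API):

```python
def db_to_linear(db: float) -> float:
    # 20 dBi corresponds to a factor of 100 over an isotropic radiator
    return 10 ** (db / 10)

def dbd_to_dbi(dbd: float) -> float:
    # a half-wave dipole has 2.15 dB of gain over an isotropic antenna
    return dbd + 2.15

gain_dbi = 20.0
gain_lin = db_to_linear(gain_dbi)
print(gain_lin)          # 100.0
print(1 / gain_lin)      # the beam can cover at most 1/100 of the sky
print(dbd_to_dbi(10.0))  # a 10 dBd Yagi is 12.15 dBi
```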
Gain is also dependent on the number of elements and the tuning of those elements. Antennas can be tuned to be resonant over a wider spread of frequencies but, all other things being equal, this will mean the gain of the aerial is lower than one tuned for a single frequency or a group of frequencies. For example, in the case of wideband TV antennas the fall off in gain is particularly large at the bottom of the TV transmitting band. In the UK this bottom third of the TV band is known as group A.
Other factors may also affect gain, such as aperture (the area from which the antenna collects signal, almost entirely related to the size of the antenna, though for small antennas it can be increased by adding a ferrite rod) and efficiency (again affected by size, but also by the resistivity of the materials used and by impedance matching). These factors are easy to improve without adjusting other features of the antenna, or are coincidentally improved by the same factors that increase directivity, and so are typically not emphasized.
Applications
High gain antennas are typically the largest component of deep space probes, and the highest gain radio antennas are physically enormous structures, such as the Arecibo Observatory. The Deep Space Network uses 35 m dishes at about 1 cm wavelengths. This combination gives an antenna gain of about 100,000,000 (or 80 dB, as normally measured), making the transmitter appear about 100 million times stronger, and a receiver about 100 million times more sensitive, provided the target is within the beam. This beam can cover at most one hundred-millionth (10⁻⁸) of the sky, so very accurate pointing is required.
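As a sanity check on those figures, a minimal sketch using the standard parabolic-dish gain formula G = η(πD/λ)²; the 0.7 aperture efficiency is an assumed typical value, not from the text:

```python
import math

D = 35.0    # dish diameter, m
lam = 0.01  # wavelength, m (~1 cm)
eta = 0.7   # assumed aperture efficiency

G = eta * (math.pi * D / lam) ** 2
print(f"G = {G:.2e} ({10 * math.log10(G):.0f} dBi)")  # ~8.5e7, ~79 dBi
```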
The use of high-gain antennas and millimeter-wave communication in wireless personal area networks (WPANs) increases the probability of concurrently scheduling non-interfering transmissions in a localized area, which results in an immense increase in network throughput. However, the optimal scheduling of concurrent transmissions is an NP-hard problem.
Gallery
See also
Amateur radio direction finding
Antenna boresight
Antenna gain
Cantenna
Cardioid
Cassegrain antenna
Cassegrain reflector
Directivity
Loop antenna
Omnidirectional antenna
Parabolic antenna
Phased array
Radio direction finder
Radio propagation model, Antenna subsection
Radiation pattern
References
External links
Radio frequency antenna types
Radio frequency propagation
Antennas (radio) | Directional antenna | [
"Physics"
] | 1,474 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
342,453 | https://en.wikipedia.org/wiki/Center%20%28algebra%29 | The term center or centre is used in various contexts in abstract algebra to denote the set of all those elements that commute with all other elements.
The center of a group G consists of all those elements x in G such that xg = gx for all g in G. This is a normal subgroup of G.
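For instance, a brute-force sketch (hypothetical helper code, not from the article) that computes the center of the symmetric group S3, with elements represented as permutation tuples:

```python
from itertools import permutations

def compose(p, q):
    # (p . q)(i) = p(q(i)); permutations act on the indices 0..n-1
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(3)))  # the 6 elements of S3
center = [x for x in G if all(compose(x, g) == compose(g, x) for g in G)]
print(center)  # [(0, 1, 2)]: only the identity, so Z(S3) is trivial
```

For a non-abelian group such as S3 the center is trivial, while for an abelian group the center is the whole group.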
The similarly named notion for a semigroup is defined likewise and it is a subsemigroup.
The center of a ring (or an associative algebra) R is the subset of R consisting of all those elements x of R such that xr = rx for all r in R. The center is a commutative subring of R.
The center of a Lie algebra L consists of all those elements x in L such that [x,a] = 0 for all a in L. This is an ideal of the Lie algebra L.
See also
Centralizer and normalizer
Center (category theory)
References
Abstract algebra | Center (algebra) | [
"Mathematics"
] | 194 | [
"Abstract algebra",
"Algebra"
] |
342,457 | https://en.wikipedia.org/wiki/Cellulase | Cellulase (; systematic name 4-β-D-glucan 4-glucanohydrolase) is any of several enzymes produced chiefly by fungi, bacteria, and protozoans that catalyze cellulolysis, the decomposition of cellulose and of some related polysaccharides:
Endohydrolysis of (1→4)-β-D-glucosidic linkages in cellulose, lichenin and cereal β-D-glucan
The name is also used for any naturally occurring mixture or complex of various such enzymes, that act serially or synergistically to decompose cellulosic material.
Cellulases break down the cellulose molecule into monosaccharides ("simple sugars") such as β-glucose, or shorter polysaccharides and oligosaccharides. Cellulose breakdown is of considerable economic importance, because it makes a major constituent of plants available for consumption and use in chemical reactions. The specific reaction involved is the hydrolysis of the 1,4-β-D-glycosidic linkages in cellulose, hemicellulose, lichenin, and cereal β-D-glucans. Because cellulose molecules bind strongly to each other, cellulolysis is relatively difficult compared to the breakdown of other polysaccharides such as starch.
Most mammals have only very limited ability to digest dietary fibres like cellulose by themselves. In many herbivorous animals such as ruminants like cattle and sheep and hindgut fermenters like horses, cellulases are produced by symbiotic bacteria. Endogenous cellulases are produced by a few types of animals, such as some termites, snails, and earthworms.
Cellulases have also been found in green microalgae (Chlamydomonas reinhardtii, Gonium pectorale and Volvox carteri), and their catalytic domains (CDs), belonging to the GH9 family, show the highest sequence homology to metazoan endogenous cellulases. Algal cellulases are modular, consisting of putative novel cysteine-rich carbohydrate-binding modules (CBMs) and proline/serine-rich (PS) linkers, in addition to putative Ig-like and unknown domains in some members. The cellulase from Gonium pectorale consists of two CDs separated by linkers, with a C-terminal CBM.
Several different kinds of cellulases are known, which differ structurally and mechanistically. Synonyms, derivatives, and specific enzymes associated with the name "cellulase" include endo-1,4-β-D-glucanase (β-1,4-glucanase, β-1,4-endoglucan hydrolase, endoglucanase D, 1,4-(1,3;1,4)-β-D-glucan 4-glucanohydrolase), carboxymethyl cellulase (CMCase), avicelase, celludextrinase, cellulase A, cellulosin AP, alkali cellulase, cellulase A 3, 9.5 cellulase, celloxylanase and pancellase SS. Enzymes that cleave lignin have occasionally been called cellulases, but this old usage is deprecated; they are lignin-modifying enzymes.
Types and action
Five general types of cellulases based on the type of reaction catalyzed:
Endocellulases (EC 3.2.1.4) randomly cleave internal bonds at amorphous sites that create new chain ends.
Exocellulases or cellobiohydrolases (EC 3.2.1.91) cleave two to four units from the ends of the exposed chains produced by endocellulase, resulting in tetrasaccharides or disaccharides, such as cellobiose. Exocellulases are further classified into type I, that work processively from the reducing end of the cellulose chain, and type II, that work processively from the nonreducing end.
Cellobiases (EC 3.2.1.21) or β-glucosidases hydrolyse the exocellulase product into individual monosaccharides.
Oxidative cellulases depolymerize cellulose by radical reactions, as for instance cellobiose dehydrogenase (acceptor).
Cellulose phosphorylases depolymerize cellulose using phosphates instead of water.
Within the above types there are also progressive (also known as processive) and nonprogressive types. Progressive cellulase will continue to interact with a single polysaccharide strand, nonprogressive cellulase will interact once then disengage and engage another polysaccharide strand.
Cellulase action is considered to be synergistic, as the three classes of cellulase acting together can yield much more sugar than the sum of what each yields separately. Aside from ruminants, most animals (including humans) do not produce cellulase in their bodies and can only partially break down cellulose through fermentation, limiting their ability to use energy in fibrous plant material.
Structure
Most fungal cellulases have a two-domain structure, with one catalytic domain and one cellulose binding domain, that are connected by a flexible linker. This structure is adapted for working on an insoluble substrate, and it allows the enzyme to diffuse two-dimensionally on a surface in a caterpillar-like fashion. However, there are also cellulases (mostly endoglucanases) that lack cellulose binding domains.
Both binding of substrates and catalysis depend on the three-dimensional structure of the enzyme, which arises as a consequence of protein folding. The amino acid sequence and the arrangement of residues within the active site, the position where the substrate binds, may influence factors such as the binding affinity of ligands, the stabilization of substrates within the active site, and catalysis. The substrate structure is complementary to the precise active-site structure of the enzyme. Changes in the position of residues may result in distortion of one or more of these interactions. Additional factors such as temperature, pH and metal ions influence the non-covalent interactions within the enzyme structure. The species Thermotoga maritima makes cellulases consisting of two β-sheets (protein structures) surrounding a central catalytic region that is the active site. The enzyme is categorised as an endoglucanase, which internally cleaves β-1,4-glycosidic bonds in cellulose chains, facilitating further degradation of the polymer. Different species in the same family as T. maritima make cellulases with different structures. Cellulases produced by the species Coprinopsis cinerea consist of seven protein strands in the shape of an enclosed tunnel called a β/α barrel. These enzymes hydrolyse the substrate carboxymethyl cellulose. Binding of the substrate in the active site induces a change in conformation which allows degradation of the molecule.
Cellulase complexes
In many bacteria, cellulases in vivo are complex enzyme structures organized into supramolecular complexes, the cellulosomes. They can contain, but are not limited to, five different enzymatic subunits, namely endocellulases, exocellulases, cellobiases, oxidative cellulases and cellulose phosphorylases, wherein only exocellulases and cellobiases participate in the actual hydrolysis of the β(1→4) linkage. The number of subunits making up cellulosomes can also determine the rate of enzyme activity.
Multidomain cellulases are widespread among many taxonomic groups, however, cellulases from anaerobic bacteria, found in cellulosomes, have the most complex architecture consisting of different types of modules. For example, Clostridium cellulolyticum produces 13 GH9 modular cellulases containing a different number and arrangement of catalytic-domain (CD), carbohydrate-binding module (CBM), dockerin, linker and Ig-like domain.
The cellulase complex from Trichoderma reesei, for example, comprises a component labeled C1 (57,000 daltons) that separates the chains of crystalline cellulose, an endoglucanase (about 52,000 daltons), an exoglucanase (about 61,000 dalton), and a β-glucosidase (76,000 daltons).
Numerous "signature" sequences known as dockerins and cohesins have been identified in the genomes of bacteria that produce cellulosomes. Depending on their amino acid sequence and tertiary structures, cellulases are divided into clans and families.
Multimodular cellulases are more efficient than free enzyme (with only CD) due to synergism because of the close proximity between the enzyme and the cellulosic substrate. CBM are involved in binding of cellulose whereas glycosylated linkers provide flexibility to the CD for higher activity and protease protection, as well as increased binding to the cellulose surface.
Mechanism of cellulolysis
Uses
Cellulase is used for commercial food processing in coffee. It performs hydrolysis of cellulose during drying of beans. Furthermore, cellulases are widely used in textile industry and in laundry detergents. They have also been used in the pulp and paper industry for various purposes, and they are even used for pharmaceutical applications.
Cellulase is used in the fermentation of biomass into biofuels, although this process is relatively experimental at present.
Paper and pulp
Cellulases have a wide variety of applications in the paper and pulp industry. In production and recycling processes, cellulases can be applied to improve debarking, pulping, bleaching, drainage or deinking.
The use of cellulase can also improve the quality of the paper. Cellulases affect the fiber morphology, which may lead to improved fibre-fibre bonding, resulting in increased fibre cohesion. Additional effects on the paper may include increased tensile strength, higher bulk, porosity and tissue softness.
Pharmaceutical
Cellulase is used in medicine as a treatment for phytobezoars, a form of cellulose bezoar found in the human stomach, and it has exhibited efficacy in degrading polymicrobial bacterial biofilms by hydrolyzing the β(1-4) glycosidic linkages within the structural, matrix exopolysaccharides of the extracellular polymeric substance (EPS).
Textiles
Various uses of cellulases in the textile industry include biostoning of jeans, polishing of textile fibres, softening of garments, removal of excess dye or the restoration of colour brightness.
Agriculture
Cellulases can be used in the agricultural sector for the control of plant pathogens and diseases. They are also applied to enhance seed germination and improve the root system, and may lead to improved soil quality and reduce dependence on mineral fertilisers.
Measurement
As the native substrate, cellulose, is a water-insoluble polymer, traditional reducing sugar assays using this substrate can not be employed for the measurement of cellulase activity. Analytical scientists have developed a number of alternative methods.
DNSA method: Cellulase activity was determined by incubating 0.5 ml of supernatant with 0.5 ml of 1% carboxymethylcellulose (CMC) in 0.05 M citrate buffer (pH 4.8) at 50 °C for 30 minutes. The reaction was terminated by the addition of 3 ml of dinitrosalicylic acid (DNSA) reagent. Absorbance was read at 540 nm.
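To turn the absorbance reading into an activity figure, a minimal sketch follows. The standard-curve points and the sample absorbance are hypothetical illustrative values, not from the text, and one enzyme unit (U) is assumed to release 1 µmol of reducing sugar per minute:

```python
import numpy as np

# Hypothetical glucose standard curve for the DNSA assay (A540 vs. mg/ml)
std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])  # glucose, mg/ml
std_abs = np.array([0.02, 0.18, 0.35, 0.53, 0.70, 0.88])
slope, intercept = np.polyfit(std_abs, std_conc, 1)  # invert the curve

sample_abs = 0.41  # illustrative assay reading at 540 nm
glucose_mg_ml = slope * sample_abs + intercept

reaction_ml, enzyme_ml, minutes = 1.0, 0.5, 30.0     # 0.5 ml + 0.5 ml, 30 min
umol_glucose = glucose_mg_ml * reaction_ml * 1000 / 180.16  # glucose MW, g/mol
units_per_ml = umol_glucose / (minutes * enzyme_ml)  # U per ml of supernatant
print(f"{units_per_ml:.3f} U/ml")
```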
A viscometer can be used to measure the decrease in viscosity of a solution containing a water-soluble cellulose derivative such as carboxymethyl cellulose upon incubation with a cellulase sample. The decrease in viscosity is directly proportional to the cellulase activity. While such assays are very sensitive and specific for endo-cellulase (exo-acting cellulase enzymes produce little or no change in viscosity), they are limited by the fact that it is hard to define activity in conventional enzyme units (micromoles of substrate hydrolyzed or product produced per minute).
Cellooligosaccharide substrates
The lower DP cello-oligosaccharides (DP2-6) are sufficiently soluble in water to act as viable substrates for cellulase enzymes. However, as these substrates are themselves 'reducing sugars', they are not suitable for use in traditional reducing sugar assays because they generate a high 'blank' value. However their cellulase mediated hydrolysis can be monitored by HPLC or IC methods to gain valuable information on the substrate requirements of a particular cellulase enzyme.
Reduced cello-oligosaccharide substrates
Cello-oligosaccharides can be chemically reduced through the action of sodium borohydride to produce their corresponding sugar alcohols. These compounds do not react in reducing sugar assays but their hydrolysis products do. This makes borohydride reduced cello-oligosaccharides valuable substrates for the assay of cellulase using traditional reducing sugar assays such as the Nelson-Symogyi method.
Dyed polysaccharide substrates
These substrates can be subdivided into two classes-
Insoluble chromogenic substrates: An insoluble cellulase substrate such as AZCL-HE-cellulose absorbs water to create gelatinous particles when placed in solution. This substrate is gradually depolymerised and solubilised by the action of cellulase. The reaction is terminated by adding an alkaline solution to stop enzyme activity and the reaction slurry is filtered or centrifuged. The colour in the filtrate or supernatant is measured and can be related to enzyme activity.
Soluble chromogenic substrates: A cellulase sample is incubated with a water-soluble substrate such as azo-CM-cellulose, the reaction is terminated and high molecular weight, partially hydrolysed fragments are precipitated from solution with an organic solvent such as ethanol or methoxyethanol. The suspension is mixed thoroughly, centrifuged, and the colour in the supernatant solution (due to small, soluble, dyed fragments) is measured. With the aid of a standard curve, the enzyme activity can be determined.
Enzyme coupled reagents
New reagents have been developed that allow for the specific measurement of endo-cellulase. These methods involve the use of functionalised oligosaccharide substrates in the presence of an ancillary enzyme. In one such scheme, a cellulase enzyme is able to recognise the trisaccharide fragment of cellulose and cleave this unit. The ancillary enzyme present in the reagent mixture (β-glucosidase) then acts to hydrolyse the fragment containing the chromophore or fluorophore. The assay is terminated by the addition of a basic solution that stops the enzymatic reaction and deprotonates the liberated phenolic compound to produce the phenolate species. The cellulase activity of a given sample is directly proportional to the quantity of phenolate liberated, which can be measured using a spectrophotometer. The acetal functionalisation on the non-reducing end of the trisaccharide substrate prevents the action of the ancillary β-glucosidase on the parent substrate.
See also
Cellulose 1,4-beta-cellobiosidase, an efficient cellulase
Cellulase unit, a unit for quantifying cellulase activity
References
Further reading
The Merck Manual of Diagnosis and Therapy, Chapter 24
Carbohydrate metabolism
Cellulose
Enzymes | Cellulase | [
"Chemistry"
] | 3,561 | [
"Carbohydrate metabolism",
"Carbohydrate chemistry",
"Metabolism"
] |
342,520 | https://en.wikipedia.org/wiki/Fuel%20efficiency | Fuel efficiency (or fuel economy) is a form of thermal efficiency, meaning the ratio of effort to result of a process that converts chemical potential energy contained in a carrier (fuel) into kinetic energy or work. Overall fuel efficiency may vary per device, which in turn may vary per application, and this spectrum of variance is often illustrated as a continuous energy profile. Non-transportation applications, such as industry, benefit from increased fuel efficiency, especially fossil fuel power plants or industries dealing with combustion, such as ammonia production during the Haber process.
In the context of transport, fuel economy is the energy efficiency of a particular vehicle, given as a ratio of distance traveled per unit of fuel consumed. It is dependent on several factors including engine efficiency, transmission design, and tire design. In most countries, using the metric system, fuel economy is stated as "fuel consumption" in liters per 100 kilometers (L/100 km) or kilometers per liter (km/L or kmpl). In a number of countries still using other systems, fuel economy is expressed in miles per gallon (mpg), for example in the US and usually also in the UK (imperial gallon); there is sometimes confusion as the imperial gallon is 20% larger than the US gallon so that mpg values are not directly comparable. Traditionally, litres per mil were used in Norway and Sweden, but both have aligned to the EU standard of L/100 km.
Fuel consumption is a more accurate measure of a vehicle's performance because it is linear in the fuel used, whereas fuel economy distorts comparisons of efficiency improvements. Weight-specific efficiency (efficiency per unit weight) may be stated for freight, and passenger-specific efficiency (vehicle efficiency per passenger) for passenger vehicles.
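The distortion is easy to see numerically. A minimal sketch (US-gallon figures; the example mpg values are illustrative):

```python
MPG_L100KM = 235.215  # US mpg times L/100 km is a constant product

def mpg_to_l100km(mpg: float) -> float:
    return MPG_L100KM / mpg

# The same +10 mpg improvement saves far more fuel at the low end:
for old, new in [(10, 20), (40, 50)]:
    saved = mpg_to_l100km(old) - mpg_to_l100km(new)
    print(f"{old} -> {new} mpg saves {saved:.2f} L per 100 km")
# 10 -> 20 mpg saves ~11.76 L/100 km; 40 -> 50 mpg saves only ~1.18 L/100 km
```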
Vehicle design
Fuel efficiency is dependent on many parameters of a vehicle, including its engine parameters, aerodynamic drag, weight, AC usage, fuel and rolling resistance. There have been advances in all areas of vehicle design in recent decades. Fuel efficiency of vehicles can also be improved by careful maintenance and driving habits.
Hybrid vehicles use two or more power sources for propulsion. In many designs, a small combustion engine is combined with electric motors. Kinetic energy which would otherwise be lost to heat during braking is recaptured as electrical power to improve fuel efficiency. The larger batteries in these vehicles power the car's electronics, allowing the engine to shut off and avoid prolonged idling.
Fleet efficiency
Fleet efficiency describes the average efficiency of a population of vehicles. Technological advances in efficiency may be offset by a change in buying habits with a propensity to heavier vehicles that are less fuel-efficient.
Energy efficiency terminology
Energy efficiency is similar to fuel efficiency but the input is usually in units of energy such as megajoules (MJ), kilowatt-hours (kW·h), kilocalories (kcal) or British thermal units (BTU). The inverse of "energy efficiency" is "energy intensity", or the amount of input energy required for a unit of output such as MJ/passenger-km (of passenger transport), BTU/ton-mile or kJ/t-km (of freight transport), GJ/t (for production of steel and other materials), BTU/(kW·h) (for electricity generation), or litres/100 km (of vehicle travel). Litres per 100 km is also a measure of "energy intensity" where the input is measured by the amount of fuel and the output is measured by the distance travelled. For example: Fuel economy in automobiles.
Given a heat value of a fuel, it would be trivial to convert from fuel units (such as litres of gasoline) to energy units (such as MJ) and conversely. But there are two problems with comparisons made using energy units:
There are two different heat values for any hydrogen-containing fuel which can differ by several percent (see below).
When comparing transportation energy costs, a kilowatt hour of electric energy may require an amount of fuel with heating value of 2 or 3 kilowatt hours to produce it.
Energy content of fuel
The specific energy content of a fuel is the heat energy obtained when a certain quantity is burned (such as a gallon, litre, or kilogram). It is sometimes called the heat of combustion. There exist two different values of specific heat energy for the same batch of fuel. One is the high (or gross) heat of combustion and the other is the low (or net) heat of combustion. The high value is obtained when, after the combustion, the water in the exhaust is in liquid form. For the low value, the exhaust has all the water in vapor form (steam). Since water vapor gives up heat energy when it changes from vapor to liquid, the liquid-water value is larger, as it includes the latent heat of vaporization of water. The difference between the high and low values is significant, about 8 or 9%. This accounts for most of the apparent discrepancy in the heat value of gasoline. In the U.S. the high heat values have traditionally been used, but in many other countries the low heat values are commonly used.
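As an illustration of both the fuel-unit-to-energy-unit conversion and the high/low heat value gap, a minimal sketch; the heating values are typical published figures for gasoline, used here as assumptions:

```python
HHV_MJ_PER_L = 34.2  # gross heating value of gasoline (typical figure)
LHV_MJ_PER_L = 31.5  # net heating value, roughly 8% lower

litres = 50.0
for name, hv in [("HHV", HHV_MJ_PER_L), ("LHV", LHV_MJ_PER_L)]:
    mj = litres * hv
    kwh = mj / 3.6   # 1 kWh = 3.6 MJ
    print(f"{name}: {mj:.0f} MJ = {kwh:.0f} kWh")
# HHV: 1710 MJ = 475 kWh; LHV: 1575 MJ = 438 kWh (about 8% apart)
```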
Neither the gross heat of combustion nor the net heat of combustion gives the theoretical amount of mechanical energy (work) that can be obtained from the reaction. (This is given by the change in Gibbs free energy, and is around 45.7 MJ/kg for gasoline.) The actual amount of mechanical work obtained from fuel (the inverse of the specific fuel consumption) depends on the engine. A figure of 17.6 MJ/kg is possible with a gasoline engine, and 19.1 MJ/kg for a diesel engine. See Brake-specific fuel consumption for more information.
Transportation
Fuel efficiency of motor vehicles
Driving technique
Advanced technology
The most efficient machines for converting energy to rotary motion are electric motors, as used in electric vehicles. However, electricity is not a primary energy source, so the efficiency of electricity production also has to be taken into account. Railway trains can be powered using electricity, delivered through an additional running rail, an overhead catenary system, or by on-board generators, as used in the diesel-electric locomotives common on the US and UK rail networks. Pollution produced from centralised generation of electricity is emitted at a distant power station rather than "on site". Pollution can be reduced by using more railway electrification and low carbon power for electricity. Some railways, such as the French SNCF and the Swiss Federal Railways, derive most, if not all, of their power from hydroelectric or nuclear power stations, so atmospheric pollution from their rail networks is very low. This was reflected in a study by AEA Technology comparing Eurostar train journeys with airline journeys between London and Paris, which showed the trains on average emitting 10 times less CO2 per passenger than planes, helped in part by French nuclear generation.
Hydrogen fuel cells
In the future, hydrogen cars may be commercially available. Toyota is test-marketing vehicles powered by hydrogen fuel cells in southern California, where a series of hydrogen fueling stations has been established. Powered either through chemical reactions in a fuel cell that create electricity to drive very efficient electrical motors or by directly burning hydrogen in a combustion engine (near identically to a natural gas vehicle, and similarly compatible with both natural gas and gasoline); these vehicles promise to have near-zero pollution from the tailpipe (exhaust pipe). Potentially the atmospheric pollution could be minimal, provided the hydrogen is made by electrolysis using electricity from non-polluting sources such as solar, wind or hydroelectricity or nuclear. Commercial hydrogen production uses fossil fuels and produces more carbon dioxide than hydrogen.
Because there are pollutants involved in the manufacture and destruction of a car and the production, transmission and storage of electricity and hydrogen, the label "zero pollution" applies only to the car's conversion of stored energy into movement.
In 2004, a consortium of major auto-makers — BMW, General Motors, Honda, Toyota and Volkswagen/Audi — came up with "Top Tier Detergent Gasoline Standard" to gasoline brands in the US and Canada that meet their minimum standards for detergent content and do not contain metallic additives. Top Tier gasoline contains higher levels of detergent additives in order to prevent the build-up of deposits (typically, on fuel injector and intake valve) known to reduce fuel economy and engine performance.
In microgravity
How fuel combusts affects how much energy is produced. The National Aeronautics and Space Administration (NASA) has investigated fuel consumption in microgravity.
The common distribution of a flame under normal gravity conditions depends on convection, because soot tends to rise to the top of a flame, such as in a candle, making the flame yellow. In microgravity or zero gravity, such as an environment in outer space, convection no longer occurs, and the flame becomes spherical, with a tendency to become more blue and more efficient. There are several possible explanations for this difference, of which the most likely is that the temperature is sufficiently evenly distributed that soot is not formed and complete combustion occurs. Experiments by NASA reveal that diffusion flames in microgravity allow more soot to be completely oxidised after it is produced than diffusion flames on Earth, because of a series of mechanisms that behave differently in microgravity compared to normal gravity conditions. Premixed flames in microgravity burn at a much slower rate and more efficiently than even a candle on Earth, and last much longer.
See also
References
External links
US Government website on fuel economy
UK DfT comparisons on road and rail
NASA Offers a $1.5 Million Prize for a Fast and Fuel-Efficient Aircraft
Car Fuel Consumption Official Figures
Spritmonitor.de "the most fuel efficient cars" - Database of thousands of (mostly German) car owners' actual fuel consumption figures (cf. Spritmonitor)
Searchable fuel economy data from the EPA - United States Environmental Protection Agency
NY Times: A Road Test of Alternative Fuel Visions
Energy economics
Physical quantities
Energy efficiency
Transport economics | Fuel efficiency | [
"Physics",
"Mathematics",
"Environmental_science"
] | 2,084 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Energy economics",
"Physical properties",
"Environmental social science"
] |
342,526 | https://en.wikipedia.org/wiki/Villa | A villa is a type of house that was originally an ancient Roman upper class country house that originally provided an escape from urban life. Since its origins in the Roman villa, the idea and function of a villa have evolved considerably. After the fall of the Roman Republic, villas became small farming compounds, which were increasingly fortified in Late Antiquity, sometimes transferred to the Church for reuse as a monastery. Then they gradually re-evolved through the Middle Ages into elegant upper-class country homes. In the early modern period, any comfortable detached house with a garden near a city or town was likely to be described as a villa; most survivals have now been engulfed by suburbia. In modern parlance, "villa" can refer to various types and sizes of residences, ranging from the suburban semi-detached double villa to, in some countries, especially around the Mediterranean, residences of above average size in the countryside.
Roman
Roman villas included:
the villa urbana, a suburban or country seat that could easily be reached from Rome or another city for a night or two. They often featured decorated rooms and porticoes.
the villa rustica, the farm-house estate that was permanently occupied by the servants who had charge generally of the estate, which would centre on the villa itself, perhaps only seasonally occupied. The Roman villae rusticae at the heart of latifundia were the earliest versions of what later and elsewhere became called manors and plantations.
the otium villa, for rural retirement or pleasure.
In terms of design, there was often little difference in the main residence between these types at any particular level of size, but the presence or absence of farm outbuildings reflected the size and function of the estate.
Not included as villae were the domus, city houses for the élite and privileged classes, and the insulae, blocks of apartment buildings for the rest of the population. In Satyricon (1st century CE), Petronius described the wide range of Roman dwellings. Another type of villae is the "villa maritima", a seaside villa, located on the coast.
A concentration of Imperial villas existed on the Gulf of Naples, on the Isle of Capri, at Monte Circeo and at Antium. Examples include the Villa of the Papyri in Herculaneum; and the Villa of the Mysteries and Villa of the Vettii in Pompeii.
There was an important villa maritima in Barcola near Trieste. This villa was located directly on the coast and was divided into terraces in a representation area in which luxury and power was displayed, a separate living area, a garden, some facilities open to the sea and a thermal bath. Not far from this noble place, which was already popular with the Romans because of its favorable microclimate, one of the most important Villa Maritima of its time, the Miramare Castle, was built in the 19th century.
Wealthy Romans also escaped the summer heat in the hills round Rome, especially around Tibur (Tivoli and Frascati), such as at Hadrian's Villa. Cicero allegedly possessed no fewer than seven villas, the oldest of which was near Arpinum, which he inherited. Pliny the Younger had three or four, of which the example near Laurentium is the best known from his descriptions.
Roman writers refer with satisfaction to the self-sufficiency of their latifundium villas, where they drank their own wine and pressed their own oil. This was an affectation of urban aristocrats playing at being old-fashioned virtuous Roman farmers; it has been said that the economic independence of later rural villas was a symptom of the increasing economic fragmentation of the Roman Empire.
In Roman Britannia
Archaeologists have meticulously examined numerous Roman villas in England. Like their Italian counterparts, they were complete working agrarian societies of fields and vineyards, perhaps even tileworks or quarries, ranged round a high-status power centre with its baths and gardens. The grand villa at Woodchester preserved its mosaic floors when the Anglo-Saxon parish church was built (not by chance) upon its site. Grave-diggers preparing for burials in the churchyard as late as the 18th century had to punch through the intact mosaic floors. The even more palatial villa rustica at Fishbourne near Chichester was built (uncharacteristically) as a large open rectangle, with porticos enclosing gardens entered through a portico. Towards the end of the 3rd century, Roman towns in Britain ceased to expand: like patricians near the centre of the empire, Roman Britons withdrew from the cities to their villas, which entered on a palatial building phase, a "golden age" of villa life. Villae rusticae were essential to the Empire's economy.
Two kinds of villa-plan in Roman Britain may be characteristic of Roman villas in general. The more usual plan extended wings of rooms all opening onto a linking portico, which might be extended at right angles, even to enclose a courtyard. The other kind featured an aisled central hall like a basilica, suggesting the villa owner's magisterial role. The villa buildings were often independent structures linked by their enclosed courtyards. Timber-framed construction, carefully fitted with mortises and tenons and dowelled together, set on stone footings, were the rule, replaced by stone buildings for the important ceremonial rooms. Traces of window glass have been found, as well as ironwork window grilles.
Monastery villas of Late Antiquity
With the decline and collapse of the Western Roman Empire in the fourth and fifth centuries, the villas were more and more isolated and came to be protected by walls. In England the villas were abandoned, looted, and burned by Anglo-Saxon invaders in the fifth century, but the concept of an isolated, self-sufficient agrarian working community, housed close together, survived into Anglo-Saxon culture as the vill, with its inhabitants – if formally bound to the land – as villeins.
In regions on the Continent, aristocrats and territorial magnates donated large working villas and overgrown abandoned ones to individual monks; these might become the nuclei of monasteries. In this way, the Italian villa system of late Antiquity survived into the early Medieval period in the form of monasteries that withstood the disruptions of the Gothic War (535–554) and the Lombards. About 529 Benedict of Nursia established his influential monastery of Monte Cassino in the ruins of a villa at Subiaco that had belonged to Nero.
From the sixth to the eighth century, Gallo-Roman villas in the Merovingian royal fisc were repeatedly donated as sites for monasteries under royal patronage in Gaul – Saint-Maur-des-Fossés and Fleury Abbey provide examples. In Germany a famous example is Echternach; as late as 698, Willibrord established an abbey at a Roman villa of Echternach near Trier, presented to him by Irmina, daughter of Dagobert II, king of the Franks. Kintzheim was Villa Regis, the "villa of the king". Around 590, Saint Eligius was born in a highly placed Gallo-Roman family at the 'villa' of Chaptelat near Limoges, in Aquitaine (now France). The abbey at Stavelot was founded ca 650 on the domain of a former villa near Liège and the abbey of Vézelay had a similar founding.
Post-Roman era
As Europe's influence spread to other cultures, the form and use of the villa spread as well. In post-Roman times a villa referred to a self-sufficient, usually fortified Italian or Gallo-Roman farmstead. It was economically as self-sufficient as a village, and its inhabitants, who might be legally tied to it as serfs, were villeins. The Merovingian Franks inherited the concept, followed by the Carolingian French, but the later French term was basti or bastide.
Villa/Vila (or its cognates) is part of many Spanish and Portuguese placenames, like Vila Real and Villadiego: a villa/vila is a town with a charter (fuero or foral) of lesser importance than a ciudad/cidade ("city"). When it is associated with a personal name, villa was probably used in the original sense of a country estate rather than a chartered town. Later evolution has made the Hispanic distinction between villas and ciudades a purely honorific one. Madrid is the Villa y Corte, the villa considered to be separate from the formerly mobile royal court, but the much smaller Ciudad Real was declared ciudad by the Spanish crown.
Italian Renaissance
Tuscany
In 14th and 15th century Italy, a villa once more connoted a country house, like the first Medici villas, the Villa del Trebbio and that at Cafaggiolo, both strong fortified houses built in the 14th century in the Mugello region near Florence. In 1450, Giovanni de' Medici commenced on a hillside the Villa Medici in Fiesole, Tuscany, probably the first villa created under the instructions of Leon Battista Alberti, who theorized the features of the new idea of villa in his De re aedificatoria.
These first examples of Renaissance villa predate the age of Lorenzo de' Medici, who added the Villa di Poggio a Caiano by Giuliano da Sangallo, begun in 1470, in Poggio a Caiano, Province of Prato, Tuscany.
From Tuscany the idea of villa was spread again through Renaissance Italy and Europe.
Tuscan villa gardens
The Quattrocento villa gardens were treated as a fundamental and aesthetic link between a residential building and the outdoors, with views over a humanized agricultural landscape, at that time the only desirable aspect of nature. Later villas and gardens include the Palazzo Pitti and Boboli Gardens in Florence, and the Villa di Pratolino in Vaglia.
Rome
Rome had more than its share of villas within easy reach of the small sixteenth-century city: the progenitor, the first villa suburbana built since Antiquity, was the Belvedere or palazzetto, designed by Antonio del Pollaiuolo and built on the slope above the Vatican Palace.
The Villa Madama, the design of which is attributed to Raphael and was carried out by Giulio Romano in 1520, was one of the most influential private houses ever built; elements derived from Villa Madama appeared in villas through the 19th century. Villa Albani was built near the Porta Salaria. Others are the Villa Borghese; the Villa Doria Pamphili (1650); and the Villa Giulia of Pope Julius III (1550), designed by Vignola. The Roman villas Villa Ludovisi and Villa Montalto were destroyed during the late nineteenth century in the wake of the real estate bubble that took place in Rome after the seat of government of a united Italy was established there.
The cool hills of Frascati gained the Villa Aldobrandini (1592); the Villa Falconieri and the Villa Mondragone. The Villa d'Este near Tivoli is famous for the water play in its terraced gardens. The Villa Medici was on the edge of Rome, on the Pincian Hill, when it was built in 1540. Besides these designed for seasonal pleasure, usually located within easy distance of a city, other Italian villas were remade from a rocca or castello, as the family seat of power, such as Villa Caprarola for the Farnese.
Near Siena in Tuscany, the Villa Cetinale was built by Cardinal Flavio Chigi. He employed Carlo Fontana, pupil of Gian Lorenzo Bernini to transform the villa and dramatic gardens in a Roman Baroque style by 1680. The Villa Lante garden is one of the most sublime creations of the Italian villa in the landscape, completed in the 17th century.
Venice
In the later 16th century in the northeastern Italian Peninsula the Palladian villas of the Veneto, designed by Andrea Palladio (1508–1580), were built in Vicenza in the Republic of Venice. Palladio always designed his villas with reference to their setting. He often unified all the farm buildings into the architecture of his extended villas while focusing on symmetry and perfect proportion.
Examples are the Villa Emo, the Villa Godi, the Villa Forni Cerato, the Villa Capra "La Rotonda", and Villa Foscari.
The Villas are grouped into an association (Associazione Ville Venete) and offer touristic itineraries and accommodation possibilities.
Villas elsewhere
17th century
Soon after in Greenwich England, following his 1613–1615 Grand Tour, Inigo Jones designed and built the Queen's House between 1615 and 1617 in an early Palladian architecture style adaptation in another country. The Palladian villa style renewed its influence in different countries and eras and remained influential for over four hundred years, with the Neo-Palladian a part of the late 17th century and on Renaissance Revival architecture period.
18th and 19th centuries
In the early 18th century the English took up the term, and applied it to compact houses in the country, especially those accessible from London: Chiswick House is an example of such a "party villa". Thanks to the revival of interest in Palladio and Inigo Jones, soon Neo-Palladian villas dotted the valley of the River Thames and English countryside. Marble Hill House in England was conceived originally as a "villa" in the 18th-century sense.
In many ways the late 18th century Monticello, by Thomas Jefferson in Virginia, United States is a Palladian Revival villa. Other examples of the period and style are Hammond-Harwood House in Annapolis, Maryland; and many pre-American Civil War or antebellum plantations, such as Westover Plantation and many other James River plantations as well dozens of Antebellum era plantations in the rest of the Old South functioned as the Roman Latifundium villas had. A later revival, in the Gilded Age and early 20th century, produced The Breakers in Newport, Rhode Island, Filoli in Woodside, California, and Dumbarton Oaks in Georgetown, Washington, D.C.; by architects-landscape architects such as Richard Morris Hunt, Willis Polk, and Beatrix Farrand.
In the nineteenth century, the term villa was extended to describe any large suburban house that was free-standing in a landscaped plot of ground. By the time 'semi-detached villas' were being erected at the turn of the twentieth century, the term collapsed under its extension and overuse.
The second half of the nineteenth century saw the creation of large "Villenkolonien" in the German speaking countries, wealthy residential areas that were completely made up of large mansion houses and often built to an artfully created masterplan. Also many large mansions for the wealthy German industrialists were built, such as Villa Hügel in Essen. The Villenkolonie of Lichterfelde West in Berlin was conceived after an extended trip by the architect through the South of England.
Representative historicist mansions in Germany include the Heiligendamm and other resort architecture mansions at the Baltic Sea, Rose Island and King's House on Schachen in the Bavarian Alps, Villa Dessauer in Bamberg, Villa Wahnfried in Bayreuth, Drachenburg near Bonn, Hammerschmidt Villa in Bonn, the Liebermann Villa and Britz House in Berlin, Albrechtsberg, Eckberg, Villa Stockhausen and in Dresden, Villa Waldberta in Feldafing, in Frankfurt, Jenisch House and Budge-Palais in Hamburg, and in Königstein, Villa Stuck and in Munich, Schloss Klink at Lake Müritz, Villa Ludwigshöhe in Rhineland-Palatinate, Villa Haux in Stuttgart and Weinberg House in Waren.
In France the Château de Ferrières is an example of the Italian Neo-Renaissance style villa – and in Britain the Mentmore Towers. A representative building of this style in Germany is Villa Haas (designed by Ludwig Hofmann) in Hesse.
Villa Hakasalmi in Helsinki (built in 1834–46) represents Empire-era villa architecture. It was the home of Aurora Karamzin (1808–1902) at the end of the 19th century and is now the city museum of Helsinki, Finland.
20th – 21st centuries
Europe
During the 19th and 20th century, the term "villa" became widespread for detached mansions in Europe. Special forms are for instance spa villas (Kurvillen in German) and seaside villas (Bädervillen in German), that became especially popular at the end of the 19th century. The tradition established back then continued throughout the 20th century and even until today.
Another trend was the erection of rather minimalist mansions in the Bauhaus style since the 1920s, that also continues until today.
In Denmark, Norway and Sweden "villa" denotes most forms of single-family detached homes, regardless of size and standard.
Americas
The villa concept lived and lives on in the haciendas of Latin America and the estancias of Brazil and Argentina. The oldest are original Portuguese and Spanish Colonial architecture; followed after independences in the Americas from Spain and Portugal, by the Spanish Colonial Revival style with regional variations. In the 20th century International Style villas were designed by Roberto Burle Marx, Oscar Niemeyer, Luis Barragán, and other architects developing a unique Euro-Latin synthesized aesthetic.
Villas are particularly well represented in California and the West Coast of the United States, where they were originally commissioned by well travelled "upper-class" patrons moving on from the Queen Anne style Victorian architecture and Beaux-Arts architecture. Communities such as Montecito, Pasadena, Bel Air, Beverly Hills, and San Marino in Southern California, and Atherton and Piedmont in the San Francisco Bay Area are a few examples of villa density.
Mediterranean Revival architecture, in its various iterations, has been used consistently in that region and in Florida over the last century. Just a few of the notable early architects were Wallace Neff, Addison Mizner, Stanford White, and George Washington Smith. A few examples are the Harold Lloyd Estate in Beverly Hills, California; the Medici-scale Hearst Castle on the Central Coast of California; Villa Montalvo in the Santa Cruz Mountains of Saratoga, California; and Villa Vizcaya in Coconut Grove, Miami. American Craftsman versions are the Gamble House and the villas by Greene and Greene in Pasadena, California.
Modern villas
Modern architecture has produced some important examples of buildings known as villas:
Villa Noailles by Robert Mallet-Stevens in Hyères, France
Villa Savoye by Le Corbusier in Poissy, France
Villa Mairea by Alvar Aalto in Noormarkku, Finland
Villa Tugendhat by Ludwig Mies van der Rohe in Brno, Czech Republic
Villa Lewaro by Vertner Tandy in Irvington, New York
Country-villa examples:
Hollyhock House (1919) by Frank Lloyd Wright in Hollywood
Gropius House by Walter Gropius (1937) in Lincoln, Massachusetts
Fallingwater by Frank Lloyd Wright (1939) in Pennsylvania, U.S.
Farnsworth House by Ludwig Mies van der Rohe in Plano, Illinois
Kaufmann Desert House by Richard Neutra (1946) in Palm Springs, California
Auldbrass Plantation by Frank Lloyd Wright (1940–1951) in Beaufort County, South Carolina
Palácio da Alvorada by Oscar Niemeyer (1958) in Brasília, Brazil
Getty Villa, in Pacific Palisades, Los Angeles
Other
Today, the term "villa" is often applied to vacation rental properties. In the United Kingdom the term is used for high quality detached homes in warm destinations, particularly Florida and the Mediterranean. The term is also used in Pakistan, and in some of the Caribbean islands such as Jamaica, Saint Barthélemy, Saint Martin, Guadeloupe, British Virgin Islands, and others. It is similar for the coastal resort areas of Baja California Sur and mainland Mexico, and for hospitality industry destination resort "luxury bungalows" in various locations worldwide.
In Indonesia, the term "villa" is applied to Dutch colonial country houses (landhuis). Nowadays, the term is more popularly applied to vacation rental usually located in countryside area.
In Australia, "villas" or "villa units" are terms used to describe a type of townhouse complex containing possibly smaller attached or detached houses of up to 3–4 bedrooms, built since the early 1980s.
In New Zealand, "villa" refers almost exclusively to Victorian and Edwardian wooden weatherboard houses mainly built between 1880 and 1914, characterised by high ceilings, sash windows, and a long entrance hall.
In South Korea, the term "villa" refers to small multi-household house with 4 floors or less.
In Cambodia, "villa" is used as a loanword in the local language of Khmer, and is generally used to describe any type of detached townhouse that features yard space. The term does not apply to any particular architectural style or size, the only features that distinguish a Khmer villa from another building are the yard space and being fully detached. The terms "twin-villa" and "mini-villa" have been coined meaning semi-detached and smaller versions respectively. Generally, these would be more luxurious and spacious houses than the more common row houses. The yard space would also typically feature some form of garden, trees or greenery. Generally, these would be properties in major cities, where there is more wealth and hence more luxurious houses.
See also
Dacha
Estate
Great house
Manor house
Mansion
Ultimate bungalow
Notes
Architectural history
House styles
House types
Architecture in Italy
Vacation rental
Tourist accommodations | Villa | [
"Engineering"
] | 4,459 | [
"Architectural history",
"Architecture"
] |
342,592 | https://en.wikipedia.org/wiki/Touchard%20polynomials | The Touchard polynomials, studied by , also called the exponential polynomials or Bell polynomials, comprise a polynomial sequence of binomial type defined by
Tn(x) = Σ_{k=0}^{n} S(n, k) x^k,
where S(n, k) is a Stirling number of the second kind, i.e., the number of partitions of a set of size n into k disjoint non-empty subsets.
The first few Touchard polynomials are T0(x) = 1, T1(x) = x, T2(x) = x^2 + x, T3(x) = x^3 + 3x^2 + x, and T4(x) = x^4 + 6x^3 + 7x^2 + x.
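As a concrete illustration, the following minimal Python sketch (not part of the original article; the helper names stirling2, touchard_coeffs and touchard are invented here) builds the polynomials from the Stirling-number definition and checks that Tn(1) reproduces the Bell numbers 1, 1, 2, 5, 15, 52:

```python
# Minimal sketch (not from the article): build T_n(x) from the Stirling-number
# definition and check that T_n(1) gives the Bell numbers.

def stirling2(n, k):
    """Stirling number of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def touchard_coeffs(n):
    """Coefficients [S(n,0), ..., S(n,n)] of T_n(x), lowest power first."""
    return [stirling2(n, k) for k in range(n + 1)]

def touchard(n, x):
    """Evaluate the nth Touchard polynomial at x."""
    return sum(c * x ** k for k, c in enumerate(touchard_coeffs(n)))

for n in range(6):
    print(n, touchard_coeffs(n), "T_n(1) =", touchard(n, 1))
```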
Properties
Basic properties
The value at 1 of the nth Touchard polynomial is the nth Bell number, i.e., the number of partitions of a set of size n: Tn(1) = Bn.
If X is a random variable with a Poisson distribution with expected value λ, then its nth moment is E(Xn) = Tn(λ), leading to the definition:
Using this fact one can quickly prove that this polynomial sequence is of binomial type, i.e., it satisfies the sequence of identities Tn(x + y) = Σ_{k=0}^{n} C(n, k) Tk(x) T_{n−k}(y), where C(n, k) denotes a binomial coefficient.
The Touchard polynomials constitute the only polynomial sequence of binomial type with the coefficient of x equal 1 in every polynomial.
The Touchard polynomials satisfy the Rodrigues-like formula:
The Touchard polynomials satisfy the recurrence relation
T_{n+1}(x) = x Σ_{k=0}^{n} C(n, k) Tk(x)
and
T_{n+1}(x) = x (1 + d/dx) Tn(x).
In the case x = 1, this reduces to the recurrence formula for the Bell numbers.
A generalization of both this formula and the definition, is a generalization of Spivey's formula
Using the umbral notation Tn(x)=Tn(x), these formulas become:
The generating function of the Touchard polynomials is Σ_{n=0}^{∞} Tn(x) t^n / n! = e^{x(e^t − 1)},
which corresponds to the generating function of Stirling numbers of the second kind.
Touchard polynomials have contour integral representation:
Zeroes
All zeroes of the Touchard polynomials are real and negative. This fact was observed by L. H. Harper in 1967.
The absolute value of the leftmost zero is bounded from above by
although it is conjectured that the leftmost zero grows linearly with the index n.
The Mahler measure of the Touchard polynomials can be estimated as follows:
where and are the smallest of the maximum two k indices such that
and
are maximal, respectively.
Generalizations
Complete Bell polynomial may be viewed as a multivariate generalization of Touchard polynomial , since
The Touchard polynomials (and thereby the Bell numbers) can be generalized, using the real part of the above integral, to non-integer order:
See also
Bell polynomials
References
Polynomials | Touchard polynomials | [
"Mathematics"
] | 475 | [
"Polynomials",
"Algebra"
] |
342,602 | https://en.wikipedia.org/wiki/Lehmann%E2%80%93Scheff%C3%A9%20theorem | In statistics, the Lehmann–Scheffé theorem is a prominent statement, tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation. The theorem states that any estimator that is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The Lehmann–Scheffé theorem is named after Erich Leo Lehmann and Henry Scheffé, given their two early papers.
If T is a complete sufficient statistic for θ and E(g(T)) = τ(θ) then g(T) is the uniformly minimum-variance unbiased estimator (UMVUE) of τ(θ).
Statement
Let X1, X2, ..., Xn be a random sample from a distribution that has p.d.f (or p.m.f in the discrete case) f(x; θ), where θ ∈ Ω is a parameter in the parameter space. Suppose Y = u(X1, X2, ..., Xn) is a sufficient statistic for θ, and let { fY(y; θ) : θ ∈ Ω } be a complete family. If E[φ(Y)] = θ then φ(Y) is the unique MVUE of θ.
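For intuition, the following Monte Carlo sketch uses a textbook example assumed here rather than the article's scale-uniform example (all function names are invented): for X1, ..., Xn ~ Uniform(0, θ), T = max(Xi) is a complete sufficient statistic and g(T) = (n + 1)/n · T is unbiased for θ, hence the UMVUE by the theorem. The sketch compares its variance with that of the crude unbiased estimator 2 · (sample mean):

```python
# Illustrative sketch (invented example): compare the UMVUE (n+1)/n * max(X) with the
# crude unbiased estimator 2 * sample mean for Uniform(0, theta) data.
import random

def simulate(theta=3.0, n=10, reps=100_000, seed=1):
    rng = random.Random(seed)
    crude, umvue = [], []
    for _ in range(reps):
        x = [rng.uniform(0, theta) for _ in range(n)]
        crude.append(2 * sum(x) / n)          # unbiased, but not a function of max(x)
        umvue.append((n + 1) / n * max(x))    # unbiased function of the complete sufficient statistic

    def mean(v):
        return sum(v) / len(v)

    def var(v):
        m = mean(v)
        return sum((t - m) ** 2 for t in v) / len(v)

    print("crude estimator: mean %.3f  variance %.4f" % (mean(crude), var(crude)))
    print("UMVUE          : mean %.3f  variance %.4f" % (mean(umvue), var(umvue)))

simulate()
```

Both estimators are unbiased, but the one built on the complete sufficient statistic shows a markedly smaller variance, as the theorem guarantees.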
Proof
By the Rao–Blackwell theorem, if is an unbiased estimator of θ then defines an unbiased estimator of θ with the property that its variance is not greater than that of .
Now we show that this function is unique. Suppose is another candidate MVUE estimator of θ. Then again defines an unbiased estimator of θ with the property that its variance is not greater than that of . Then
Since is a complete family
and therefore the function is the unique function of Y with variance not greater than that of any other unbiased estimator. We conclude that is the MVUE.
Example for when using a non-complete minimal sufficient statistic
An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete, was provided by Galili and Meilijson in 2016. Let be a random sample from a scale-uniform distribution with unknown mean and known design parameter . In the search for "best" possible unbiased estimators for , it is natural to consider as an initial (crude) unbiased estimator for and then try to improve it. Since is not a function of , the minimal sufficient statistic for (where and ), it may be improved using the Rao–Blackwell theorem as follows:
However, the following unbiased estimator can be shown to have lower variance:
And in fact, it could be even further improved when using the following estimator:
The model is a scale model. Optimal equivariant estimators can then be derived for loss functions that are invariant.
See also
Basu's theorem
Completeness (statistics)
Rao–Blackwell theorem
References
Theorems in statistics
Estimation theory | Lehmann–Scheffé theorem | [
"Mathematics"
] | 582 | [
"Mathematical theorems",
"Mathematical problems",
"Theorems in statistics"
] |
342,649 | https://en.wikipedia.org/wiki/Resveratrol | Resveratrol (3,5,4′-trihydroxy-trans-stilbene) is a stilbenoid, a type of natural phenol or polyphenol and a phytoalexin produced by several plants in response to injury or when the plant is under attack by pathogens, such as bacteria or fungi. Sources of resveratrol in food include the skin of grapes, blueberries, raspberries, mulberries, and peanuts.
Although commonly used as a dietary supplement and studied in laboratory models of human diseases, there is no high-quality evidence that resveratrol improves lifespan or has a substantial effect on any human disease.
Research
Resveratrol has been studied for its potential therapeutic use, with little evidence of anti-disease effects or health benefits in humans.
Cardiovascular disease
There is no evidence of benefit from resveratrol in people who already have heart disease. A 2018 meta-analysis found no effect on systolic or diastolic blood pressure; a sub-analysis revealed a 2 mmHg decrease in systolic pressure only from resveratrol doses of 300 mg per day, and only in diabetic people. A 2014 Chinese meta-analysis found no effect on systolic or diastolic blood pressure; a sub-analysis found an 11.90 mmHg reduction in systolic blood pressure from resveratrol doses of 150 mg per day.
Cancer
There is no evidence of an effect of resveratrol on cancer in humans.
Metabolic syndrome
There is no conclusive evidence for an effect of resveratrol on human metabolic syndrome. One 2015 review found little evidence for use of resveratrol to treat diabetes. A 2015 meta-analysis found little evidence for an effect of resveratrol on diabetes biomarkers.
One review found limited evidence that resveratrol lowered fasting plasma glucose in people with diabetes. Two reviews indicated that resveratrol supplementation may reduce body weight and body mass index, but not fat mass or total blood cholesterol. A 2018 review found that resveratrol supplementation may reduce biomarkers of inflammation, TNF-α and C-reactive protein.
Lifespan
There is insufficient evidence to indicate that consuming resveratrol has an effect on human lifespan.
Cognition
Resveratrol has been assessed for a possible effect on cognition, but with mixed evidence for an effect. One review concluded that resveratrol had no effect on neurological function, but reported that supplementation improved recognition and mood, although there were inconsistencies in study designs and results.
Alzheimer's disease
A 2022 meta-analysis provided preliminary evidence that resveratrol, alone or in combination with glucose and malate, may slow cognitive decline in Alzheimer's disease.
Diabetes
Although animal experiments have found some evidence that resveratrol may help improve insulin sensitivity and so potentially help manage diabetes, subsequent research on people is limited and does not support the use of resveratrol for this purpose.
Other
There is no significant evidence that resveratrol affects vascular endothelial function, neuroinflammation, skin infections or aging skin. A 2019 review of human studies found mixed effects of resveratrol on certain bone biomarkers, such as increases in blood and bone alkaline phosphatase, while reporting no effect on other biomarkers, such as calcium and collagen.
Pharmacology
Pharmacodynamics
Resveratrol has been identified as a pan-assay interference compound, which produces positive results in many different laboratory assays. Its ability for varied interactions may be due to direct effects on cell membranes.
As of 2015, many specific biological targets for resveratrol had been identified, including NQO2 (alone and in interaction with AKT1), GSTP1, estrogen receptor beta, CBR1, and integrin αVβ. It was unclear at that time if any or all of these were responsible for the observed effects in cells and model organisms.
Pharmacokinetics
The viability of an oral delivery method is unlikely due to the low aqueous solubility of the molecule. The bioavailability of resveratrol is about 0.5% due to extensive hepatic glucuronidation and sulfation. Glucuronidation occurs in the intestine as well as in the liver, whereas sulfation occurs not only in the liver but also in the intestine and through microbial gut activity. Due to rapid metabolism, the half-life of resveratrol is short (about 8–14 minutes), but the half-life of the sulfate and glucuronide metabolites is above 9 hours.
Metabolism
Resveratrol is extensively metabolized in the body, with the liver and intestines as the major sites of its metabolism. Liver metabolites are products of phase II (conjugation) enzymes, which are themselves induced by resveratrol in vitro.
Chemistry
Resveratrol (3,5,4'-trihydroxystilbene) is a stilbenoid, a derivative of stilbene. It exists as two geometric isomers: cis- (Z) and trans- (E), with the trans-isomer shown in the top image. Resveratrol exists conjugated to glucose.
The trans- form can undergo photoisomerization to the cis- form when exposed to ultraviolet irradiation.
UV irradiation to cis-resveratrol induces further photochemical reaction, producing a fluorescent molecule named "Resveratrone".
Trans-resveratrol in the powder form was found to be stable under "accelerated stability" conditions of 75% humidity and 40 °C in the presence of air. The trans isomer is also stabilized by the presence of transport proteins. Resveratrol content also was stable in the skins of grapes and pomace taken after fermentation and stored for a long period. 1H- and 13C-NMR data for the four most common forms of resveratrol are reported in the literature.
Biosynthesis
Resveratrol is produced in plants via the enzyme resveratrol synthase (stilbene synthase). Its immediate precursor is a tetraketide derived from malonyl CoA and 4-coumaroyl CoA. The latter is derived from phenylalanine.
Biotransformation
The grapevine fungal pathogen Botrytis cinerea is able to oxidise resveratrol into metabolites showing attenuated antifungal activities. Those include the resveratrol dimers restrytisol A, B, and C, resveratrol trans-dehydrodimer, leachinol F, and pallidol. The soil bacterium Bacillus cereus can be used to transform resveratrol into piceid (resveratrol 3-O-beta-D-glucoside).
Adverse effects
Only a few human studies have been done to determine the adverse effects of resveratrol, all of them preliminary with small participant numbers. Adverse effects resulted mainly from long-term use (weeks or longer) and daily doses of 1000 mg or higher, causing nausea, stomach pain, flatulence, and diarrhea. A review of 136 patients in seven studies who were given more than 500 mg for a month showed 25 cases of diarrhea, 8 cases of abdominal pain, 7 cases of nausea, and 5 cases of flatulence. A 2018 review of resveratrol effects on blood pressure found that some people had increased frequency of bowel movements and loose stools.
Occurrences
Plants
Resveratrol is a phytoalexin, a class of compounds produced by many plants when they are infected by pathogens or physically harmed by cutting, crushing, or ultraviolet radiation.
Plants that synthesize resveratrol include knotweeds, pine trees including Scots pine and Eastern white pine, grape vines, raspberries, mulberries, peanut plants, cocoa bushes, and Vaccinium shrubs that produce berries, including blueberries, cranberries, and bilberries.
Foods
The levels of resveratrol found in food varies considerably, even in the same food from season to season and batch to batch.
Wine and grape juice
Resveratrol concentrations in red wines vary widely, ranging from nondetectable levels to 14.3 mg/L (62.7 μM) trans-resveratrol. Levels of cis-resveratrol follow the same trend as trans-resveratrol.
In general, wines made from grapes of the Pinot noir and St. Laurent varieties showed the highest level of trans-resveratrol, though no wine or region can yet be said to produce wines with significantly higher concentrations than any other wine or region. Champagne and vinegar also contain appreciable levels of resveratrol.
Red wine contains between 0.2 and 5.8 mg/L, depending on the grape variety. White wine has much less because red wine is fermented with the skins, allowing the wine to extract the resveratrol, whereas white wine is fermented after the skin has been removed. The composition of wine is different from that of grapes since the extraction of resveratrol from grapes depends on the duration of the skin contact, and the resveratrol 3-glucosides are in part hydrolysed, yielding both trans- and cis-resveratrol.
Through its extraction (i.e. from wood chips or other sources) during artificial ageing, resveratrol is added to red wines to improve their color and sensory properties.
Selected foods
Ounce for ounce, peanuts have about 25% as much resveratrol as red wine. Peanuts, especially sprouted peanuts, have a content similar to grapes in a range of 2.3 to 4.5 μg/g before sprouting, and after sprouting, in a range of 11.7 to 25.7 μg/g, depending on peanut cultivar.
Mulberries (especially the skin) are a source of as much as 50 micrograms of resveratrol per gram dry weight.
Most US supplements of resveratrol are derived from the root of Reynoutria japonica (also called Japanese knotweed, Hu Zhang, etc.)
History
The first mention of resveratrol was in a Japanese article in 1939 by Michio Takaoka, who isolated it from Veratrum album, variety grandiflorum, and later, in 1963, from the roots of Japanese knotweed. In 2004, Harvard University professor David Sinclair co-founded Sirtris Pharmaceuticals, the initial product of which was a resveratrol formulation. Sirtris was purchased and made a subsidiary of GlaxoSmithKline in 2008 for $720 million and shut down in 2013, without successful drug development.
Related compounds
Dihydro-resveratrol
Epsilon-viniferin, Pallidol and Quadrangularin A, three different resveratrol dimers
Elafibranor, a structurally related compound that acts as a dual PPARα/δ agonist
THSG, a glycoside compound found in He Shou Wu which is very similar to resveratrol.
Trans-diptoindonesin B, a resveratrol trimer
Hopeaphenol, a resveratrol tetramer
Oxyresveratrol, the aglycone of mulberroside A, a compound found in Morus alba, the white mulberry
Piceatannol, an active metabolite of resveratrol found in red wine
Piceid, a resveratrol glucoside
Pterostilbene, a doubly methylated resveratrol
4'-Methoxy-(E)-resveratrol 3-O-rutinoside, a compound found in the stem bark of Boswellia dalzielii
Rhaponticin a glucoside of the stilbenoid rhapontigenin, found in rhubarb rhizomes
See also
Phenolic compounds in wine
Polyphenol antioxidant
List of phytochemicals in food
Phytochemistry
Secondary metabolites
References
External links
Aromatase inhibitors
GPER agonists
Phytoalexins
Phytoestrogens
Stilbenoids | Resveratrol | [
"Chemistry"
] | 2,622 | [
"Phytoalexins",
"Chemical ecology"
] |
342,684 | https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion%20principle | In combinatorics, the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets; symbolically expressed as
|A ∪ B| = |A| + |B| − |A ∩ B|,
where A and B are two finite sets and |S| indicates the cardinality of a set S (which may be considered as the number of elements of the set, if the set is finite). The formula expresses the fact that the sum of the sizes of the two sets may be too large since some elements may be counted twice. The double-counted elements are those in the intersection of the two sets and the count is corrected by subtracting the size of the intersection.
The inclusion-exclusion principle, being a generalization of the two-set case, is perhaps more clearly seen in the case of three sets, which for the sets A, B and C is given by
|A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |A ∩ C| − |B ∩ C| + |A ∩ B ∩ C|.
This formula can be verified by counting how many times each region in the Venn diagram figure is included in the right-hand side of the formula. In this case, when removing the contributions of over-counted elements, the number of elements in the mutual intersection of the three sets has been subtracted too often, so must be added back in to get the correct total.
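A quick sanity check of the two- and three-set formulas (a small Python sketch with arbitrarily chosen example sets, not drawn from the article) is shown below:

```python
# Check the two- and three-set inclusion-exclusion formulas against Python's set union.
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {1, 4, 6, 7}

lhs2 = len(A | B)
rhs2 = len(A) + len(B) - len(A & B)
assert lhs2 == rhs2 == 6

lhs3 = len(A | B | C)
rhs3 = (len(A) + len(B) + len(C)
        - len(A & B) - len(A & C) - len(B & C)
        + len(A & B & C))
assert lhs3 == rhs3 == 7
print("two-set:", lhs2, "three-set:", lhs3)
```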
Generalizing the results of these examples gives the principle of inclusion–exclusion. To find the cardinality of the union of sets:
Include the cardinalities of the sets.
Exclude the cardinalities of the pairwise intersections.
Include the cardinalities of the triple-wise intersections.
Exclude the cardinalities of the quadruple-wise intersections.
Include the cardinalities of the quintuple-wise intersections.
Continue, until the cardinality of the n-tuple-wise intersection is included (if n is odd) or excluded (if n is even).
The name comes from the idea that the principle is based on over-generous inclusion, followed by compensating exclusion.
This concept is attributed to Abraham de Moivre (1718), although it first appears in a paper of Daniel da Silva (1854) and later in a paper by J. J. Sylvester (1883). Sometimes the principle is referred to as the formula of Da Silva or Sylvester, due to these publications. The principle can be viewed as an example of the sieve method extensively used in number theory and is sometimes referred to as the sieve formula.
As finite probabilities are computed as counts relative to the cardinality of the probability space, the formulas for the principle of inclusion–exclusion remain valid when the cardinalities of the sets are replaced by finite probabilities. More generally, both versions of the principle can be put under the common umbrella of measure theory.
In a very abstract setting, the principle of inclusion–exclusion can be expressed as the calculation of the inverse of a certain matrix. This inverse has a special structure, making the principle an extremely valuable technique in combinatorics and related areas of mathematics. As Gian-Carlo Rota put it:
"One of the most useful principles of enumeration in discrete probability and combinatorial theory is the celebrated principle of inclusion–exclusion. When skillfully applied, this principle has yielded the solution to many a combinatorial problem."
Formula
In its general formula, the principle of inclusion–exclusion states that for finite sets A1, ..., An, one has the identity
|A1 ∪ A2 ∪ ⋯ ∪ An| = Σ_i |Ai| − Σ_{i<j} |Ai ∩ Aj| + Σ_{i<j<k} |Ai ∩ Aj ∩ Ak| − ⋯ + (−1)^{n+1} |A1 ∩ ⋯ ∩ An|.
This can be compactly written as
|⋃_{i=1}^{n} Ai| = Σ_{k=1}^{n} (−1)^{k+1} Σ_{1 ≤ i1 < ⋯ < ik ≤ n} |A_{i1} ∩ ⋯ ∩ A_{ik}|
or
|⋃_{i=1}^{n} Ai| = Σ_{∅ ≠ J ⊆ {1, ..., n}} (−1)^{|J|+1} |⋂_{j ∈ J} Aj|.
In words, to count the number of elements in a finite union of finite sets, first sum the cardinalities of the individual sets, then subtract the number of elements that appear in at least two sets, then add back the number of elements that appear in at least three sets, then subtract the number of elements that appear in at least four sets, and so on. This process always ends since there can be no elements that appear in more than the number of sets in the union. (For example, if there are n sets, there can be no elements that appear in more than n sets; equivalently, there can be no elements that appear in at least n + 1 sets.)
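The general formula translates directly into a short brute-force routine (a sketch; the function name union_size_by_inclusion_exclusion is invented here) that sums (−1)^{|J|+1} times the size of each non-empty intersection:

```python
# Sketch: compute |A_1 u ... u A_n| by summing signed intersection sizes over all
# non-empty index subsets, and compare with the direct union.
from itertools import combinations
from functools import reduce

def union_size_by_inclusion_exclusion(sets):
    n = len(sets)
    total = 0
    for k in range(1, n + 1):
        sign = (-1) ** (k + 1)
        for idx in combinations(range(n), k):
            inter = reduce(lambda a, b: a & b, (sets[i] for i in idx))
            total += sign * len(inter)
    return total

sets = [{1, 2, 3, 4}, {3, 4, 5, 6}, {1, 4, 6, 7}, {2, 7, 8}]
assert union_size_by_inclusion_exclusion(sets) == len(set().union(*sets))
print(union_size_by_inclusion_exclusion(sets))  # 8
```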
In applications it is common to see the principle expressed in its complementary form. That is, letting be a finite universal set containing all of the and letting denote the complement of in , by De Morgan's laws we have
As another variant of the statement, let be a list of properties that elements of a set may or may not have, then the principle of inclusion–exclusion provides a way to calculate the number of elements of that have none of the properties. Just let be the subset of elements of which have the property and use the principle in its complementary form. This variant is due to J. J. Sylvester.
Notice that if you take into account only the first m sums on the right (in the general form of the principle), then you will get an overestimate if m is odd and an underestimate if m is even.
Examples
Counting derangements
A more complex example is the following.
Suppose there is a deck of n cards numbered from 1 to n. Suppose a card numbered m is in the correct position if it is the mth card in the deck. How many ways, W, can the cards be shuffled with at least 1 card being in the correct position?
Begin by defining set Am, which is all of the orderings of cards with the mth card correct. Then the number of orders, W, with at least one card being in the correct position, m, is
Apply the principle of inclusion–exclusion,
Each value represents the set of shuffles having at least p values m1, ..., mp in the correct position. Note that the number of shuffles with at least p values correct only depends on p, not on the particular values of . For example, the number of shuffles having the 1st, 3rd, and 17th cards in the correct position is the same as the number of shuffles having the 2nd, 5th, and 13th cards in the correct positions. It only matters that of the n cards, 3 were chosen to be in the correct position. Thus there are equal terms in the pth summation (see combination).
is the number of orderings having p elements in the correct position, which is equal to the number of ways of ordering the remaining n − p elements, or (n − p)!. Thus we finally get:
A permutation where no card is in the correct position is called a derangement. Taking n! to be the total number of permutations, the probability Q that a random shuffle produces a derangement is given by
Q = 1 − W/n! = Σ_{p=0}^{n} (−1)^p / p!,
a truncation to n + 1 terms of the Taylor expansion of e^−1. Thus the probability of guessing an order for a shuffled deck of cards and being incorrect about every card is approximately e^−1 or 37%.
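The derangement count can be checked numerically (a short sketch, not from the article; the function names are invented) against an exhaustive enumeration and against n!/e:

```python
# Sketch: count permutations with no fixed point via D_n = sum_p (-1)^p n!/p!,
# compare with a brute-force count, and with n!/e.
from itertools import permutations
from math import factorial, e

def derangements(n):
    return sum((-1) ** p * factorial(n) // factorial(p) for p in range(n + 1))

def brute_force(n):
    return sum(all(v != i for i, v in enumerate(perm))
               for perm in permutations(range(n)))

for n in range(1, 8):
    assert derangements(n) == brute_force(n)
    print(n, derangements(n), round(factorial(n) / e, 2))
```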
A special case
The situation that appears in the derangement example above occurs often enough to merit special attention. Namely, when the size of the intersection sets appearing in the formulas for the principle of inclusion–exclusion depend only on the number of sets in the intersections and not on which sets appear. More formally, if the intersection
has the same cardinality, say αk = |AJ|, for every k-element subset J of {1, ..., n}, then
Or, in the complementary form, where the universal set S has cardinality α0,
Formula generalization
Given a family (repeats allowed) of subsets A1, A2, ..., An of a universal set S, the principle of inclusion–exclusion calculates the number of elements of S in none of these subsets. A generalization of this concept would calculate the number of elements of S which appear in exactly some fixed m of these sets.
Let N = [n] = {1,2,...,n}. If we define , then the principle of inclusion–exclusion can be written as, using the notation of the previous section; the number of elements of S contained in none of the Ai is:
If I is a fixed subset of the index set N, then the number of elements which belong to Ai for all i in I and for no other values is:
Define the sets
We seek the number of elements in none of the Bk which, by the principle of inclusion–exclusion (with ), is
The correspondence K ↔ J = I ∪ K between subsets of N \ I and subsets of N containing I is a bijection and if J and K correspond under this map then BK = AJ, showing that the result is valid.
In probability
In probability, for events A1, ..., An in a probability space, the inclusion–exclusion principle becomes for n = 2
P(A1 ∪ A2) = P(A1) + P(A2) − P(A1 ∩ A2),
for n = 3
P(A1 ∪ A2 ∪ A3) = P(A1) + P(A2) + P(A3) − P(A1 ∩ A2) − P(A1 ∩ A3) − P(A2 ∩ A3) + P(A1 ∩ A2 ∩ A3),
and in general
which can be written in closed form as
P(⋃_{i=1}^{n} Ai) = Σ_{k=1}^{n} (−1)^{k+1} Σ_{I ⊆ {1, ..., n}, |I| = k} P(A_I),
where the last sum runs over all subsets I of the indices 1, ..., n which contain exactly k elements, and
A_I := ⋂_{i ∈ I} Ai
denotes the intersection of all those Ai with index in I.
According to the Bonferroni inequalities, the sum of the first terms in the formula is alternately an upper bound and a lower bound for the LHS. This can be used in cases where the full formula is too cumbersome.
For a general measure space (S,Σ,μ) and measurable subsets A1, ..., An of finite measure, the above identities also hold when the probability measure is replaced by the measure μ.
Special case
If, in the probabilistic version of the inclusion–exclusion principle, the probability of the intersection AI only depends on the cardinality of I, meaning that for every k in {1, ..., n} there is an ak such that
then the above formula simplifies to
due to the combinatorial interpretation of the binomial coefficient . For example, if the events are independent and identically distributed, then for all i, and we have , in which case the expression above simplifies to
(This result can also be derived more simply by considering the intersection of the complements of the events .)
An analogous simplification is possible in the case of a general measure space and measurable subsets of finite measure.
There is another formula used in point processes. Let be a finite set and be a random subset of . Let be any subset of , then
Other formulas
The principle is sometimes stated in the form that says that if
then
The combinatorial and the probabilistic version of the inclusion–exclusion principle are instances of ().
If one sees a number as a set of its prime factors, then () is a generalization of Möbius inversion formula for square-free natural numbers. Therefore, () is seen as the Möbius inversion formula for the incidence algebra of the partially ordered set of all subsets of A.
For a generalization of the full version of Möbius inversion formula, () must be generalized to multisets. For multisets instead of sets, () becomes
where is the multiset for which , and
μ(S) = 1 if S is a set (i.e. a multiset without double elements) of even cardinality.
μ(S) = −1 if S is a set (i.e. a multiset without double elements) of odd cardinality.
μ(S) = 0 if S is a proper multiset (i.e. S has double elements).
Notice that is just the of () in case is a set.
Applications
The inclusion–exclusion principle is widely used and only a few of its applications can be mentioned here.
Counting derangements
A well-known application of the inclusion–exclusion principle is to the combinatorial problem of counting all derangements of a finite set. A derangement of a set A is a bijection from A into itself that has no fixed points. Via the inclusion–exclusion principle one can show that if the cardinality of A is n, then the number of derangements is [n! / e] where [x] denotes the nearest integer to x; a detailed proof is available here and also see the examples section above.
The first occurrence of the problem of counting the number of derangements is in an early book on games of chance: Essai d'analyse sur les jeux de hazard by P. R. de Montmort (1678 – 1719) and was known as either "Montmort's problem" or by the name he gave it, "problème des rencontres." The problem is also known as the hatcheck problem.
The number of derangements is also known as the subfactorial of n, written !n. It follows that if all bijections are assigned the same probability then the probability that a random bijection is a derangement quickly approaches 1/e as n grows.
Counting intersections
The principle of inclusion–exclusion, combined with De Morgan's law, can be used to count the cardinality of the intersection of sets as well. Let represent the complement of Ak with respect to some universal set A such that for each k. Then we have
thereby turning the problem of finding an intersection into the problem of finding a union.
Graph coloring
The inclusion exclusion principle forms the basis of algorithms for a number of NP-hard graph partitioning problems, such as graph coloring.
A well known application of the principle is the construction of the chromatic polynomial of a graph.
Bipartite graph perfect matchings
The number of perfect matchings of a bipartite graph can be calculated using the principle.
Number of onto functions
Given finite sets A and B, how many surjective functions (onto functions) are there from A to B? Without any loss of generality we may take A = {1, ..., k} and B = {1, ..., n}, since only the cardinalities of the sets matter. By using S as the set of all functions from A to B, and defining, for each i in B, the property Pi as "the function misses the element i in B" (i is not in the image of the function), the principle of inclusion–exclusion gives the number of onto functions between A and B as:
Σ_{i=0}^{n} (−1)^i C(n, i) (n − i)^k.
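A small sketch (function names invented here) evaluates this formula and verifies it against a brute-force count over all functions from a k-element set to an n-element set:

```python
# Sketch: count surjections via sum_i (-1)^i C(n,i) (n-i)^k, checked by brute force.
from itertools import product
from math import comb

def surjections(k, n):
    return sum((-1) ** i * comb(n, i) * (n - i) ** k for i in range(n + 1))

def brute_force(k, n):
    return sum(set(f) == set(range(n)) for f in product(range(n), repeat=k))

for k, n in [(3, 2), (4, 3), (5, 3), (5, 4)]:
    assert surjections(k, n) == brute_force(k, n)
    print(k, n, surjections(k, n))
```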
Permutations with forbidden positions
A permutation of the set S = {1, ..., n} where each element of S is restricted to not being in certain positions (here the permutation is considered as an ordering of the elements of S) is called a permutation with forbidden positions. For example, with S = {1,2,3,4}, the permutations with the restriction that the element 1 can not be in positions 1 or 3, and the element 2 can not be in position 4 are: 2134, 2143, 3124, 4123, 2341, 2431, 3241, 3421, 4231 and 4321. By letting Ai be the set of positions that the element i is not allowed to be in, and the property Pi to be the property that a permutation puts element i into a position in Ai, the principle of inclusion–exclusion can be used to count the number of permutations which satisfy all the restrictions.
In the given example, there are 12 = 2(3!) permutations with property P1, 6 = 3! permutations with property P2 and no permutations have properties P3 or P4 as there are no restrictions for these two elements. The number of permutations satisfying the restrictions is thus:
4! − (12 + 6 + 0 + 0) + (4) = 24 − 18 + 4 = 10.
The final 4 in this computation is the number of permutations having both properties P1 and P2. There are no other non-zero contributions to the formula.
Stirling numbers of the second kind
The Stirling numbers of the second kind, S(n,k) count the number of partitions of a set of n elements into k non-empty subsets (indistinguishable boxes). An explicit formula for them can be obtained by applying the principle of inclusion–exclusion to a very closely related problem, namely, counting the number of partitions of an n-set into k non-empty but distinguishable boxes (ordered non-empty subsets). Using the universal set consisting of all partitions of the n-set into k (possibly empty) distinguishable boxes, A1, A2, ..., Ak, and the properties Pi meaning that the partition has box Ai empty, the principle of inclusion–exclusion gives an answer for the related result. Dividing by k! to remove the artificial ordering gives the Stirling number of the second kind:
S(n, k) = (1/k!) Σ_{i=0}^{k} (−1)^i C(k, i) (k − i)^n.
Rook polynomials
A rook polynomial is the generating function of the number of ways to place non-attacking rooks on a board B that looks like a subset of the squares of a checkerboard; that is, no two rooks may be in the same row or column. The board B is any subset of the squares of a rectangular board with n rows and m columns; we think of it as the squares in which one is allowed to put a rook. The coefficient, rk(B) of xk in the rook polynomial RB(x) is the number of ways k rooks, none of which attacks another, can be arranged in the squares of B. For any board B, there is a complementary board consisting of the squares of the rectangular board that are not in B. This complementary board also has a rook polynomial with coefficients
It is sometimes convenient to be able to calculate the highest coefficient of a rook polynomial in terms of the coefficients of the rook polynomial of the complementary board. Without loss of generality we can assume that n ≤ m, so this coefficient is rn(B). The number of ways to place n non-attacking rooks on the complete n × m "checkerboard" (without regard as to whether the rooks are placed in the squares of the board B) is given by the falling factorial:
Letting Pi be the property that an assignment of n non-attacking rooks on the complete board has a rook in column i which is not in a square of the board B, then by the principle of inclusion–exclusion we have:
Euler's phi function
Euler's totient or phi function, φ(n) is an arithmetic function that counts the number of positive integers less than or equal to n that are relatively prime to n. That is, if n is a positive integer, then φ(n) is the number of integers k in the range 1 ≤ k ≤ n which have no common factor with n other than 1. The principle of inclusion–exclusion is used to obtain a formula for φ(n). Let S be the set {1, ..., n} and define the property Pi to be that a number in S is divisible by the prime number pi, for 1 ≤ i ≤ r, where the prime factorization of n is n = p1^{a1} p2^{a2} ⋯ pr^{ar}.
Then,
φ(n) = n (1 − 1/p1)(1 − 1/p2) ⋯ (1 − 1/pr) = n Π_{i=1}^{r} (1 − 1/pi).
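The same inclusion–exclusion computation over the distinct prime factors can be carried out directly (a sketch; helper names invented here) and compared with a direct count using gcd:

```python
# Sketch: Euler's phi via inclusion-exclusion over distinct prime factors of n.
from itertools import combinations
from math import gcd, prod

def prime_factors(n):
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def phi(n):
    ps = prime_factors(n)
    total = 0
    for k in range(len(ps) + 1):
        for subset in combinations(ps, k):
            total += (-1) ** k * (n // prod(subset))
    return total

for n in [1, 12, 30, 100, 97]:
    assert phi(n) == sum(gcd(n, k) == 1 for k in range(1, n + 1))
    print(n, phi(n))
```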
Dirichlet hyperbola method
The Dirichlet hyperbola method re-expresses a sum of a multiplicative function by selecting a suitable Dirichlet convolution , recognizing that the sum
can be recast as a sum over the lattice points in a region bounded by , , and , splitting this region into two overlapping subregions, and finally using the inclusion–exclusion principle to conclude that
Diluted inclusion–exclusion principle
In many cases where the principle could give an exact formula (in particular, counting prime numbers using the sieve of Eratosthenes), the formula arising does not offer useful content because the number of terms in it is excessive. If each term individually can be estimated accurately, the accumulation of errors may imply that the inclusion–exclusion formula is not directly applicable. In number theory, this difficulty was addressed by Viggo Brun. After a slow start, his ideas were taken up by others, and a large variety of sieve methods developed. These for example may try to find upper bounds for the "sieved" sets, rather than an exact formula.
Let A1, ..., An be arbitrary sets and p1, ..., pn real numbers in the closed unit interval . Then, for every even number k in {0, ..., n}, the indicator functions satisfy the inequality:
Proof of main statement
Choose an element x contained in the union of all sets and let A_1, ..., A_t be the individual sets containing it (relabeling if necessary). (Note that t > 0.) Since the element is counted precisely once by the left-hand side of the inclusion–exclusion formula, we need to show that it is counted precisely once by the right-hand side. On the right-hand side, the only non-zero contributions occur when all the subsets in a particular term contain the chosen element, that is, all the subsets are selected from A_1, ..., A_t. The contribution is one for each of these sets (plus or minus depending on the term) and therefore is just the (signed) number of these subsets used in the term. We then have:
C(t, 1) - C(t, 2) + C(t, 3) - ... + (-1)^(t+1) C(t, t)
as the total count of the chosen element on the right-hand side, where C(t, k) denotes the binomial coefficient. By the binomial theorem,
0 = (1 - 1)^t = C(t, 0) - C(t, 1) + C(t, 2) - ... + (-1)^t C(t, t).
Using the fact that C(t, 0) = 1 and rearranging terms, we have
C(t, 1) - C(t, 2) + C(t, 3) - ... + (-1)^(t+1) C(t, t) = 1,
and so, the chosen element is counted only once by the right-hand side of the formula.
Algebraic proof
An algebraic proof can be obtained using indicator functions (also known as characteristic functions). The indicator function of a subset S of a set X is the function 1_S : X → {0, 1} with 1_S(x) = 1 if x ∈ S and 1_S(x) = 0 otherwise.
If A and B are two subsets of X, then 1_A · 1_B = 1_{A ∩ B}.
Let A denote the union of the sets A_1, ..., A_n. To prove the inclusion–exclusion principle in general, we first verify the identity
1_A = Σ_{k=1}^{n} (-1)^(k-1) Σ_{1 ≤ i_1 < ... < i_k ≤ n} 1_{A_{i_1} ∩ ... ∩ A_{i_k}}
for indicator functions.
The following function
(1_A - 1_{A_1})(1_A - 1_{A_2}) ... (1_A - 1_{A_n})
is identically zero because: if x is not in A, then all factors are 0 - 0 = 0; and otherwise, if x does belong to some A_m, then the corresponding mth factor is 1 - 1 = 0. By expanding the product on the left-hand side, the identity follows.
To prove the inclusion–exclusion principle for the cardinality of sets, sum the identity above over all x in the union of A_1, ..., A_n. To derive the version used in probability, take the expectation of the identity. In general, integrate the identity with respect to a measure μ. Always use linearity in these derivations.
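A small numerical check in Python of the set-cardinality version obtained by this summation, on three made-up finite sets.

    from functools import reduce
    from itertools import combinations

    sets = [{1, 2, 3}, {2, 3, 4}, {3, 5}]
    total = 0
    for k in range(1, len(sets) + 1):
        for combo in combinations(sets, k):
            # Add or subtract the size of the k-fold intersection.
            total += (-1) ** (k + 1) * len(reduce(lambda a, b: a & b, combo))
    assert total == len(set().union(*sets))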
See also
Notes
References
Enumerative combinatorics
Probability theory
Articles containing proofs
Mathematical principles
Abraham de Moivre | Inclusion–exclusion principle | [
"Mathematics"
] | 4,573 | [
"Mathematical principles",
"Articles containing proofs",
"Enumerative combinatorics",
"Combinatorics"
] |
342,776 | https://en.wikipedia.org/wiki/8SVX | 8-Bit Sampled Voice (8SVX) is an audio file format standard developed by Electronic Arts for the Amiga computer series. It is a data subtype of the IFF file container format. It typically contains linear pulse-code modulation (LPCM) digital audio.
Description
The 8SVX subtype stores 8-bit audio data within chunks contained within an IFF file container. 8SVX subtypes can exist alone within IFF file containers (audio only), or can be multiplexed together with other IFF subtypes, such as video animation streams.
Metadata about the 8SVX data stream is contained in separate descriptor chunks that come prior to the main data body chunk. Sample rate, volume and compression type are described in a VHDR chunk. Various other chunks are available to describe the name, author and copyright.
8SVX supports features such as attack, release and section repeat, which are useful for storage of musical instrument samples.
An example layout of an audio-only 8SVX IFF audio file:
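A minimal sketch in Python of writing such a file, assuming the chunk order FORM / VHDR / NAME / BODY and the Voice8Header field order described in the published specification; the sample data, rate, and file name are illustrative, not taken from a real recording.

    import struct

    def chunk(ckid, data):
        # An IFF chunk: 4-byte ID, 32-bit big-endian size, data, plus a pad
        # byte (not counted in the size) if the data length is odd.
        out = ckid + struct.pack(">I", len(data)) + data
        if len(data) % 2:
            out += b"\x00"
        return out

    samples = bytes(1000)                      # 1000 bytes of 8-bit silence
    vhdr = struct.pack(">IIIHBBI",
                       len(samples),           # oneShotHiSamples
                       0,                      # repeatHiSamples
                       0,                      # samplesPerHiCycle
                       8000,                   # samplesPerSec
                       1,                      # ctOctave
                       0,                      # sCompression: 0 = uncompressed
                       0x10000)                # volume (fixed point, 1.0)
    payload = chunk(b"VHDR", vhdr) + chunk(b"NAME", b"example tone") + chunk(b"BODY", samples)
    with open("example.8svx", "wb") as f:
        f.write(chunk(b"FORM", b"8SVX" + payload))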
Encoding
The majority of 8SVX data streams are encoded using uncompressed linear PCM streams. Optionally, Fibonacci-delta lossy data compression is also available, resulting in a 50% compression ratio at the cost of decreased fidelity. Multi-byte values are stored in big-endian format, the native byte order for the Motorola 68000 family.
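A sketch in Python of Fibonacci-delta decoding, assuming the 16-entry delta table and high-nibble-first packing described in the format's specification; the clamp to the signed 8-bit range is a defensive assumption rather than part of the codec.

    # Delta table as published for the 8SVX Fibonacci-delta codec.
    FIB_DELTAS = [-34, -21, -13, -8, -5, -3, -2, -1, 0, 1, 2, 3, 5, 8, 13, 21]

    def fibonacci_delta_decode(data, initial=0):
        # Each compressed byte carries two 4-bit codes (high nibble first);
        # each code indexes the delta table, and the delta is added to a
        # running sample value.
        out, value = [], initial
        for byte in data:
            for code in ((byte >> 4) & 0x0F, byte & 0x0F):
                value = max(-128, min(127, value + FIB_DELTAS[code]))  # defensive clamp
                out.append(value)
        return out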
Support
IFF-8SVX encoded audio was the default audio format for the Commodore Amiga. Most audio programs for the Amiga supported the format. AmigaOS 3.0 introduced a multimedia framework using the datatype subsystem that included an 8SVX decoder (8SVX.datatype).
Many sound editing programs and music tracker programs of the late 1980s and early 1990s supported the format. It is still a common format for cross-platform audio editing programs (such as Sound eXchange).
8SVX support is also available to modern programs via libavcodec (and the related ffdshow codec package) as well as via libsndfile.
Legacy
The Commodore Amiga computer series never received native hardware support for 16-bit digital audio before the decline of the platform. As such, the related 16SVX and MAUD subtypes never saw wide adoption.
Apple Computer developed a separate subtype known as AIFF which included support for 16-bit samples and additional compression types. It superseded 8SVX as the dominant audio subtype for IFF files.
Microsoft and IBM co-developed the RIFF file container and the related WAVE audio subtype for Windows. Both formats are heavily influenced by the IFF/8SVX container format, but like AIFF, were extended to support higher bit-depths and additional compression types.
See also
AIFF
IFF File Format
WAV
Paula, the digital audio processor for the Commodore Amiga computer
References
External links
IFF file container and subtypes at Multimedia Wiki
IFF chunk registry at Amigan Software
Amiga file formats
Computer file formats
AmigaOS | 8SVX | [
"Technology"
] | 631 | [
"AmigaOS",
"Computing platforms"
] |
342,815 | https://en.wikipedia.org/wiki/Toyota%20Production%20System | The Toyota Production System (TPS) is an integrated socio-technical system, developed by Toyota, that comprises its management philosophy and practices. The TPS is a management system that organizes manufacturing and logistics for the automobile manufacturer, including interaction with suppliers and customers. The system is a major precursor of the more generic "lean manufacturing". Taiichi Ohno and Eiji Toyoda, Japanese industrial engineers, developed the system between 1948 and 1975.
Originally called "just-in-time production", it builds on the approach created by the founder of Toyota, Sakichi Toyoda, his son Kiichiro Toyoda, and the engineer Taiichi Ohno. The principles underlying the TPS are embodied in The Toyota Way.
Goals
The main objectives of the TPS are to design out overburden (muri) and inconsistency (mura), and to eliminate waste (muda). The most significant effects on process value delivery are achieved by designing a process capable of delivering the required results smoothly; by designing out "mura" (inconsistency). It is also crucial to ensure that the process is as flexible as necessary without stress or "muri" (overburden) since this generates "muda" (waste). Finally the tactical improvements of waste reduction or the elimination of muda are very valuable. There are eight kinds of muda that are addressed in the TPS:
Waste of overproduction (largest waste)
Waste of time on hand (waiting)
Waste of transportation
Waste of processing itself
Waste of excess inventory
Waste of movement
Waste of making defective products
Waste of underutilized workers
Concept
Toyota Motor Corporation published an official description of TPS for the first time in 1992; this booklet was revised in 1998. In the foreword it was said: "The TPS is a framework for conserving resources by eliminating waste. People who participate in the system learn to identify expenditures of material, effort and time that do not generate value for customers and furthermore we have avoided a 'how-to' approach. The booklet is not a manual. Rather it is an overview of the concepts, that underlie our production system. It is a reminder that lasting gains in productivity and quality are possible whenever and wherever management and employees are united in a commitment to positive change". TPS is grounded on two main conceptual pillars:
Just-in-time – meaning "Making only what is needed, only when it is needed, and only in the amount that is needed"
Jidoka – (Autonomation) meaning "Automation with a human touch"
Toyota has developed various tools to transfer these concepts into practice and apply them to specific requirements and conditions in the company and business.
Origins
Toyota has long been recognized as a leader in the automotive manufacturing and production industry.
Toyota received their inspiration for the system, not from the American automotive industry (at that time the world's largest by far), but from visiting a supermarket. The idea of just-in-time production was originated by Kiichiro Toyoda, founder of Toyota. The question was how to implement the idea. In reading descriptions of American supermarkets, Ohno saw the supermarket as the model for what he was trying to accomplish in the factory. A customer in a supermarket takes the desired amount of goods off the shelf and purchases them. The store restocks the shelf with enough new product to fill up the shelf space. Similarly, a work-center that needed parts would go to a "store shelf" (the inventory storage point) for the particular part and "buy" (withdraw) the quantity it needed, and the "shelf" would be "restocked" by the work-center that produced the part, making only enough to replace the inventory that had been withdrawn.
While low inventory levels are a key outcome of the System, an important element of the philosophy behind its system is to work intelligently and eliminate waste so that only minimal inventory is needed. Many Western businesses, having observed Toyota's factories, set out to attack high inventory levels directly without understanding what made these reductions possible. The act of imitating without understanding the underlying concept or motivation may have led to the failure of those projects.
Principles
The underlying principles, called the Toyota Way, have been outlined by Toyota as follows:
Continuous improvement
Challenge (We form a long-term vision, meeting challenges with courage and creativity to realize our dreams.)
Kaizen (We improve our business operations continuously, always driving for innovation and evolution.)
Genchi Genbutsu (Go to the source to find the facts to make correct decisions.)
Respect for people
Respect (We respect others, make every effort to understand each other, take responsibility and do our best to build mutual trust.)
Teamwork (We stimulate personal and professional growth, share the opportunities of development and maximize individual and team performance.)
External observers have summarized the principles of the Toyota Way as:
The right process will produce the right results
Create continuous process flow to bring problems to the surface.
Use the "pull" system to avoid overproduction.
Level out the workload (heijunka). (Work like the tortoise, not the hare.)
Build a culture of stopping to fix problems, to get quality right from the start. (Jidoka)
Standardized tasks are the foundation for continuous improvement and employee empowerment.
Use visual control so no problems are hidden.
Use only reliable, thoroughly tested technology that serves your people and processes.
Add value to the organization by developing your people and partners
Grow leaders who thoroughly understand the work, live the philosophy, and teach it to others.
Develop exceptional people and teams who follow your company's philosophy.
Respect your extended network of partners and suppliers by challenging them and helping them improve.
Continuously solving root problems drives organizational learning
Go and see for yourself to thoroughly understand the situation (Genchi Genbutsu, 現地現物);
Make decisions slowly by consensus, thoroughly considering all options (Nemawashi, 根回し); implement decisions rapidly;
Become a learning organization through relentless reflection (Hansei, 反省) and continuous improvement and never stop (Kaizen, 改善).
What this means is that it is a system for thorough waste elimination. Here, waste refers to anything which does not advance the process, everything that does not increase added value. Many people settle for eliminating the waste that everyone recognizes as waste. But much remains that simply has not yet been recognized as waste or that people are willing to tolerate.
People had resigned themselves to certain problems, had become hostage to routine and abandoned the practice of problem-solving. This going back to basics, exposing the real significance of problems and then making fundamental improvements, can be witnessed throughout the Toyota Production System.
The principles of the Toyota Production System have been compared to production methods in the industrialization of construction.
Sharing
Toyota originally began sharing TPS with its parts suppliers in the 1990s. Because of interest in the program from other organizations, Toyota began offering instruction in the methodology to others. Toyota has even "donated" its system to charities, providing its engineering staff and techniques to non-profits in an effort to increase their efficiency and thus ability to serve people. For example, Toyota assisted the Food Bank For New York City to significantly decrease waiting times at soup kitchens, packing times at a food distribution center, and waiting times in a food pantry. Toyota announced on June 29, 2011 the launch of a national program to donate its Toyota Production System expertise to nonprofit organizations with the goal of improving their operations, extending their reach, and increasing their impact. By September, less than three months later, SBP, a disaster relief organization based out of New Orleans, reported that the time to rebuild a home had fallen from 12–18 weeks to 6 weeks. Additionally, employing Toyota methods (like kaizen) had reduced construction errors by 50 percent. The company included SBP among its first 20 community organizations, along with AmeriCorps.
Workplace Management
Taiichi Ohno's Workplace Management (2007) outlines in 38 chapters how to implement the TPS. Some important concepts are:
Chapter 1 Wise Mend Their Ways - See the Analects of Confucius for further information.
Chapter 4 Confirm Failures With Your Own Eyes
Chapter 11 Wasted Motion Is Not Work
Chapter 15 Just In Time - Phrase invented by Kiichiro Toyoda - the first president of Toyota. There is disagreement over what "just in time" really means in English translation. Quoted in the book, Taiichi Ohno says "'Just In Time' should be interpreted to mean that it is a problem when parts are delivered too early".
Chapter 23 How To Produce At A Lower Cost - "One of the main fundamentals of the Toyota System is to make 'what you need, in the amount you need, by the time you need it', but to tell the truth there is another part to this and that is 'at lower cost'. But that part is not written down." World economies, events, and each individual job also play a part in production specifics.
Commonly used terminology
Andon (行灯) (English: A large lighted board used to alert floor supervisors to a problem at a specific station. Literally: Signboard)
Chaku-Chaku (着々 or 着着) (English: Load-Load)
Gemba (現場) (English: The actual place, the place where the real work is done; On site)
Genchi Genbutsu (現地現物) (English: Go and see for yourself)
Hansei (反省) (English: Self-reflection)
Heijunka (平準化) (English: Production Smoothing)
Jidoka (自働化) (English: Autonomation - automation with human intelligence)
Just-in-Time (ジャストインタイム "Jasutointaimu") (JIT)
Kaizen (改善) (English: Continuous Improvement)
Kanban (看板, also かんばん) (English: Sign, Index Card)
Manufacturing supermarket where all components are available to be withdrawn by a process
Muda (無駄, also ムダ) (English: Waste)
Mura (斑 or ムラ) (English: Unevenness)
Muri (無理) (English: Overburden)
Nemawashi (根回し) (English: Laying the groundwork, building consensus, literally: Going around the roots)
Obeya (大部屋) (English: Manager's meeting. Literally: Large room, war room, council room)
Poka-yoke (ポカヨケ) (English: fail-safing, bulletproofing - to avoid (yokeru) inadvertent errors (poka))
Seibi (English: To Prepare)
Seiri (整理) (English: Sort, removing whatever isn't necessary.)
Seiton (整頓) (English: Organize)
Seiso (清掃) (English: Clean and inspect)
Seiketsu (清潔) (English: Standardize)
Shitsuke (躾) (English: Sustain)
See also
Lean construction
W. Edwards Deming
Training Within Industry
Production flow analysis
Industrial engineering
References
Bibliography
Emiliani, B., with Stec, D., Grasso, L. and Stodder, J. (2007), Better Thinking, Better Results: Case Study and Analysis of an Enterprise-Wide Lean Transformation, second edition, The CLBM, LLC, Kensington, Conn.
Liker, Jeffrey (2003), The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer, First edition, McGraw-Hill.
Monden, Yasuhiro (1998), Toyota Production System, An Integrated Approach to Just-In-Time, Third edition, Norcross, GA: Engineering & Management Press.
Spear, Steven, and Bowen, H. Kent (September 1999), "Decoding the DNA of the Toyota Production System," Harvard Business Review
Womack, James P. and Jones, Daniel T. (2003), Lean Thinking: Banish Waste and Create Wealth in Your Corporation, Revised and Updated, HarperBusiness.
Womack, James P., Jones, Daniel T., and Roos, Daniel (1991), The Machine That Changed the World: The Story of Lean Production, HarperBusiness.
External links
Toyota Production System
History of the TPS at the Toyota Motor Manufacturing Kentucky Site
Toyota Production System Terms
Article: Lean Primer: Introduction
Lean manufacturing
Toyota Production System | Toyota Production System | [
"Engineering"
] | 2,556 | [
"Lean manufacturing"
] |
342,837 | https://en.wikipedia.org/wiki/Corning%20Inc. | Corning Incorporated is an American multinational technology company that specializes in specialty glass, ceramics, and related materials and technologies including advanced optics, primarily for industrial and scientific applications. The company was named Corning Glass Works until 1989. Corning divested its consumer product lines (including CorningWare and Visions Pyroceram-based cookware, Corelle Vitrelle tableware, and Pyrex glass bakeware) in 1998 by selling the Corning Consumer Products Company subsidiary (later Corelle Brands) to Borden.
Corning had five major business sectors: display technologies, environmental technologies, life sciences, optical communications, and specialty materials. Corning is involved in two joint ventures: Dow Corning and Pittsburgh Corning. The company completed the corporate spin-offs of Quest Diagnostics and Covance (now Fortrea) in January 1997. Corning is one of the main suppliers to Apple Inc. Since working with Steve Jobs in 2007 to develop the iPhone, Corning has developed and manufactured Gorilla Glass, which is used by many smartphone makers. It is one of the world's biggest glassmakers. Corning won the National Medal of Technology and Innovation four times for its product and process innovations.
Corning continues to maintain its world headquarters at Corning, N.Y. The firm also established one of the first industrial research labs there in 1908. It continues to expand the nearby research and development facility, as well as operations associated with catalytic converters and diesel engine filter product lines. Corning has a long history of community development and has assured community leaders that it intends to remain headquartered in its small upstate New York hometown.
History
Corning Glass Works was founded in 1851 by Amory Houghton, in Somerville, Massachusetts, originally as the Bay State Glass Co. It later moved to Williamsburg, Brooklyn, and operated as the Brooklyn Flint Glass Works. The company moved again to its ultimate home and eponym, the city of Corning, New York, in 1868, under leadership of the founder's son, Amory Houghton, Jr.
In 1915, Corning created an improved heat-resistant glass formula and launched Pyrex, the first consumer cooking products made with temperature-resistant glass.
The California Institute of Technology's telescope mirror at Palomar Observatory was cast by Corning during 1934–1936, out of low expansion borosilicate glass. In 1932, George Ellery Hale approached Corning with the challenge of fabricating the required optic for his Palomar project. A previous effort to fabricate the optic from fused quartz had failed. Corning's first attempt was a failure, the cast blank having voids. Using lessons learned, Corning was successful in the casting of the second blank. After a year of cooling, during which it was almost lost to a flood, in 1935, the blank was completed. The first blank now resides in Corning's Museum of Glass.
In 1935, Corning formed a partnership with bottle maker Owens-Illinois, which formed the company known today as Owens Corning. Owens Corning was spun off as a separate company in 1938.
The company had a history of science-based innovations following World War II and the strategy by management was research and "disruptive" and "on demand" product innovation.
In 1962, Corning developed Chemcor, a new toughened automobile windshield designed to be thinner and lighter than existing windshields, which reduced danger of personal injury by shattering into small granules when smashed. This toughened glass had a chemically hardened outer layer, and its manufacture incorporated an ion exchange and a "fusion process" in special furnaces that Corning built in its Christiansburg, Virginia facility. Corning developed it as an alternative to laminated windshields with the intention of becoming an automotive industry supplier. After being installed as side glass in a limited run of 1968 Plymouth Barracudas and Dodge Darts, Chemcor windshields debuted on the 1970 model year Javelins and AMXs built by American Motors Corporation (AMC). As there were no mandatory safety standards for motor vehicle windshields, the larger automakers had no financial incentive to change from the cheaper existing products. Corning terminated its windshield project in 1971, after it turned out to be one of the company's "biggest and most expensive failures." However, like many Corning innovations, the unique process to manufacture this automotive glass was resurrected and is today the basis of their very profitable LCD glass business.
In the fall of 1970, the company announced that researchers Robert D. Maurer, Donald Keck, Peter C. Schultz, and Frank Zimar had demonstrated an optical fiber with a low optical attenuation of 17 dB per kilometer by doping silica glass with titanium. A few years later they produced a fiber with only 4 dB/km, using germanium oxide as the core dopant. Such low attenuations made fiber optics practical for telecommunications and networking. Corning became the world's leading manufacturer of optical fiber.
In 1977, considerable attention was given to Corning's Z Glass project. Z Glass was a product used in television picture tubes. Due to a number of factors, the exact nature of which are subject to dispute, this project was considered a steep loss in profit and productivity. The following year the project made a partial recovery. This incident has been cited as a case study by the Harvard School of Business.
In 1998, the kitchenware division of Corning Inc. responsible for the development of Pyrex spun off from its parent company as Corning Consumer Products Company, subsequently renamed Corelle Brands. Corning Inc. no longer manufactures or markets consumer products, only industrial ones.
Company profits soared in the late 1990s during the dot-com boom, and Corning expanded its fiber operations significantly through the acquisition of telecommunications company Oak Industries and building several new plants. The company also entered the photonics market, investing heavily with the intent of becoming the leading provider of complete fiber-optic systems. Failure to succeed in photonics and the collapse in 2000 of the dot-com market had a major impact on the company, and Corning stock plummeted to $1 per share. However, the company had posted five straight years of improving financial performance.
Technologies
The turning point for Corning came when Apple approached it to develop a robust display screen for its upcoming iPhone. Later, other companies also adopted its Gorilla Glass screen. In 2011, Corning announced the expansion of existing facilities and the construction of a Gen 10 facility co-located with the Sharp Corporation manufacturing complex in Sakai, Osaka, Japan. The LCD glass substrate is produced without heavy metals. Corning is a leading manufacturer of the glass used in liquid crystal displays.
The company continues to produce optical fiber and cable for the communications industry at its Wilmington and Concord plants in North Carolina. It is also a major manufacturer of ceramic emission control devices for catalytic converters in cars and light trucks that use gasoline engines. The company is also investing in the production of ceramic emission control products for diesel engines as a result of tighter emission standards for those engines both in the U.S. and abroad.
In 2007, Corning introduced an optic fiber, ClearCurve, which uses nanostructure technology to facilitate the small radius bending found in FTTX installations.
Gorilla Glass, an outgrowth of the 1960s Chemcor project, is a high-strength alkali-aluminosilicate thin sheet glass used as a protective cover glass offering scratch resistance and durability in many touchscreens. According to the book Steve Jobs by Walter Isaacson, Gorilla Glass was used in the first iPhone released in 2007.
On October 25, 2011, Corning unveiled Lotus Glass, an environmentally friendly and high-performance glass developed for OLED and LCD displays.
Corning invests about 10% of revenue in research and development, and has allocated US$300 million towards further expansion of its Sullivan Park research facility near headquarters in Corning, New York.
Corning Incorporated manufactures a high-purity fused silica employed in microlithography systems, a low expansion glass utilized in the construction of reflective mirror blanks, windows for U.S. Space Shuttles, and Steuben art glass. The number of Corning facilities employing the traditional tanks of molten glass has declined over the years, but it maintains the capacity to supply bulk or finished glass of many types.
Corning is engaged in research and development on green lasers, mercury abatement, microreactors, photovoltaics, and silicon on glass. Through its Life Sciences division, the company offers products to support life science research, including stem-cell culture products.
In September 2019, Apple announced that it would invest $250 million in Corning, in an effort to develop and manufacture the glass needed for many of its products, including the iPhone, Apple Watch, and iPad. Though not confirmed by either company, the investment could be used to develop new products in the future. Apple had already invested $200 million in Corning in 2017.
In November 2024, The European Commission announced that Corning Inc. was under investigation for potential antitrust violations related to exclusive supply agreements with mobile phone manufacturers and raw glass processors, which may hinder competition in the specialty glass market.
Other activities
Corning employs roughly 61,200 people worldwide and had sales of $14.08 billion in 2021. The company has been listed for many years among Fortune magazine's 500 largest companies, and was ranked #297 in 2015.
Although the company has long been publicly owned, James R. Houghton, great-great-grandson of the founder, served as chairman of the board of directors from 2001 to 2007. Over the years Houghton family ownership has declined to about 2%. Wendell P. Weeks has been with the company since 1983 and was chairman, chief executive officer, and president.
Over its 160-year history Corning invented a process for rapid and inexpensive production of light bulbs, including developing the glass for Thomas Edison's light bulb. Corning was the glass supplier for lightbulbs for General Electric after Edison General Electric merged with Thomson-Houston Electric Company in 1892. It was an early major manufacturer of glass panels and funnels for television tubes, invented and produced Vycor (high temperature glass with high thermal shock resistance). Corning invented and produced Pyrex, CorningWare and Visions Pyroceram glass-ceramic cookware, and Corelle durable glass dinnerware. Corning manufactured the windows for US crewed space vehicles, and supplied the glass blank for the primary mirror in the Hubble Space Telescope.
In 1982, Corning launched Chameleon and Serengeti sunglasses at retail, featuring the exclusive combination of photochromic and Spectral Control technologies in the lenses.
In July 2008, Corning announced the sale of Steuben Glass Works to Steuben Glass LLC, an affiliate of the private equity firm Schottenstein Stores Corporation. Steuben Glass had been unprofitable for more than a decade, losing 30 million dollars over the previous five years.
In February 2011, Corning acquired MobileAccess Networks, an Israeli company that develops Distributed antenna systems, which are often used by universities, stadiums and airports to ensure seamless wireless coverage throughout a facility. MobileAccess Networks became part of Corning's telecommunications business unit. In July 2017, Corning acquired SpiderCloud Wireless. In December 2017, Corning acquired all of 3M Communication Market Division, in a cash transaction approximately $900 million. Acquisition closed during 2018; 3M Communication Market Division became part of Corning Optical Communications business unit.
Board of directors
Donald W. Blair: retired executive vice president and chief financial officer, NIKE, Inc.
Leslie A. Brun: chairman and chief executive officer, Sarr Group
Richard T. Clark: retired chairman, president and chief executive officer, Merck & Co., Inc.
Pamela J. Craig: retired chief financial officer, Accenture plc.
Robert F. Cummings, Jr.: retired vice chairman of investment banking, JPMorgan Chase & Co.
Roger W. Ferguson Jr.: Steven A. Tananbaum Distinguished Fellow for International Economics, Council on Foreign Relations
Thomas D. French: senior partner emeritus, McKinsey & Company, Inc.
Deborah A. Henretta: retired group president of global e-business, Procter & Gamble Company
Daniel P. Huttenlocher: dean, MIT
Kurt M. Landgraf: retired president and chief executive officer, Educational Testing Service
Kevin Martin: vice president, US public policy, Meta Platforms, Inc.
Deborah D. Rieman: retired executive chairman, MetaMarkets Group
Hansel E. Tookes II: retired chairman and chief executive officer, Raytheon Aircraft Company
Wendell P. Weeks: chairman, chief executive officer, and president, Corning Incorporated
Mark S. Wrighton: professor of chemistry and chancellor emeritus, Washington University in St. Louis
See also
Corelle Brands LLC, the later name adopted by the Corning Consumer Products Company subsidiary that was sold to Borden in 1998, before it merged with Instant Brands in 2019.
Corning Museum of Glass
City of Corning, NY
Houghton family
Macor, a machineable glass-ceramic developed by Corning
Overflow downdraw method, a technology applied by Corning Incorporated for producing flat panel displays
References
Further reading
External links
1851 establishments in Massachusetts
American brands
American companies established in 1851
Ceramics manufacturers of the United States
Companies listed on the New York Stock Exchange
Computer companies of the United States
Computer hardware companies
Corning, New York
Glassmaking companies of the United States
Manufacturing companies based in New York (state)
Manufacturing companies established in 1851
Networking hardware companies
Photonics companies
Technology companies established in 1851
Wire and cable manufacturers
Optics manufacturing companies of the United States | Corning Inc. | [
"Technology"
] | 2,811 | [
"Computer hardware companies",
"Computers"
] |
342,851 | https://en.wikipedia.org/wiki/Bilayer | A bilayer is a double layer of closely packed atoms or molecules.
The properties of bilayers are often studied in condensed matter physics, particularly in the context of semiconductor devices, where two distinct materials are united to form junctions, such as p–n junctions, Schottky junctions, etc. Layered materials, such as graphene, boron nitride, or transition metal dichalcogenides, have unique electronic properties as bilayer systems and are an active area of current research.
In biology, a common example is the lipid bilayer, which describes the structure of multiple organic structures, such as the membrane of a cell.
See also
Monolayer
Non-carbon nanotube
Semiconductor
Thin film
References
Phases of matter
Thin films | Bilayer | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 152 | [
"Materials science stubs",
"Planes (geometry)",
"Phases of matter",
"Materials science",
"Nanotechnology stubs",
"Condensed matter physics",
"Nanotechnology",
"Condensed matter stubs",
"Thin films",
"Matter"
] |
342,878 | https://en.wikipedia.org/wiki/Standard%20Delay%20Format | Standard Delay Format (SDF) is an IEEE standard for the representation and interpretation of timing data for use at any stage of an electronic design process. It finds wide applicability in design flows, and forms an efficient bridge between dynamic timing analysis and static timing analysis.
It was originally developed as an OVI standard, and later modified into an IEEE standard. Technically, only SDF versions 4.0 onward are IEEE formats.
It is an ASCII format that is represented in a tool- and language-independent way and includes path delays, timing constraint values, interconnect delays and high-level technology parameters.
It usually has two sections: one for interconnect delays and the other for cell delays.
SDF format can be used for back-annotation as well as forward-annotation.
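An illustrative fragment, written out from Python for convenience, showing the two kinds of sections (a cell delay via IOPATH and an interconnect delay) with min:typ:max triples; the design, cell, and instance names and the delay values are hypothetical, not taken from a real timing run.

    sdf_example = """(DELAYFILE
      (SDFVERSION "3.0")
      (DESIGN "top")
      (TIMESCALE 1ns)
      (CELL (CELLTYPE "AND2") (INSTANCE u1)
        (DELAY (ABSOLUTE
          (IOPATH a y (0.12:0.15:0.21) (0.10:0.14:0.19)))))
      (CELL (CELLTYPE "top") (INSTANCE)
        (DELAY (ABSOLUTE
          (INTERCONNECT u1.y u2.a (0.05:0.06:0.08))))))
    """
    with open("example.sdf", "w") as f:
        f.write(sdf_example)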
See also
VITAL
EDIF
References
IEC 61523-3:2004
External links
https://web.archive.org/web/20130524215112/http://www.eda.org/sdf/
Standard Delay Format Specification in PDF format, version 3.0 (1995)
EDA file formats
IEEE DASC standards
IEC standards | Standard Delay Format | [
"Technology"
] | 241 | [
"Computer standards",
"IEC standards"
] |
342,899 | https://en.wikipedia.org/wiki/Image%20%28category%20theory%29 | In category theory, a branch of mathematics, the image of a morphism is a generalization of the image of a function.
General definition
Given a category C and a morphism f : X → Y in C, the image
of f is a monomorphism m : I → Y satisfying the following universal property:
There exists a morphism e : X → I such that f = m ∘ e.
For any object I′ with a morphism e′ : X → I′ and a monomorphism m′ : I′ → Y such that f = m′ ∘ e′, there exists a unique morphism v : I → I′ such that m = m′ ∘ v.
Remarks:
such a factorization does not necessarily exist.
e is unique because m is monic.
m′ ∘ v ∘ e = m ∘ e = f = m′ ∘ e′, therefore v ∘ e = e′ because m′ is monic.
v is monic.
m = m′ ∘ v already implies that v is unique.
The image of f is often denoted by im f or im(f).
Proposition: If C has all equalizers, then the e in the factorization f = m ∘ e of (1) is an epimorphism.
Second definition
In a category with all finite limits and colimits, the image of a morphism f : X → Y is defined as the equalizer of the so-called cokernel pair (i_1, i_2 : Y → Y ⊔_X Y), which is the cocartesian square (pushout) of f with itself over its domain. This results in a pair of morphisms i_1, i_2 : Y → Y ⊔_X Y, on which the equalizer is taken, i.e. the first of the following diagrams is cocartesian, and the second equalizing.
Remarks:
Finite bicompleteness of the category ensures that pushouts and equalizers exist.
The image can be called the regular image, as m is a regular monomorphism, i.e. the equalizer of a pair of morphisms. (Recall also that an equalizer is automatically a monomorphism).
In an abelian category, the cokernel pair property can be written as (i_1 - i_2) ∘ f = 0 and the equalizer condition as (i_1 - i_2) ∘ m = 0. Moreover, all monomorphisms are regular.
Examples
In the category of sets the image of a morphism f : X → Y is the inclusion from the ordinary image f(X) = {f(x) : x ∈ X} to Y. In many concrete categories such as groups, abelian groups and (left- or right) modules, the image of a morphism is the image of the corresponding morphism in the category of sets.
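A small sketch in Python of this factorization for finite sets; the sets and the function are made up for illustration, and the factorization f = m ∘ e is checked at the end.

    X = {1, 2, 3, 4}
    Y = {"a", "b", "c"}
    f = {1: "a", 2: "a", 3: "b", 4: "b"}      # a function f : X -> Y

    image = set(f.values())                    # the ordinary image f(X), a subset of Y
    e = dict(f)                                # the corestriction e : X -> f(X)
    m = {z: z for z in image}                  # the inclusion m : f(X) -> Y (a monomorphism)
    assert all(m[e[x]] == f[x] for x in X)     # f = m o e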
In any normal category with a zero object and kernels and cokernels for every morphism, the image of a morphism can be expressed as follows:
im f = ker coker f
In an abelian category (which is in particular binormal), if f is a monomorphism then f = ker coker f, and so f = im f.
Essential Image
A related notion to image is essential image.
A subcategory C of a (strict) category D is said to be replete if for every object x in C, and for every isomorphism ι : x → y in D, both ι and y belong to C.
Given a functor F : A → B between categories, the essential image of F is the smallest replete subcategory of the target category B containing the image of A under F.
See also
Subobject
Coimage
Image (mathematics)
References
Category theory | Image (category theory) | [
"Mathematics"
] | 570 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
342,977 | https://en.wikipedia.org/wiki/Business%20process | A business process, business method, or business function is a collection of related, structured activities or tasks performed by people or equipment in which a specific sequence produces a service or product (that serves a particular business goal) for a particular customer or customers. Business processes occur at all organizational levels and may or may not be visible to the customers. A business process may often be visualized (modeled) as a flowchart of a sequence of activities with interleaving decision points or as a process matrix of a sequence of activities with relevance rules based on data in the process. The benefits of using business processes include improved customer satisfaction and improved agility for reacting to rapid market change. Process-oriented organizations break down the barriers of structural departments and try to avoid functional silos.
Overview
A business process begins with a mission objective (an external event) and ends with achievement of the business objective of providing a result that provides customer value. Additionally, a process may be divided into subprocesses (process decomposition), the particular inner functions of the process. Business processes may also have a process owner, a responsible party for ensuring the process runs smoothly from start to finish.
Broadly speaking, business processes can be organized into three types, according to von Rosing et al.:
Operational processes, which constitute the core business and create the primary value stream, e.g., taking orders from customers, opening an account, and manufacturing a component
Management processes, the processes that oversee operational processes, including corporate governance, budgetary oversight, and employee oversight
Supporting processes, which support the core operational processes, e.g., accounting, recruitment, call center, technical support, and safety training
Other classifications of processes have also been proposed:
Strategic processes, which are managerial, directive or steering processes. Management has an important role in each of these. This type of process is related to strategic planning, partnerships, etc.
Operational processes, which are business processes, are of a productive or "missional" nature. These processes generate a product or service to be delivered to customers. These are considered to be unique or specific to each organisation.
Support processes, which are auxiliary in nature, support for operational and strategic processes. These are responsible for providing resources and are presented in most organizations.
A business made up of many processes may be decomposed into various subprocesses, each of which has its own peculiar aspects but also contributes to achieving the objectives of the business. A business review analyzes processes, usually including the mapping or modeling of processes and sub-processes down to a group of activities at different levels. Processes can be modeled using a large number of methods and techniques. For instance, the Business Process Modeling Notation is a business process modeling technique that can be used for drawing business processes in a visualized workflow. Decomposing processes into classifications and categories can be helpful, but care must be taken in doing so as there may be crossover. Ultimately, all processes are part of a largely unified customer-focused result, one of "customer value creation." This goal is expedited with business process management, which aims to analyze, improve, and enact business processes.
History
Adam Smith
An important early (1776) description of processes was that of economist Adam Smith in his famous example of a pin factory. Inspired by an article in Diderot's Encyclopédie, Smith described the production of a pin in the following way:
One man draws out the wire; another straights it; a third cuts it; a fourth points it; a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on is a peculiar business; to whiten the pins is another ... and the important business of making a pin is, in this manner, divided into about eighteen distinct operations, which, in some manufactories, are all performed by distinct hands, though in others the same man will sometimes perform two or three of them.
Smith also first recognized how output could be increased through the use of labor division. Previously, in a society where production was dominated by handcrafted goods, one man would perform all the activities required during the production process, while Smith described how the work was divided into a set of simple tasks which would be performed by specialized workers. Labor division in Smith's example resulted in productivity increasing by 24,000 percent, i.e. the same number of workers made 240 times as many pins as they had been producing before the introduction of labor division.
Smith did not advocate labor division at any price or per se. The appropriate level of task division was defined through experimental design of the production process. In contrast to Smith's view which was limited to the same functional domain and comprised activities that are in direct sequence in the manufacturing process, today's process concept includes cross-functionality as an important characteristic. Following his ideas, the division of labor was adopted widely, while the integration of tasks into a functional, or cross-functional, process was not considered as an alternative option until much later.
Frederick Winslow Taylor
American engineer Frederick Winslow Taylor greatly influenced and improved the quality of industrial processes in the early twentieth century. His Principles of Scientific Management focused on standardization of processes, systematic training and clearly defining the roles of management and employees. His methods were widely adopted in the United States, Russia and parts of Europe and led to further developments such as "time and motion study" and visual task optimization techniques, such as Gantt charts.
Peter Drucker
In the latter part of the twentieth century, management guru Peter Drucker focused much of his work on the simplification and decentralization of processes, which led to the concept of outsourcing. He also coined the concept of the "knowledge worker," as differentiated from manual workers – and how knowledge management would become part of an entity's processes.
Other definitions
Davenport (1993) defines a (business) process as:
a structured, measured set of activities designed to produce a specific output for a particular customer or market. It implies a strong emphasis on how work is done within an organization, in contrast to a product focus's emphasis on what. A process is thus a specific ordering of work activities across time and space, with a beginning and an end, and clearly defined inputs and outputs: a structure for action. ... Taking a process approach implies adopting the customer's point of view. Processes are the structure by which an organization does what is necessary to produce value for its customers.
This definition contains certain characteristics that a process must possess. These characteristics are achieved by focusing on the business logic of the process (how work is done) instead of taking a product perspective (what is done). Following Davenport's definition of a process, we can conclude that a process must have clearly defined boundaries, input and output, consist of smaller parts and activities which are ordered in time and space, that there must be a receiver of the process outcome—a customer – and that the transformation taking place within the process must add customer value.
Hammer & Champy's (1993) definition can be considered as a subset of Davenport's. They define a process as:
a collection of activities that takes one or more kinds of input and creates an output that is of value to the customer.
As we can note, Hammer & Champy have a more transformation-oriented perception and put less emphasis on the structural component – process boundaries and the order of activities in time and space.
Rummler & Brache (1995) use a definition that clearly encompasses a focus on the organization's external customers, when stating that
a business process is a series of steps designed to produce a product or service. Most processes (...) are cross-functional, spanning the 'white space' between the boxes on the organization chart. Some processes result in a product or service that is received by an organization's external customer. We call these primary processes. Other processes produce products that are invisible to the external customer but essential to the effective management of the business. We call these support processes.
The above definition distinguishes two types of processes, primary and support processes, depending on whether a process is directly involved in the creation of customer value or concerned with the organization's internal activities. In this sense, Rummler and Brache's definition follows Porter's value chain model, which also builds on a division of primary and secondary activities. According to Rummler and Brache, a typical characteristic of a successful process-based organization is the absence of secondary activities in the primary value flow that is created in the customer oriented primary processes. The characteristic of processes as spanning the white space on the organization chart indicates that processes are embedded in some form of organizational structure. Also, a process can be cross-functional, i.e. it ranges over several business functions.
Johansson et al. (1993). define a process as:
a set of linked activities that take an input and transform it to create an output. Ideally, the transformation that occurs in the process should add value to the input and create an output that is more useful and effective to the recipient either upstream or downstream.
This definition also emphasizes the constitution of links between activities and the transformation that takes place within the process. Johansson et al. also include the upstream part of the value chain as a possible recipient of the process output. Summarizing the four definitions above, we can compile the following list of characteristics for a business process:
Definability: It must have clearly defined boundaries, input and output.
Order: It must consist of activities that are ordered according to their position in time and space (a sequence).
Customer: There must be a recipient of the process' outcome, a customer.
Value-adding: The transformation taking place within the process must add value to the recipient, either upstream or downstream.
Embeddedness: A process cannot exist in itself, it must be embedded in an organizational structure.
Cross-functionality: A process regularly can, but not necessarily must, span several functions.
Frequently, identifying a process owner (i.e., the person responsible for the continuous improvement of the process) is considered as a prerequisite. Sometimes the process owner is the same person who is performing the process.
Related concepts
Workflow
Workflow is the procedural movement of information, material, and tasks from one participant to another. Workflow includes the procedures, people and tools involved in each step of a business process. A single workflow may either be sequential, with each step contingent upon completion of the previous one, or parallel, with multiple steps occurring simultaneously. Multiple combinations of single workflows may be connected to achieve a resulting overall process.
Business process re-engineering
Business process re-engineering (BPR) was originally conceptualized by Hammer and Davenport as a means to improve organizational effectiveness and productivity. It can involve starting from a "blank slate" and completely recreating major business processes, or it can involve comparing the "as-is" process and the "to-be" process and mapping the path for change from one to the other. Often BPR will involve the use of information technology to secure significant performance improvement. The term unfortunately became associated with corporate "downsizing" in the mid-1990s.
Business process management (BPM)
Though the term has been used contextually to mixed effect, "business process management" (BPM) can generally be defined as a discipline involving a combination of a wide variety of business activity flows (e.g., business process automation, modeling, and optimization) that strives to support the goals of an enterprise within and beyond multiple boundaries, involving many people, from employees to customers and external partners. A major part of BPM's enterprise support involves the continuous evaluation of existing processes and the identification of ways to improve upon it, resulting in a cycle of overall organizational improvement.
Knowledge management
Knowledge management is the practice of defining the knowledge that employees and systems use to perform their functions and of maintaining it in a format that can be accessed by others. Duhon and the Gartner Group have defined it as "a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise's information assets. These assets may include databases, documents, policies, procedures, and previously un-captured expertise and experience in individual workers."
Customer Service
Customer service is a key component of an effective business plan. Customer service in the 21st century is always evolving, and it is important for a business to grow with its customer base. A social media presence matters, as do clear communication, clear expectation setting, speed, and accuracy. If the customer service provided by a business is not effective, it can be detrimental to the business's success.
Total quality management
Total quality management (TQM) emerged in the early 1980s as organizations sought to improve the quality of their products and services. It was followed by the Six Sigma methodology in the mid-1980s, first introduced by Motorola. Six Sigma consists of statistical methods to improve business processes and thus reduce defects in outputs. The "lean approach" to quality management was introduced by the Toyota Motor Company in the 1990s and focused on customer needs and reducing of wastage.
Creating a Strong Brand Presence through Social Media
Creating a strong brand presence through social media is an important component to running a successful business. Companies can market, gain consumer insights, and advertise through social media. "According to a Salesforce survey, 85% of consumers conduct research before they make a purchase online, and among the most used channels for research are websites (74%) and social media (38%). Consequently, businesses need to have an effective online strategy to increase brand awareness and grow." (Paun, 2020)
Customers engage and interact through social media and businesses who are effectively part of social media drive more successful businesses. The most common social media sites that are used for business are Facebook, Instagram, and Twitter. Businesses with the strongest brand recognition and consumer engagement build social presences on all these platforms.
Resources:
Paun, Goran (2020). Building A Brand: Why A Strong Digital Presence Matters. Forbes.
Information technology as an enabler for business process management
Advances in information technology over the years have changed business processes within and between business enterprises. In the 1960s, operating systems had limited functionality, and any workflow management systems that were in use were tailor-made for the specific organization. The 1970s and 1980s saw the development of data-driven approaches as data storage and retrieval technologies improved. Data modeling, rather than process modeling was the starting point for building an information system. Business processes had to adapt to information technology because process modeling was neglected. The shift towards process-oriented management occurred in the 1990s. Enterprise resource planning software with workflow management components such as SAP, Baan, PeopleSoft, Oracle and JD Edwards emerged, as did business process management systems (BPMS) later.
The world of e-business created a need to automate business processes across organizations, which in turn raised the need for standardized protocols and web services composition languages that can be understood across the industry. The Business Process Modeling Notation (BPMN) and Business Motivation Model (BMM) are widely used standards for business modeling. The Business Modeling and Integration Domain Task Force (BMI DTF) is a consortium of vendors and user companies that continues to work together to develop standards and specifications to promote collaboration and integration of people, systems, processes and information within and across enterprises.
The most recent trends in BPM are influenced by the emergence of cloud technology, the prevalence of social media and mobile technology, and the development of analytical techniques. Cloud-based technologies allow companies to purchase resources quickly and as required, independent of their location. Social media, websites and smart phones are the newest channels through which organizations reach and support their customers. The abundance of customer data collected through these channels as well as through call center interactions, emails, voice calls, and customer surveys has led to a huge growth in data analytics which in turn is utilized for performance management and improving the ways in which the company services its customers.
Importance of the process chain
Business processes comprise a set of sequential sub-processes or tasks with alternative paths, depending on certain conditions as applicable, performed to achieve a given objective or produce given outputs. Each process has one or more needed inputs. The inputs and outputs may be received from, or sent to other business processes, other organizational units, or internal or external stakeholders.
Business processes are designed to be operated by one or more business functional units, and emphasize the importance of the "process chain" rather than the individual units.
In general, the various tasks of a business process can be performed in one of two ways:
manually
by means of business data processing systems such as ERP systems
Typically, some process tasks will be manual, while some will be computer-based, and these tasks may be sequenced in many ways. In other words, the data and information that are being handled through the process may pass through manual or computer tasks in any given order.
Policies, processes and procedures
The above improvement areas are equally applicable to policies, processes, detailed procedures (sub-processes/tasks) and work instructions. There is a cascading effect of improvements made at a higher level on those made at a lower level.
For example, if a recommendation to replace a given policy with a better one is made with proper justification and accepted in principle by business process owners, then corresponding changes in the consequent processes and procedures will follow naturally in order to enable implementation of the policies.
Reporting as an essential base for execution
Business processes must include up-to-date and accurate reports to ensure effective action. An example of this is the availability of purchase order status reports for supplier delivery follow-up as described in the section on effectiveness above. There are numerous examples of this in every possible business process.
Another example from production is the process of analyzing line rejections occurring on the shop floor. This process should include systematic periodical analysis of rejections by reason and present the results in a suitable information report that pinpoints the major reasons and trends in these reasons for management to take corrective actions to control rejections and keep them within acceptable limits. Such a process of analysis and summarisation of line rejection events is clearly superior to a process which merely inquires into each individual rejection as it occurs.
Business process owners and operatives should realise that process improvement often occurs with introduction of appropriate transaction, operational, highlight, exception or M.I.S. reports, provided these are consciously used for day-to-day or periodical decision-making. With this understanding would hopefully come the willingness to invest time and other resources in business process improvement by introduction of useful and relevant reporting systems.
Supporting theories and concepts
Span of control
The span of control is the number of subordinates a supervisor manages within a structural organization. Introducing a business process concept has a considerable impact on the structural elements of the organization and, thus also on the span of control.
Large organizations that are not organized as markets need to be organized in smaller units, or departments – which can be defined according to different principles.
Information management concepts
Information management and the organization's infrastructure strategies related to it, are a theoretical cornerstone of the business process concept, requiring "a framework for measuring the level of IT support for business processes."
See also
Business functions
Business method patent
Business process automation
Business Process Definition Metamodel
Business process mapping
Business process outsourcing
References
Further reading
Paul Harmon (2007). Business Process Change: 2nd Ed, A Guide for Business Managers and BPM and Six Sigma Professionals. Morgan Kaufmann
E. Obeng and S. Crainer (1993). Making Re-engineering Happen. Financial Times Prentice Hall
Howard Smith and Peter Fingar (2003). Business Process Management. The Third Wave, MK Press
Slack et al., edited by David Barnes (2000). Understanding Business: Processes. The Open University
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons.
External links
Enterprise modelling
Management cybernetics | Business process | [
"Engineering"
] | 4,097 | [
"Systems engineering",
"Enterprise modelling"
] |
343,031 | https://en.wikipedia.org/wiki/Friedmann%E2%80%93Lema%C3%AEtre%E2%80%93Robertson%E2%80%93Walker%20metric | The Friedmann–Lemaître–Robertson–Walker metric (FLRW; ) is a metric that describes a homogeneous, isotropic, expanding (or otherwise, contracting) universe that is path-connected, but not necessarily simply connected. The general form of the metric follows from the geometric properties of homogeneity and isotropy; Einstein's field equations are only needed to derive the scale factor of the universe as a function of time. Depending on geographical or historical preferences, the set of the four scientists – Alexander Friedmann, Georges Lemaître, Howard P. Robertson and Arthur Geoffrey Walker – are variously grouped as Friedmann, Friedmann–Robertson–Walker (FRW), Robertson–Walker (RW), or Friedmann–Lemaître (FL). This model is sometimes called the Standard Model of modern cosmology, although such a description is also associated with the further developed Lambda-CDM model. The FLRW model was developed independently by the named authors in the 1920s and 1930s.
Concept
The metric is a consequence of assuming that the mass in the universe has constant density – homogeneity – and is the same in all directions – isotropy. Assuming isotropy alone is sufficient to reduce the possible motions of mass in the universe to radial velocity variations. The Copernican principle, that our observation point in the universe is equivalent to every other point, combined with isotropy ensures homogeneity. Direct observation of stars has shown their velocities to be dominated by radial recession, validating these assumptions for cosmological models.
To measure distances in this space, that is to define a metric, we can compare the positions of two points in space moving along with their local radial velocity of mass. Such points can be thought of as ideal galaxies. Each galaxy can be given a clock to track local time, with the clocks synchronized by imagining the radial velocities run backwards until the clocks coincide in space. The equivalence principle applied to each galaxy means distance measurements can be made using special relativity locally. So a distance can be related to the local time and the coordinates:
An isotropic, homogeneous mass distribution is highly symmetric. Rewriting the metric in spherical coordinates reduces four coordinates to three coordinates. The radial coordinate is written as a product of a comoving coordinate, , and a time dependent scale factor . The resulting metric can be written in several forms. Two common ones are:
or
where is the angle between the two locations and
(The meaning of in these equations is not the same). Other common variations use a dimensionless scale factor
where time zero is now.
FLRW models
Relativistic cosmology models based on the FLRW metric and obeying the Friedmann equations are called FRW models.
These models are the basis of the standard Big Bang cosmological model including the current ΛCDM model.
To apply the metric to cosmology and predict its time evolution via the scale factor requires Einstein's field equations together with a way of calculating the density, such as a cosmological equation of state.
This process allows an approximate analytic solution of Einstein's field equations, giving the Friedmann equations when the energy–momentum tensor is similarly assumed to be isotropic and homogeneous. The resulting equations are:
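In the usual textbook convention, with the cosmological constant Λ included, these take the form:

```latex
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}.
```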
Because the FLRW model assumes homogeneity, some popular accounts mistakenly assert that the Big Bang model cannot account for the observed lumpiness of the universe. In a strictly FLRW model, there are no clusters of galaxies or stars, since these are objects much denser than a typical part of the universe. Nonetheless, the FLRW model is used as a first approximation for the evolution of the real, lumpy universe because it is simple to calculate, and models that calculate the lumpiness in the universe are added onto the FLRW models as extensions. Most cosmologists agree that the observable universe is well approximated by an almost FLRW model, i.e., a model that follows the FLRW metric apart from primordial density fluctuations. The theoretical implications of the various extensions to the FLRW model appear to be well understood, and the goal is to make these consistent with observations from COBE and WMAP.
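As a concrete illustration of how the scale factor's time evolution follows from the Friedmann equations, the sketch below integrates the first Friedmann equation for a spatially flat, matter-only model and recovers the textbook a(t) ∝ t^(2/3) behaviour. The parameter values and units are illustrative assumptions, not fits to observational data.

```python
import numpy as np

# Euler integration of the first Friedmann equation for a flat, matter-only
# model: da/dt = H0 * sqrt(Omega_m / a).  H0 = 1 in arbitrary units.
H0, Omega_m = 1.0, 1.0
a, t, dt = 1e-4, 0.0, 1e-5
ts, scale = [], []
while a < 1.0:
    a += dt * H0 * np.sqrt(Omega_m / a)
    t += dt
    ts.append(t)
    scale.append(a)

ts, scale = np.array(ts), np.array(scale)
late = ts > 0.05                      # ignore the start-up transient
slope = np.polyfit(np.log(ts[late]), np.log(scale[late]), 1)[0]
print(f"fitted exponent ~ {slope:.3f} (analytic value 2/3)")
```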
Interpretation
The pair of equations given above is equivalent to the following pair of equations
with , the spatial curvature index, serving as a constant of integration for the first equation.
The first equation can be derived also from thermodynamical considerations and is equivalent to the first law of thermodynamics, assuming the expansion of the universe is an adiabatic process (which is implicitly assumed in the derivation of the Friedmann–Lemaître–Robertson–Walker metric).
The second equation states that both the energy density and the pressure cause the expansion rate of the universe to decrease, i.e., both cause a deceleration in the expansion of the universe. This is a consequence of gravitation, with pressure playing a similar role to that of energy (or mass) density, according to the principles of general relativity. The cosmological constant, on the other hand, causes an acceleration in the expansion of the universe.
Cosmological constant
The cosmological constant term can be omitted if we make the following replacements
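In the usual convention, the replacements amount to absorbing the cosmological constant into an effective density and pressure:

```latex
\rho \;\to\; \rho + \frac{\Lambda c^{2}}{8\pi G},
\qquad
p \;\to\; p - \frac{\Lambda c^{4}}{8\pi G}.
```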
Therefore, the cosmological constant can be interpreted as arising from a form of energy that has negative pressure, equal in magnitude to its (positive) energy density:
which is an equation of state of vacuum with dark energy.
An attempt to generalize this to
would not have general invariance without further modification.
In fact, in order to get a term that causes an acceleration of the universe expansion, it is enough to have a scalar field that satisfies
Such a field is sometimes called quintessence.
Newtonian interpretation
This is due to McCrea and Milne, although sometimes incorrectly ascribed to Friedmann. The Friedmann equations are equivalent to this pair of equations:
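In a standard presentation, consistent with the verbal descriptions that follow, the pair reads:

```latex
\frac{d}{dt}\left(\rho a^{3}\right) = -\frac{p}{c^{2}}\,\frac{d}{dt}\left(a^{3}\right),
\qquad
\frac{\dot a^{2}}{2} - \frac{4\pi G}{3}\,\rho\, a^{2} = -\frac{k c^{2}}{2}.
```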
The first equation says that the decrease in the mass contained in a fixed cube (whose side is momentarily a) is the amount that leaves through the sides due to the expansion of the universe plus the mass equivalent of the work done by pressure against the material being expelled. This is the conservation of mass–energy (first law of thermodynamics) contained within a part of the universe.
The second equation says that the kinetic energy (seen from the origin) of a particle of unit mass moving with the expansion plus its (negative) gravitational potential energy (relative to the mass contained in the sphere of matter closer to the origin) is equal to a constant related to the curvature of the universe. In other words, the energy (relative to the origin) of a co-moving particle in free-fall is conserved. General relativity merely adds a connection between the spatial curvature of the universe and the energy of such a particle: positive total energy implies negative curvature and negative total energy implies positive curvature.
The cosmological constant term is assumed to be treated as dark energy and thus merged into the density and pressure terms.
During the Planck epoch, one cannot neglect quantum effects. So they may cause a deviation from the Friedmann equations.
General metric
The FLRW metric assumes homogeneity and isotropy of space. It also assumes that the spatial component of the metric can be time-dependent. The generic metric that meets these conditions is
where ranges over a 3-dimensional space of uniform curvature, that is, elliptical space, Euclidean space, or hyperbolic space. It is normally written as a function of three spatial coordinates, but there are several conventions for doing so, detailed below. does not depend on t – all of the time dependence is in the function a(t), known as the "scale factor".
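A standard way of writing this generic line element is:

```latex
ds^{2} = -c^{2}\,dt^{2} + a(t)^{2}\,d\mathbf{\Sigma}^{2}
```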
Reduced-circumference polar coordinates
In reduced-circumference polar coordinates the spatial metric has the form
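A standard way of writing this spatial metric is:

```latex
d\mathbf{\Sigma}^{2} = \frac{dr^{2}}{1 - k r^{2}} + r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\varphi^{2}\right)
```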
k is a constant representing the curvature of the space. There are two common unit conventions:
k may be taken to have units of length^−2, in which case r has units of length and a(t) is unitless. k is then the Gaussian curvature of the space at the time when . r is sometimes called the reduced circumference because it is equal to the measured circumference of a circle (at that value of r), centered at the origin, divided by 2π (like the r of Schwarzschild coordinates). Where appropriate, a(t) is often chosen to equal 1 in the present cosmological era, so that measures comoving distance.
Alternatively, k may be taken to belong to the set {−1, 0, +1} (for negative, zero, and positive curvature respectively). Then r is unitless and a(t) has units of length. When k = ±1, a(t) is the radius of curvature of the space, and may also be written R(t).
A disadvantage of reduced circumference coordinates is that they cover only half of the 3-sphere in the case of positive curvature—circumferences beyond that point begin to decrease, leading to degeneracy. (This is not a problem if space is elliptical, i.e. a 3-sphere with opposite points identified.)
Hyperspherical coordinates
In hyperspherical or curvature-normalized coordinates the coordinate r is proportional to radial distance; this gives
where is as before and
As before, there are two common unit conventions:
k may be taken to have units of length^−2, in which case r has units of length and a(t) is unitless. k is then the Gaussian curvature of the space at the time when . Where appropriate, a(t) is often chosen to equal 1 in the present cosmological era, so that measures comoving distance.
Alternatively, as before, k may be taken to belong to the set {−1, 0, +1} (for negative, zero, and positive curvature respectively). Then r is unitless and a(t) has units of length. When k = ±1, a(t) is the radius of curvature of the space, and may also be written R(t). Note that when k = +1, r is essentially a third angle along with θ and φ. The letter χ may be used instead of r.
Though it is usually defined piecewise as above, S is an analytic function of both k and r. It can also be written as a power series
or as
where sinc is the unnormalized sinc function and is one of the imaginary, zero or real square roots of k. These definitions are valid for all k.
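The relation described here, S_k(r) = r·sinc(r√k) = sin(r√k)/√k with a possibly imaginary square root of k, is easy to check numerically. A minimal illustrative sketch:

```python
import numpy as np

def S(k, r):
    """S_k(r) evaluated uniformly in k as sin(sqrt(k) r) / sqrt(k), using a
    complex square root when k < 0; equivalently r * sinc(r sqrt(k)) with
    the unnormalised sinc.  Returns a real value for real k and r."""
    if k == 0:
        return r
    sk = np.sqrt(complex(k))            # imaginary when k < 0
    return (np.sin(sk * r) / sk).real   # imaginary part vanishes (up to rounding)

# Agreement with the piecewise definitions for each sign of the curvature:
print(np.isclose(S( 1.0, 0.5), np.sin(0.5)))    # closed (k > 0)
print(np.isclose(S( 0.0, 0.5), 0.5))            # flat (k = 0)
print(np.isclose(S(-1.0, 0.5), np.sinh(0.5)))   # open (k < 0)
```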
Cartesian coordinates
When k = 0 one may write simply
This can be extended to by defining
, and
where r is one of the radial coordinates defined above, but this is rare.
Curvature
Cartesian coordinates
In flat FLRW space using Cartesian coordinates, the surviving components of the Ricci tensor are
and the Ricci scalar is
Spherical coordinates
In more general FLRW space using spherical coordinates (called "reduced-circumference polar coordinates" above), the surviving components of the Ricci tensor are
and the Ricci scalar is
Name and history
The Soviet mathematician Alexander Friedmann first derived the main results of the FLRW model in 1922 and 1924. Although the prestigious physics journal Zeitschrift für Physik published his work, it remained relatively unnoticed by his contemporaries. Friedmann was in direct communication with Albert Einstein, who, on behalf of Zeitschrift für Physik, acted as the scientific referee of Friedmann's work. Eventually Einstein acknowledged the correctness of Friedmann's calculations, but failed to appreciate the physical significance of Friedmann's predictions.
Friedmann died in 1925. In 1927, Georges Lemaître, a Belgian priest, astronomer and periodic professor of physics at the Catholic University of Leuven, arrived independently at results similar to those of Friedmann and published them in the Annales de la Société Scientifique de Bruxelles (Annals of the Scientific Society of Brussels). In the face of the observational evidence for the expansion of the universe obtained by Edwin Hubble in the late 1920s, Lemaître's results were noticed in particular by Arthur Eddington, and in 1930–31 Lemaître's paper was translated into English and published in the Monthly Notices of the Royal Astronomical Society.
Howard P. Robertson from the US and Arthur Geoffrey Walker from the UK explored the problem further during the 1930s. In 1935 Robertson and Walker rigorously proved that the FLRW metric is the only one on a spacetime that is spatially homogeneous and isotropic (as noted above, this is a geometric result and is not tied specifically to the equations of general relativity, which were always assumed by Friedmann and Lemaître).
This solution, often called the Robertson–Walker metric since they proved its generic properties, is different from the dynamical "Friedmann–Lemaître" models, which are specific solutions for a(t) that assume that the only contributions to stress–energy are cold matter ("dust"), radiation, and a cosmological constant.
Einstein's radius of the universe
Einstein's radius of the universe is the radius of curvature of space of Einstein's universe, a long-abandoned static model that was supposed to represent our universe in idealized form. Putting
in the Friedmann equation, the radius of curvature of space of this universe (Einstein's radius) is
where is the speed of light, is the Newtonian constant of gravitation, and is the density of space of this universe. The numerical value of Einstein's radius is of the order of 10^10 light years, or 10 billion light years.
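A quick numeric check of that order of magnitude, using the standard expression R_E = c/√(4πGρ) for Einstein's static universe; the density below is an assumed, illustrative value of roughly the order of the present-day critical density, not a figure quoted in the article:

```python
import math

# Order-of-magnitude check using R_E = c / sqrt(4 * pi * G * rho).
c   = 2.998e8        # m/s
G   = 6.674e-11      # m^3 kg^-1 s^-2
rho = 1.0e-26        # kg/m^3 (assumed)

R_E = c / math.sqrt(4 * math.pi * G * rho)
light_year = 9.461e15  # m
print(f"R_E ~ {R_E / light_year:.1e} light years")   # ~1e10 light years
```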
Current status
The current standard model of cosmology, the Lambda-CDM model, uses the FLRW metric. By combining the observation data from some experiments such as WMAP and Planck with theoretical results of the Ehlers–Geren–Sachs theorem and its generalization, astrophysicists now agree that the early universe is almost homogeneous and isotropic (when averaged over a very large scale) and thus nearly a FLRW spacetime. That being said, attempts to confirm the purely kinematic interpretation of the Cosmic Microwave Background (CMB) dipole through studies of radio galaxies and quasars show disagreement in the magnitude. Taken at face value, these observations are at odds with the Universe being described by the FLRW metric. Moreover, one can argue that there is a maximum value to the Hubble constant within an FLRW cosmology tolerated by current observations, and depending on how local determinations converge, this may point to a breakdown of the FLRW metric in the late universe, necessitating an explanation beyond the FLRW metric.
References
Further reading
(See Chapter 23 for a particularly clear and concise introduction to the FLRW models.)
Coordinate charts in general relativity
Exact solutions in general relativity
Physical cosmology
Metric tensors | Friedmann–Lemaître–Robertson–Walker metric | [
"Physics",
"Astronomy",
"Mathematics",
"Engineering"
] | 3,075 | [
"Exact solutions in general relativity",
"Tensors",
"Theoretical physics",
"Mathematical objects",
"Astrophysics",
"Equations",
"Metric tensors",
"Physical cosmology",
"Coordinate systems",
"Coordinate charts in general relativity",
"Astronomical sub-disciplines"
] |
343,085 | https://en.wikipedia.org/wiki/Cousin%20prime | In number theory, cousin primes are prime numbers that differ by four. Compare this with twin primes, pairs of prime numbers that differ by two, and sexy primes, pairs of prime numbers that differ by six.
The cousin primes (sequences and in OEIS) below 1000 are:
(3, 7), (7, 11), (13, 17), (19, 23), (37, 41), (43, 47), (67, 71), (79, 83), (97, 101), (103, 107), (109, 113), (127, 131), (163, 167), (193, 197), (223, 227), (229, 233), (277, 281), (307, 311), (313, 317), (349, 353), (379, 383), (397, 401), (439, 443), (457, 461), (463,467), (487, 491), (499, 503), (613, 617), (643, 647), (673, 677), (739, 743), (757, 761), (769, 773), (823, 827), (853, 857), (859, 863), (877, 881), (883, 887), (907, 911), (937, 941), (967, 971)
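A short sketch that regenerates this list; the sieve and pairing code below is illustrative and gap-parametric, so the same function also yields twin primes (gap 2) or sexy primes (gap 6):

```python
def primes_below(n):
    """Sieve of Eratosthenes: all primes strictly below n."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def prime_pairs(limit, gap):
    """Pairs of primes (p, p + gap) with both members below limit."""
    ps = set(primes_below(limit))
    return [(p, p + gap) for p in sorted(ps) if p + gap in ps]

cousins = prime_pairs(1000, 4)   # gap=2 gives twin primes, gap=6 sexy primes
print(len(cousins), cousins[:5])
# 41 [(3, 7), (7, 11), (13, 17), (19, 23), (37, 41)]
```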
Properties
The only prime belonging to two pairs of cousin primes is 7. One of the numbers n, n + 4, n + 8 will always be divisible by 3, so n = 3 is the only case where all three are primes.
An example of a large proven cousin prime pair is for
which has 20008 digits. In fact, this is part of a prime triple since is also a twin prime (because is also a proven prime).
The largest-known pair of cousin primes was found by S. Batalov and has 86,138 digits. The primes are:
If the first Hardy–Littlewood conjecture holds, then cousin primes have the same asymptotic density as twin primes. An analogue of Brun's constant for twin primes can be defined for cousin primes, called Brun's constant for cousin primes, with the initial term (3, 7) omitted, by the convergent sum:
Using cousin primes up to 2^42, the value of was estimated by Marek Wolf in 1996 as
This constant should not be confused with Brun's constant for prime quadruplets, which is also denoted .
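A minimal sketch of a partial sum of the series described above, assuming the SymPy library is available; the truncation limit is arbitrary and the printed value is only a slowly converging approximation, not the constant itself:

```python
from sympy import isprime, primerange

def brun_cousin_partial(limit):
    """Partial sum of 1/p + 1/(p + 4) over cousin prime pairs (p, p + 4)
    with p >= 7 (the initial pair (3, 7) omitted), for all p below limit."""
    return sum(1.0 / p + 1.0 / (p + 4)
               for p in primerange(7, limit) if isprime(p + 4))

# The series converges very slowly; the truncation limit is arbitrary.
print(brun_cousin_partial(10**6))
```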
The Skewes number for cousin primes is 5206837.
Notes
References
Classes of prime numbers
Unsolved problems in mathematics | Cousin prime | [
"Mathematics"
] | 608 | [
"Unsolved problems in mathematics",
"Mathematical problems"
] |
343,116 | https://en.wikipedia.org/wiki/Sexy%20prime | In number theory, sexy primes are prime numbers that differ from each other by . For example, the numbers and are a pair of sexy primes, because both are prime and .
The term "sexy prime" is a pun stemming from the Latin word for six: .
If p + 2 or p + 4 (where p is the lower prime) is also prime, then the sexy prime is part of a prime triplet. In August 2014, the Polymath group, seeking the proof of the twin prime conjecture, showed that if the generalized Elliott–Halberstam conjecture is proven, one can show the existence of infinitely many pairs of consecutive primes that differ by at most 6 and as such they are either twin, cousin or sexy primes.
The sexy primes (sequences and in OEIS) below 500 are:
(5,11), (7,13), (11,17), (13,19), (17,23), (23,29), (31,37), (37,43), (41,47), (47,53), (53,59), (61,67), (67,73), (73,79), (83,89), (97,103), (101,107), (103,109), (107,113), (131,137), (151,157), (157,163), (167,173), (173,179), (191,197), (193,199), (223,229), (227,233), (233,239), (251,257), (257,263), (263,269), (271,277), (277,283), (307,313), (311,317), (331,337), (347,353), (353,359), (367,373), (373,379), (383,389), (433,439), (443,449), (457,463), (461,467).
References
External links
Classes of prime numbers
Unsolved problems in number theory | Sexy prime | [
"Mathematics"
] | 471 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Unsolved problems in number theory",
"Number theory"
] |
343,120 | https://en.wikipedia.org/wiki/Hand%20tool | A hand tool is any tool that is powered by hand rather than a motor. Categories of hand tools include wrenches, pliers, cutters, files, striking tools, struck or hammered tools, screwdrivers, vises, clamps, snips, hacksaws, drills, and knives.
Outdoor tools such as garden forks, pruning shears, and rakes are additional forms of hand tools. Portable power tools are not hand tools.
History
Hand tools have been used by humans since the Stone Age, when stone tools were used for hammering and cutting. During the Bronze Age, tools were made by casting alloys of copper and tin. Bronze tools were sharper and harder than those made of stone. During the Iron Age iron replaced bronze, and tools became even stronger and more durable. The Romans developed tools during this period which are similar to those being produced today. After the Industrial Revolution, most tools were made in factories rather than by craftspeople.
A large collection of British hand tools dating from 1700 to 1950 is held by St Albans Museum. Most of the tools were collected by Raphael Salaman (1906–1993), who wrote two classic works on the subject: Dictionary of Woodworking Tools and Dictionary of Leather-working Tools. David Russell's vast collection of Western hand tools from the Stone Age to the twentieth century led to the publication of his book Antique Woodworking Tools.
General categories
The American Industrial Hygiene Association gives the following categories of hand tools: wrenches, pliers, cutters, striking tools, struck or hammered tools, screwdrivers, vises, clamps, snips, saws, drills and knives.
See also
Antique tool
:Category:Hand tools
Cutting tool
Garden tool
List of timber framing tools
List of tool-lending libraries
Manual labour
Surgical instrument
References | Hand tool | [
"Engineering"
] | 368 | [
"Human–machine interaction",
"Hand tools"
] |
343,156 | https://en.wikipedia.org/wiki/Schnirelmann%20density | In additive number theory, the Schnirelmann density of a sequence of numbers is a way to measure how "dense" the sequence is. It is named after Russian mathematician Lev Schnirelmann, who was the first to study it.
Definition
The Schnirelmann density of a set of natural numbers A is defined as
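Written out, with A(n) and inf as described just below, the standard definition is:

```latex
\sigma A = \inf_{n \ge 1} \frac{A(n)}{n}
```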
where A(n) denotes the number of elements of A not exceeding n and inf is infimum.
The Schnirelmann density is well-defined even if the limit of A(n)/n as fails to exist (see upper and lower asymptotic density).
Properties
By definition, 0 ≤ A(n) ≤ n and nσA ≤ A(n) for all n, and therefore 0 ≤ σA ≤ 1, and σA = 1 if and only if A = ℕ. Furthermore,
Sensitivity
The Schnirelmann density is sensitive to the first values of a set:
.
In particular,
and
Consequently, the Schnirelmann densities of the even numbers and the odd numbers, which one might expect to agree, are 0 and 1/2 respectively. Schnirelmann and Yuri Linnik exploited this sensitivity.
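A small numerical sketch of this sensitivity, approximating the infimum by a minimum over n up to a finite bound N (which gives an upper bound on the true density); the sets and the bound are illustrative:

```python
def schnirelmann_density(A, N):
    """Approximate the Schnirelmann density of A by the minimum of A(n)/n
    over 1 <= n <= N; the true density is the infimum over all n, so this
    is an upper bound in general."""
    A = set(A)
    count, best = 0, 1.0
    for n in range(1, N + 1):
        if n in A:
            count += 1
        best = min(best, count / n)
    return best

N = 10_000
print(schnirelmann_density(range(2, N + 1, 2), N))               # 0.0  (even numbers)
print(schnirelmann_density(range(1, N + 1, 2), N))               # 0.5  (odd numbers)
print(schnirelmann_density((i * i for i in range(1, 101)), N))   # ~0.0099 (squares)
```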
Schnirelmann's theorems
If we set , then Lagrange's four-square theorem can be restated as . (Here the symbol denotes the sumset of and .) It is clear that . In fact, we still have , and one might ask at what point the sumset attains Schnirelmann density 1 and how does it increase. It actually is the case that and one sees that sumsetting once again yields a more populous set, namely all of . Schnirelmann further succeeded in developing these ideas into the following theorems, aiming towards Additive Number Theory, and proving them to be a novel resource (if not greatly powerful) to attack important problems, such as Waring's problem and Goldbach's conjecture.
Theorem. Let and be subsets of . Then
Note that . Inductively, we have the following generalization.
Corollary. Let be a finite family of subsets of . Then
The theorem provides the first insights on how sumsets accumulate. It seems unfortunate that its conclusion stops short of showing being superadditive. Yet, Schnirelmann provided us with the following results, which sufficed for most of his purpose.
Theorem. Let and be subsets of . If , then
Theorem. (Schnirelmann) Let . If then there exists such that
Additive bases
A subset with the property that for a finite sum, is called an additive basis, and the least number of summands required is called the degree (sometimes order) of the basis. Thus, the last theorem states that any set with positive Schnirelmann density is an additive basis. In this terminology, the set of squares is an additive basis of degree 4. (About an open problem for additive bases, see Erdős–Turán conjecture on additive bases.)
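A brute-force sketch of the degree-4 claim for small integers; this illustrates the definition rather than giving an efficient algorithm:

```python
def is_sum_of_squares(n, k):
    """True if n is a sum of at most k squares of non-negative integers."""
    if k == 0:
        return n == 0
    r = int(n ** 0.5)
    return any(is_sum_of_squares(n - i * i, k - 1) for i in range(r, -1, -1))

# Lagrange's four-square theorem: the squares form a basis of degree 4.
assert all(is_sum_of_squares(n, 4) for n in range(2000))
# Three squares do not suffice (7 is a classical counterexample), so the
# degree of the basis really is 4.
assert not is_sum_of_squares(7, 3)
print("every n < 2000 is a sum of at most four squares")
```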
Mann's theorem
Historically the theorems above were pointers to the following result, at one time known as the α + β hypothesis. It was used by Edmund Landau and was finally proved by Henry Mann in 1942.
Theorem. Let and be subsets of . In case that , we still have
An analogue of this theorem for lower asymptotic density was obtained by Kneser. At a later date, E. Artin and P. Scherk simplified the proof of Mann's theorem.
Waring's problem
Let and be natural numbers. Let . Define to be the number of non-negative integral solutions to the equation
and to be the number of non-negative integral solutions to the inequality
in the variables , respectively. Thus . We have
The volume of the -dimensional body defined by , is bounded by the volume of the hypercube of size , hence . The hard part is to show that this bound still works on the average, i.e.,
Lemma. (Linnik) For all there exists and a constant , depending only on , such that for all ,
for all
With this at hand, the following theorem can be elegantly proved.
Theorem. For all there exists for which .
We have thus established the general solution to Waring's Problem:
Corollary. For all there exists , depending only on , such that every positive integer can be expressed as the sum of at most many -th powers.
Schnirelmann's constant
In 1930 Schnirelmann used these ideas in conjunction with the Brun sieve to prove Schnirelmann's theorem, that any natural number greater than 1 can be written as the sum of not more than C prime numbers, where C is an effectively computable constant: Schnirelmann obtained C < 800000. Schnirelmann's constant is the lowest number C with this property.
Olivier Ramaré showed that Schnirelmann's constant is at most 7, improving the earlier upper bound of 19 obtained by Hans Riesel and R. C. Vaughan.
Schnirelmann's constant is at least 3; Goldbach's conjecture implies that this is the constant's actual value.
In 2013, Harald Helfgott proved Goldbach's weak conjecture for all odd numbers. Therefore, Schnirelmann's constant is at most 4.
Essential components
Khintchin proved that the sequence of squares, though of zero Schnirelmann density, when added to a sequence of Schnirelmann density between 0 and 1, increases the density:
This was soon simplified and extended by Erdős, who showed, that if A is any sequence with Schnirelmann density α and B is an additive basis of order k then
and this was improved by Plünnecke to
Sequences with this property, of increasing density less than one by addition, were named essential components by Khintchin. Linnik showed that an essential component need not be an additive basis as he constructed an essential component that has x^o(1) elements less than x. More precisely, the sequence has
elements less than x for some c < 1. This was improved by E. Wirsing to
For a while, it remained an open problem how many elements an essential component must have. Finally, Ruzsa determined that for every ε > 0 there is an essential component which has at most c(log x)1+ε elements up to x, but there is no essential component which has c(log x)1+o(1) elements up to x.
References
Has a proof of Mann's theorem and the Schnirelmann-density proof of Waring's conjecture.
Additive number theory
Mathematical constants | Schnirelmann density | [
"Mathematics"
] | 1,371 | [
"Mathematical constants",
"Mathematical objects",
"Numbers",
"nan"
] |
343,179 | https://en.wikipedia.org/wiki/Culm%20%28botany%29 | A culm is the aerial (above-ground) stem of a grass or sedge. It is derived from Latin , meaning "stalk." It originally referred to the stem of any type of plant.
In horticulture or agriculture, it is especially used to describe the stalk or woody stems of bamboo, cane or grain grasses.
Malting
In the production of malted grains, the culms refer to the rootlets of the germinated grains. The culms are normally removed in a process known as "deculming" after kilning when producing barley malt, but form an important part of the product when making sorghum or millet malt. These culms are very nutritious and are sold off as animal feed.
References
Plant morphology | Culm (botany) | [
"Biology"
] | 162 | [
"Plant morphology",
"Plants"
] |
343,225 | https://en.wikipedia.org/wiki/Sea%20ice | Sea ice arises as seawater freezes. Because ice is less dense than water, it floats on the ocean's surface (as does fresh water ice). Sea ice covers about 7% of the Earth's surface and about 12% of the world's oceans. Much of the world's sea ice is enclosed within the polar ice packs in the Earth's polar regions: the Arctic ice pack of the Arctic Ocean and the Antarctic ice pack of the Southern Ocean. Polar packs undergo a significant yearly cycling in surface extent, a natural process upon which depends the Arctic ecology, including the ocean's ecosystems. Due to the action of winds, currents and temperature fluctuations, sea ice is very dynamic, leading to a wide variety of ice types and features. Sea ice may be contrasted with icebergs, which are chunks of ice shelves or glaciers that calve into the ocean. Depending on location, sea ice expanses may also incorporate icebergs.
General features and dynamics
Sea ice does not simply grow and melt. During its lifespan, it is very dynamic. Due to the combined action of winds, currents, water temperature and air temperature fluctuations, sea ice expanses typically undergo a significant amount of deformation. Sea ice is classified according to whether or not it is able to drift and according to its age.
Fast ice versus drift (or pack) ice
Sea ice can be classified according to whether or not it is attached (or frozen) to the shoreline (or between shoals or to grounded icebergs). If attached, it is called landfast ice, or more often, fast ice (as in fastened). Alternatively and unlike fast ice, drift ice occurs further offshore in very wide areas and encompasses ice that is free to move with currents and winds. The physical boundary between fast ice and drift ice is the fast ice boundary. The drift ice zone may be further divided into a shear zone, a marginal ice zone and a central pack. Drift ice consists of floes, individual pieces of sea ice or more across. There are names for various floe sizes: small – ; medium – ; big – ; vast – ; and giant – more than . The term pack ice is used either as a synonym to drift ice, or to designate drift ice zone in which the floes are densely packed. The overall sea ice cover is termed the ice canopy from the perspective of submarine navigation.
Classification based on age
Another classification used by scientists to describe sea ice is based on age, that is, on its development stages. These stages are: new ice, nilas, young ice, first-year and old.
New ice, nilas and young ice
New ice is a general term used for recently frozen sea water that does not yet make up solid ice. It may consist of frazil ice (plates or spicules of ice suspended in water), slush (water saturated snow), or shuga (spongy white ice lumps a few centimeters across). Other terms, such as grease ice and pancake ice, are used for ice crystal accumulations under the action of wind and waves. When sea ice begins to form on a beach with a light swell, ice eggs up to the size of a football can be created.
Nilas designates a sea ice crust up to in thickness. It bends without breaking around waves and swells. Nilas can be further subdivided into dark nilas – up to in thickness and very dark and light nilas – over in thickness and lighter in color.
Young ice is a transition stage between nilas and first-year ice and ranges in thickness from to , Young ice can be further subdivided into grey ice – to in thickness and grey-white ice – to in thickness. Young ice is not as flexible as nilas, but tends to break under wave action. Under compression, it will either raft (at the grey ice stage) or ridge (at the grey-white ice stage).
First-year sea ice
First-year sea ice is ice that is thicker than young ice but has no more than one year growth. In other words, it is ice that grows in the fall and winter (after it has gone through the new ice – nilas – young ice stages and grows further) but does not survive the spring and summer months (it melts away). The thickness of this ice typically ranges from to . First-year ice may be further divided into thin ( to ), medium ( to ) and thick (>).
Old sea ice
Old sea ice is sea ice that has survived at least one melting season (i.e. one summer). For this reason, this ice is generally thicker than first-year sea ice. The thickness of old sea ice typically ranges from 2 to 4 m. Old ice is commonly divided into two types: second-year ice, which has survived one melting season and multiyear ice, which has survived more than one. (In some sources, old ice is more than two years old.) Multi-year ice is much more common in the Arctic than it is in the Antarctic. The reason for this is that sea ice in the south drifts into warmer waters where it melts. In the Arctic, much of the sea ice is land-locked.
Driving forces
While fast ice is relatively stable (because it is attached to the shoreline or the seabed), drift (or pack) ice undergoes relatively complex deformation processes that ultimately give rise to sea ice's typically wide variety of landscapes. Wind is the main driving force, along with ocean currents. The Coriolis force and sea ice surface tilt have also been invoked. These driving forces induce a state of stress within the drift ice zone. An ice floe converging toward another and pushing against it will generate a state of compression at the boundary between both. The ice cover may also undergo a state of tension, resulting in divergence and fissure opening. If two floes drift sideways past each other while remaining in contact, this will create a state of shear.
Deformation
Sea ice deformation results from the interaction between ice floes as they are driven against each other. The result may be of three types of features: 1) Rafted ice, when one piece is overriding another; 2) Pressure ridges, a line of broken ice forced downward (to make up the keel) and upward (to make the sail); and 3) Hummock, a hillock of broken ice that forms an uneven surface. A shear ridge is a pressure ridge that formed under shear – it tends to be more linear than a ridge induced only by compression. A new ridge is a recent feature – it is sharp-crested, with its side sloping at an angle exceeding 40 degrees. In contrast, a weathered ridge is one with a rounded crest and with sides sloping at less than 40 degrees. Stamukhi are yet another type of pile-up but these are grounded and are therefore relatively stationary. They result from the interaction between fast ice and the drifting pack ice.
Level ice is sea ice that has not been affected by deformation and is therefore relatively flat.
Leads and polynyas
Leads and polynyas are areas of open water that occur within sea ice expanses even though air temperatures are below freezing. They provide a direct interaction between the ocean and the atmosphere, which is important for the wildlife. Leads are narrow and linear, varying in width from meters to kilometers. During the winter, the water in leads quickly freezes up. They are also used for navigation purposes – even when refrozen, the ice in leads is thinner, allowing icebreakers access to an easier sail path and submarines to surface more easily. Polynyas are more uniform in size than leads and are also larger – two types are recognized: 1) Sensible-heat polynyas, caused by the upwelling of warmer water and 2) Latent-heat polynyas, resulting from persistent winds from the coastline.
Formation
Only the top layer of water needs to cool to the freezing point. Convection of the surface layer involves the top , down to the pycnocline of increased density.
In calm water, the first sea ice to form on the surface is a skim of separate crystals which initially are in the form of tiny discs, floating flat on the surface and of diameter less than . Each disc has its c-axis vertical and grows outwards laterally. At a certain point such a disc shape becomes unstable and the growing isolated crystals take on a hexagonal, stellar form, with long fragile arms stretching out over the surface. These crystals also have their c-axis vertical. The dendritic arms are very fragile and soon break off, leaving a mixture of discs and arm fragments. With any kind of turbulence in the water, these fragments break up further into random-shaped small crystals which form a suspension of increasing density in the surface water, an ice type called frazil or grease ice. In quiet conditions the frazil crystals soon freeze together to form a continuous thin sheet of young ice; in its early stages, when it is still transparent – that is the ice called nilas. Once nilas has formed, a quite different growth process occurs, in which water freezes on to the bottom of the existing ice sheet, a process called congelation growth. This growth process yields first-year ice.
In rough water, fresh sea ice is formed by the cooling of the ocean as heat is lost into the atmosphere. The uppermost layer of the ocean is supercooled to slightly below the freezing point, at which time tiny ice platelets (frazil ice) form. With time, this process leads to a mushy surface layer, known as grease ice. Frazil ice formation may also be started by snowfall, rather than supercooling. Waves and wind then act to compress these ice particles into larger plates, of several meters in diameter, called pancake ice. These float on the ocean surface and collide with one another, forming upturned edges. In time, the pancake ice plates may themselves be rafted over one another or frozen together into a more solid ice cover, known as consolidated pancake ice. Such ice has a very rough appearance on top and bottom.
If sufficient snow falls on sea ice to depress the freeboard below sea level, sea water will flow in and a layer of ice will form of mixed snow/sea water. This is particularly common around Antarctica.
Russian scientist Vladimir Vize (1886–1954) devoted his life to study the Arctic ice pack and developed the Scientific Prediction of Ice Conditions Theory, for which he was widely acclaimed in academic circles. He applied this theory in the field in the Kara Sea, which led to the discovery of Vize Island.
Yearly freeze and melt cycle
The annual freeze and melt cycle is set by the annual cycle of solar insolation and of ocean and atmospheric temperature and of variability in this annual cycle.
In the Arctic, the area of ocean covered by sea ice increases over winter from a minimum in September to a maximum in March or sometimes February, before melting over the summer. In the Antarctic, where the seasons are reversed, the annual minimum is typically in February and the annual maximum in September or October. The presence of sea ice abutting the calving fronts of ice shelves has been shown to influence glacier flow and potentially the stability of the Antarctic ice sheet.
The growth and melt rate are also affected by the state of the ice itself. During growth, the ice thickening due to freezing (as opposed to dynamics) is itself dependent on the thickness, so that the ice growth slows as the ice thickens. Likewise, during melt, thinner sea ice melts faster. This leads to different behaviour between multiyear and first year ice. In addition, melt ponds on the ice surface during the melt season lower the albedo such that more solar radiation is absorbed, leading to a feedback where melt is accelerated. The presence of melt ponds is affected by the permeability of the sea ice (i.e. whether meltwater can drain) and the topography of the sea ice surface (i.e. the presence of natural basins for the melt ponds to form in). First year ice is flatter than multiyear ice due to the lack of dynamic ridging, so ponds tend to have greater area. They also have lower albedo since they are on thinner ice, which blocks less of the solar radiation from reaching the dark ocean below.
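A minimal sketch of why thicker ice thickens more slowly, using the classical Stefan idealization in which the latent heat released by freezing at the ice bottom must be conducted through the existing ice. The material constants are standard textbook values, the 20 K temperature difference is an assumption, and the toy model ignores snow cover and ocean heat flux; it is not one of the operational models discussed later in the article.

```python
# Stefan-type idealization: latent heat released by freezing at the ice
# bottom must be conducted through the existing ice, giving
#   dH/dt = k_ice * (T_freeze - T_surface) / (rho_ice * L * H),
# so the growth rate falls off as the ice thickens.
k_ice, rho_ice, L = 2.2, 917.0, 3.34e5   # W/(m K), kg/m^3, J/kg
dT = 20.0                                # K, assumed temperature difference

for H in (0.1, 0.5, 1.0, 2.0):           # ice thickness in metres
    rate = k_ice * dT / (rho_ice * L * H)           # m/s
    print(f"H = {H:3.1f} m  ->  {rate * 86400 * 100:.2f} cm/day")
```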
Physical properties
Sea ice is a composite material made up of pure ice, liquid brine, air, and salt. The volumetric fractions of these components—ice, brine, and air—determine the key physical properties of sea ice, including thermal conductivity, heat capacity, latent heat, density, elastic modulus, and mechanical strength. Brine volume fraction depends on sea-ice salinity and temperature, while sea-ice salinity mainly depends on ice age and thickness. During the ice growth period, its bulk brine volume is typically below 5%. Air volume fraction during the ice growth period is typically around 1–2%, but may substantially increase upon ice warming. Air volume of sea ice can be as high as 15% in summer and 4% in autumn. Both brine and air volumes influence sea-ice density values, which are typically around 840–910 kg/m³ for first-year ice. Sea-ice density is a significant source of errors in sea-ice thickness retrieval using radar and laser satellite altimetry, resulting in uncertainties of 0.3–0.4 m.
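As a sketch of how brine volume might be estimated from bulk salinity and temperature, the snippet below uses the empirical fit commonly attributed to Frankenstein and Garner (1967); the coefficients are quoted from memory and should be treated as indicative rather than authoritative.

```python
def brine_volume_fraction(salinity_ppt, temperature_c):
    """Brine volume fraction (0..1) of sea ice at temperature_c (roughly
    -0.5 to -23 degrees C) and bulk salinity in parts per thousand, using
    coefficients commonly attributed to Frankenstein and Garner (1967);
    treat the numbers as indicative, not authoritative."""
    if temperature_c >= 0:
        raise ValueError("sea ice must be below the freezing point")
    v_b_ppt = salinity_ppt * (49.185 / abs(temperature_c) + 0.532)
    return v_b_ppt / 1000.0

# First-year ice with 5 ppt bulk salinity at -10 C: a few percent brine.
print(f"{brine_volume_fraction(5.0, -10.0):.3f}")   # ~0.027
```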
Monitoring and observations
Changes in sea ice conditions are best demonstrated by the rate of melting over time. A composite record of Arctic ice demonstrates that the floes' retreat began around 1900, experiencing more rapid melting beginning within the past 50 years. Satellite study of sea ice began in 1979 and became a much more reliable measure of long-term changes in sea ice. In comparison to the extended record, the sea-ice extent in the polar region by September 2007 was only half the recorded mass that had been estimated to exist within the 1950–1970 period.
Arctic sea ice extent ice hit an all-time low in September 2012, when the ice was determined to cover only 24% of the Arctic Ocean, offsetting the previous low of 29% in 2007. Predictions of when the first "ice free" Arctic summer might occur vary.
Antarctic sea ice extent gradually increased in the period of satellite observations, which began in 1979, until a rapid decline in southern hemisphere spring of 2016.
Effects of climate change
Sea ice provides an ecosystem for various polar species, particularly the polar bear, whose environment is threatened as global warming causes the ice to melt. Furthermore, the sea ice itself functions to help keep polar climates cool, since the ice exists in expansive enough amounts to maintain a cold environment. Thus, sea ice's relationship with global warming is cyclical; the ice helps to maintain cool climates, but as the global temperature increases, the ice melts and is less effective in keeping those climates cold. The bright, shiny surface (albedo) of the ice also serves a role in maintaining cooler polar temperatures by reflecting much of the sunlight that hits it back into space. As the sea ice melts, its surface area shrinks, diminishing the size of the reflective surface and therefore causing the Earth to absorb more of the sun's heat. As the ice melts, it lowers the albedo, causing more heat to be absorbed by the Earth and further increasing the amount of melting ice. Though the size of the ice floes is affected by the seasons, even a small change in global temperature can greatly affect the amount of sea ice, and due to the shrinking reflective surface that keeps the ocean cool, this sparks a cycle of ice shrinking and temperatures warming. As a result, the polar regions are the most susceptible places to climate change on the planet.
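A toy zero-dimensional energy-balance sketch of this feedback: lower albedo means more absorbed sunlight, a warmer surface, less ice and hence lower albedo still. All numbers are illustrative assumptions rather than measured polar values, and the model is far simpler than the climate models discussed later in the article.

```python
import numpy as np

# Toy zero-dimensional energy balance with a temperature-dependent albedo.
S0, sigma, eps = 1361.0, 5.67e-8, 0.61   # solar constant, Stefan-Boltzmann, emissivity

def albedo(T):
    """Ramp from ice-covered (0.6) to open ocean (0.1) as T rises."""
    return np.clip(0.6 - 0.5 * (T - 263.0) / 20.0, 0.1, 0.6)

def equilibrium_T(T0):
    T = T0
    for _ in range(10_000):
        absorbed = S0 / 4.0 * (1.0 - albedo(T))
        emitted = eps * sigma * T ** 4
        T += 1e-2 * (absorbed - emitted)   # relax toward radiative balance
    return T

# Cold and warm starting states settle into two distinct equilibria, the
# hallmark of the positive ice-albedo feedback.
print(f"{equilibrium_T(240.0):.1f} K, {equilibrium_T(300.0):.1f} K")
```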
Furthermore, sea ice affects the movement of ocean waters. In the freezing process, much of the salt in ocean water is squeezed out of the frozen crystal formations, though some remains frozen in the ice. This salt becomes trapped beneath the sea ice, creating a higher concentration of salt in the water beneath ice floes. This concentration of salt contributes to the salinated water's density and this cold, denser water sinks to the bottom of the ocean. This cold water moves along the ocean floor towards the equator, while warmer water on the ocean surface moves in the direction of the poles. This is referred to as "conveyor belt motion" and is a regularly occurring process.
Modelling
In order to gain a better understanding about the variability, numerical sea ice models are used to perform sensitivity studies. The two main ingredients are the ice dynamics and the thermodynamical properties (see Sea ice emissivity modelling, Sea ice growth processes and Sea ice thickness). There are many sea ice model computer codes available for doing this, including the CICE numerical suite.
Many global climate models (GCMs) have sea ice implemented in their numerical simulation scheme in order to capture the ice–albedo feedback correctly. Examples include:
The Louvain-la-Neuve Sea Ice Model is a numerical model of sea ice designed for climate studies and operational oceanography developed at Université catholique de Louvain. It is coupled to the ocean general circulation model OPA (Ocean Parallélisé) and is freely available as a part of the Nucleus for European Modeling of the Ocean.
The MIT General Circulation Model is a global circulation model developed at Massachusetts Institute of Technology includes a package for sea-ice. The code is freely available there.
The University Corporation for Atmospheric Research develops the Community Sea Ice Model.
CICE is run by the Los Alamos National Laboratory. The project is open source and designed as a component of GCM, although it provides a standalone mode.
The Finite-Element Sea-Ice Ocean Model developed at Alfred Wegener Institute uses an unstructured grid.
The neXt Generation Sea-Ice model (neXtSIM) is a Lagrangian model using an adaptive and unstructured triangular mesh and includes a new and unique class of rheological model called Maxwell-Elasto-Brittle to treat the ice dynamics. This model is developed at the Nansen Center in Bergen, Norway.
The Coupled Model Intercomparison Project offers a standard protocol for studying the output of coupled atmosphere-ocean general circulation models. The coupling takes place at the atmosphere-ocean interface where the sea ice may occur.
In addition to global modeling, various regional models deal with sea ice. Regional models are employed for seasonal forecasting experiments and for process studies.
Ecology
Sea ice is part of the Earth's biosphere. When sea water freezes, the ice is riddled with brine-filled channels which sustain sympagic organisms such as bacteria, algae, copepods and annelids, which in turn provide food for animals such as krill and specialised fish like the bald notothen, fed upon in turn by larger animals such as emperor penguins and minke whales.
A decline of seasonal sea ice puts the survival of Arctic species such as ringed seals and polar bears at risk.
Extraterrestrial presence
Other elements and compounds have been speculated to exist as oceans and seas on extraterrestrial planets. Scientists notably suspect the existence of "icebergs" of solid diamond and corresponding seas of liquid carbon on the ice giants, Neptune and Uranus. This is due to extreme pressure and heat at the core, that would turn carbon into a supercritical fluid.
See also
Ice types or features
Physics and chemistry
Applied sciences and engineering endeavours
References
Sea ice glossaries
External links
Daily maps of sea ice concentration from the University of Bremen
Sea ice maps from the National Snow and Ice Data Center
Aquatic ecology
Earth phenomena
Bodies of ice
Articles containing video clips
Oceanographical terminology
Cryosphere
Water ice | Sea ice | [
"Physics",
"Biology",
"Environmental_science"
] | 4,054 | [
"Physical phenomena",
"Earth phenomena",
"Sea ice",
"Hydrology",
"Cryosphere",
"Ecosystems",
"Aquatic ecology"
] |
343,230 | https://en.wikipedia.org/wiki/Autopilot | An autopilot is a system used to control the path of a vehicle without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems).
When present, an autopilot is often used in conjunction with an autothrottle, a system for controlling the power delivered by the engines.
An autopilot system is sometimes colloquially referred to as "George" (e.g. "we'll let George fly for a while"; "George is flying the plane now".). The etymology of the nickname is unclear: some claim it is a reference to American inventor George De Beeson (1897 - 1965), who patented an autopilot in the 1930s, while others claim that Royal Air Force pilots coined the term during World War II to symbolize that their aircraft technically belonged to King George VI.
First autopilots
In the early days of aviation, aircraft required the continuous attention of a pilot to fly safely. As aircraft range increased, allowing flights of many hours, the constant attention led to serious fatigue. An autopilot is designed to perform some of the pilot's tasks.
The first aircraft autopilot was developed by Sperry Corporation in 1912. The autopilot connected a gyroscopic heading indicator, and attitude indicator to hydraulically operated elevators and rudder. (Ailerons were not connected as wing dihedral was counted upon to produce the necessary roll stability.) It permitted the aircraft to fly straight and level on a compass course without a pilot's attention, greatly reducing the pilot's workload.
Lawrence Sperry, the son of famous inventor Elmer Sperry, demonstrated it in 1914 at an aviation safety contest held in Paris. Sperry demonstrated the credibility of the invention by flying the aircraft with his hands away from the controls and visible to onlookers. Elmer Sperry Jr., the son of Lawrence Sperry, and Capt Shiras continued work on the same autopilot after the war, and in 1930, they tested a more compact and reliable autopilot which kept a U.S. Army Air Corps aircraft on a true heading and altitude for three hours.
In 1930, the Royal Aircraft Establishment in the United Kingdom developed an autopilot called a pilots' assister that used a pneumatically spun gyroscope to move the flight controls.
The autopilot was further developed, to include, for example, improved control algorithms and hydraulic servomechanisms. Adding more instruments, such as radio-navigation aids, made it possible to fly at night and in bad weather. In 1947, a U.S. Air Force C-53 made a transatlantic flight, including takeoff and landing, completely under the control of an autopilot.
Bill Lear developed his F-5 automatic pilot, and automatic approach control system, and was awarded the Collier Trophy in 1949.
In the early 1920s, the Standard Oil tanker J.A. Moffet became the first ship to use an autopilot.
The Piasecki HUP-2 Retriever was the first production helicopter with an autopilot.
The lunar module digital autopilot of the Apollo program is an early example of a fully digital autopilot system in spacecraft.
Modern autopilots
Not all of the passenger aircraft flying today have an autopilot system. Older and smaller general aviation aircraft especially are still hand-flown, and even small airliners with fewer than twenty seats may also be without an autopilot as they are used on short-duration flights with two pilots. The installation of autopilots in aircraft with more than twenty seats is generally made mandatory by international aviation regulations. There are three levels of control in autopilots for smaller aircraft. A single-axis autopilot controls an aircraft in the roll axis only; such autopilots are also known colloquially as "wing levellers", reflecting their single capability. A two-axis autopilot controls an aircraft in the pitch axis as well as roll, and may be little more than a wing leveller with limited pitch oscillation-correcting ability; or it may receive inputs from on-board radio navigation systems to provide true automatic flight guidance once the aircraft has taken off until shortly before landing; or its capabilities may lie somewhere between these two extremes. A three-axis autopilot adds control in the yaw axis and is not required in many small aircraft.
Autopilots in modern complex aircraft are three-axis and generally divide a flight into taxi, takeoff, climb, cruise (level flight), descent, approach, and landing phases. Autopilots that automate all of these flight phases except taxi and takeoff exist. An autopilot-controlled approach to landing on a runway and controlling the aircraft on rollout (i.e. keeping it on the centre of the runway) is known as an Autoland, where the autopilot utilizes an Instrument Landing System (ILS) Cat IIIc approach, which is used when the visibility is zero. These approaches are available at many major airports' runways today, especially at airports subject to adverse weather phenomena such as fog. The aircraft can typically stop on their own, but will require the disengagement of the autopilot in order to exit the runway and taxi to the gate. An autopilot is often an integral component of a Flight Management System.
Modern autopilots use computer software to control the aircraft. The software reads the aircraft's current position, and then controls a flight control system to guide the aircraft. In such a system, besides classic flight controls, many autopilots incorporate thrust control capabilities that can control throttles to optimize the airspeed.
The autopilot in a modern large aircraft typically reads its position and the aircraft's attitude from an inertial guidance system. Inertial guidance systems accumulate errors over time. They will incorporate error reduction systems such as the carousel system that rotates once a minute so that any errors are dissipated in different directions and have an overall nulling effect. Error in gyroscopes is known as drift. This is due to physical properties within the system, be it mechanical or laser guided, that corrupt positional data. The disagreements between the two are resolved with digital signal processing, most often a six-dimensional Kalman filter. The six dimensions are usually roll, pitch, yaw, altitude, latitude, and longitude. Aircraft may fly routes that have a required performance factor, therefore the amount of error or actual performance factor must be monitored in order to fly those particular routes. The longer the flight, the more error accumulates within the system. Radio aids such as DME, DME updates, and GPS may be used to correct the aircraft position.
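A one-dimensional sketch of the idea: a dead-reckoned position with a velocity bias drifts steadily, and occasional noisy radio/GPS position fixes are blended in through a scalar Kalman filter. This is an illustration only; a real autopilot filter carries the six state dimensions mentioned above and very different noise models, and all the numbers below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

dt, steps = 1.0, 100
true_vel = 50.0          # m/s, true ground speed (assumed)
bias = 2.0               # m/s, inertial velocity bias (assumed)
fix_sigma = 15.0         # m, standard deviation of a position fix (assumed)

x_est, p_est = 0.0, 100.0            # position estimate and its variance
q, r = 4.0, fix_sigma ** 2           # process and measurement noise variances

true_pos = 0.0
for k in range(steps):
    true_pos += true_vel * dt
    # Predict: dead-reckon with the biased velocity.
    x_est += (true_vel + bias) * dt
    p_est += q
    # Update: blend in a noisy position fix every 10 s.
    if k % 10 == 0:
        z = true_pos + rng.normal(0.0, fix_sigma)
        gain = p_est / (p_est + r)
        x_est += gain * (z - x_est)
        p_est *= 1.0 - gain

print(f"drift without fixes: {bias * dt * steps:.0f} m")
print(f"error with fixes:    {abs(x_est - true_pos):.0f} m")
```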
Control Wheel Steering
An option midway between fully automated flight and manual flying is Control Wheel Steering (CWS). Although it is becoming less used as a stand-alone option in modern airliners, CWS is still a function on many aircraft today. Generally, an autopilot that is CWS equipped has three positions: off, CWS, and CMD. In CMD (Command) mode the autopilot has full control of the aircraft, and receives its input from either the heading/altitude setting, radio and navaids, or the FMS (Flight Management System). In CWS mode, the pilot controls the autopilot through inputs on the yoke or the stick. These inputs are translated to a specific heading and attitude, which the autopilot will then hold until instructed to do otherwise. This provides stability in pitch and roll. Some aircraft employ a form of CWS even in manual mode, such as the MD-11 which uses a constant CWS in roll. In many ways, a modern Airbus fly-by-wire aircraft in Normal Law is always in CWS mode. The major difference is that in this system the limitations of the aircraft are guarded by the flight control computer, and the pilot cannot steer the aircraft past these limits.
Computer system details
The hardware of an autopilot varies between implementations, but is generally designed with redundancy and reliability as foremost considerations. For example, the Rockwell Collins AFDS-770 Autopilot Flight Director System used on the Boeing 777 uses triplicated FCP-2002 microprocessors which have been formally verified and are fabricated in a radiation-resistant process.
Software and hardware in an autopilot are tightly controlled, and extensive test procedures are put in place.
Some autopilots also use design diversity. In this safety feature, critical software processes will not only run on separate computers, and possibly even using different architectures, but each computer will run software created by different engineering teams, often being programmed in different programming languages. It is generally considered unlikely that different engineering teams will make the same mistakes. As the software becomes more expensive and complex, design diversity is becoming less common because fewer engineering companies can afford it. The flight control computers on the Space Shuttle used this design: there were five computers, four of which redundantly ran identical software, and a fifth backup running software that was developed independently. The software on the fifth system provided only the basic functions needed to fly the Shuttle, further reducing any possible commonality with the software running on the four primary systems.
Stability augmentation systems
A stability augmentation system (SAS) is another type of automatic flight control system; however, instead of maintaining the aircraft's required altitude or flight path, the SAS will move the aircraft control surfaces to damp unacceptable motions. SAS automatically stabilizes the aircraft in one or more axes. The most common type of SAS is the yaw damper, which is used to reduce the Dutch roll tendency of swept-wing aircraft. Some yaw dampers are part of the autopilot system while others are stand-alone systems.
Yaw dampers use a sensor to detect how fast the aircraft is rotating (either a gyroscope or a pair of accelerometers), a computer/amplifier and an actuator. The sensor detects when the aircraft begins the yawing part of Dutch roll. A computer processes the signal from the sensor to determine the rudder deflection required to damp the motion. The computer tells the actuator to move the rudder in the opposite direction to the motion since the rudder has to oppose the motion to reduce it. The Dutch roll is damped and the aircraft becomes stable about the yaw axis. Because Dutch roll is an instability that is inherent in all swept-wing aircraft, most swept-wing aircraft need some sort of yaw damper.
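The damping action can be sketched as a rate-feedback loop in which the rudder command is proportional to, and opposes, the sensed yaw rate. This is an idealized discrete-time model with assumed gain and dynamics, not a certified control law:

# Idealized yaw damper: a lightly damped yaw oscillation is damped by a
# rudder command proportional to the sensed yaw rate. Values are assumed.
dt = 0.05        # integration step, seconds
omega = 2.0      # natural frequency of the oscillation, rad/s
k_damper = 1.5   # damper gain: rudder moment per unit yaw rate

yaw, yaw_rate = 0.1, 0.0   # initial yaw disturbance
for i in range(201):
    rudder = -k_damper * yaw_rate            # actuator opposes the sensed motion
    yaw_accel = -omega ** 2 * yaw + rudder   # restoring moment plus rudder moment
    yaw_rate += yaw_accel * dt
    yaw += yaw_rate * dt
    if i % 40 == 0:
        print(f"t={i * dt:4.1f} s  yaw={yaw:+.4f}")   # amplitude decays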
There are two types of yaw damper: the series yaw damper and the parallel yaw damper. The actuator of a parallel yaw damper will move the rudder independently of the pilot's rudder pedals while the actuator of a series yaw damper is clutched to the rudder control quadrant, and will result in pedal movement when the rudder moves.
Some aircraft have stability augmentation systems that will stabilize the aircraft in more than a single axis. The Boeing B-52, for example, requires both pitch and yaw SAS in order to provide a stable bombing platform. Many helicopters have pitch, roll and yaw SAS systems. Pitch and roll SAS systems operate much the same way as the yaw damper described above; however, instead of damping Dutch roll, they will damp pitch and roll oscillations to improve the overall stability of the aircraft.
Autopilot for ILS landings
Instrument-aided landings are defined in categories by the International Civil Aviation Organization, or ICAO. These are dependent upon the required visibility level and the degree to which the landing can be conducted automatically without input by the pilot.
CAT I – This category permits pilots to land with a decision height of 200 feet (61 m) and a forward visibility or Runway Visual Range (RVR) of 550 metres (1,800 ft). Autopilots are not required.
CAT II – This category permits pilots to land with a decision height between 200 and 100 feet (61 and 30 m) and a RVR of 300 metres (980 ft). Autopilots have a fail-passive requirement.
CAT IIIa – This category permits pilots to land with a decision height as low as 50 feet (15 m) and a RVR of 200 metres (660 ft). It needs a fail-passive autopilot. There must be only a 10^−6 probability of landing outside the prescribed area.
CAT IIIb – As IIIa, but with the addition of automatic rollout after touchdown, with the pilot taking control some distance along the runway. This category permits pilots to land with a decision height less than 50 feet or no decision height and a forward visibility of 250 feet (76 metres; compare this to aircraft size, some of which are now over 70 metres long) in Europe or 300 feet (91 m) in the United States. For a landing-without-decision aid, a fail-operational autopilot is needed. For this category some form of runway guidance system is needed: at least fail-passive, but it needs to be fail-operational for landing without decision height or for RVR below 100 metres.
CAT IIIc – As IIIb but without decision height or visibility minimums, also known as "zero-zero". Not yet implemented, as it would require the pilots to taxi in zero-zero visibility. An aircraft capable of a CAT IIIb landing that is equipped with autobrake would be able to stop fully on the runway but would have no ability to taxi.
Fail-passive autopilot: in case of failure, the aircraft stays in a controllable position and the pilot can take control of it to go around or finish landing. It is usually a dual-channel system.
Fail-operational autopilot: in case of a failure below alert height, the approach, flare and landing can still be completed automatically. It is usually a triple-channel system or dual-dual system.
Radio-controlled models
In radio-controlled modelling, and especially RC aircraft and helicopters, an autopilot is usually a set of extra hardware and software that deals with pre-programming the model's flight.
Flight Director
A flight director (FD) is a flight instrument that is overlaid on the attitude indicator that shows the pilot of an aircraft the attitude required to execute the desired flight path. While the flight director is separate from the autopilot, they are closely linked. With a flight plan programmed into the flight computer, the flight director will command rolls when turns are required.
Without a flight director, the autopilot is limited to more basic modes, such as maintaining an altitude or a heading, or turning on to a new heading when commanded by the pilot.
When the autopilot and flight director are used together, more complex autopilot modes are possible. The autopilot can follow flight director commands, thus following the flight plan route without pilot intervention.
See also
Acronyms and abbreviations in avionics
Autonomous aircraft
Uninterruptible autopilot
Dynamic positioning
Gyrocompass
Self-driving car
Cruise control
References
External links
"How Fast Can You Fly Safely", June 1933, Popular Mechanics page 858 photo of Sperry Automatic Pilot and drawing of its basic functions in flight when set
Avionics
Aircraft instruments
Uncrewed vehicles
American inventions
1912 introductions
Aircraft automation | Autopilot | [
"Technology",
"Engineering"
] | 3,137 | [
"Avionics",
"Automation",
"Measuring instruments",
"Aircraft instruments",
"Aircraft automation"
] |
343,257 | https://en.wikipedia.org/wiki/List%20of%20craters%20on%20Mars%3A%20A%E2%80%93G | This is a partial list of craters on Mars. There are hundreds of thousands of impact craters on Mars, but only some of them have names. This list here only contains named Martian craters starting with the letter A – G (see also lists for H – N and O – Z).
Large Martian craters (greater than 60 kilometers in diameter) are named after famous scientists and science fiction authors; smaller ones (less than 60 km in diameter) get their names from towns on Earth. Craters cannot be named for living people, and small crater names are not intended to be commemorative – that is, a small crater isn't actually named after a specific town on Earth, but rather its name comes at random from a pool of terrestrial place names, with some exceptions made for craters near landing sites. Latitude and longitude are given as planetographic coordinates with west longitude.
See also
List of catenae on Mars
List of craters on Mars
List of mountains on Mars
References
External links
USGS: Martian system nomenclature
USGS: Mars Nomenclature: Craters
Mars: A–G | List of craters on Mars: A–G | [
"Astronomy"
] | 221 | [
"Astronomy-related lists",
"Lists of impact craters"
] |
343,276 | https://en.wikipedia.org/wiki/Regenerative%20circuit | A regenerative circuit is an amplifier circuit that employs positive feedback (also known as regeneration or reaction). Some of the output of the amplifying device is applied back to its input to add to the input signal, increasing the amplification. One example is the Schmitt trigger (which is also known as a regenerative comparator), but the most common use of the term is in RF amplifiers, and especially regenerative receivers, to greatly increase the gain of a single amplifier stage.
The regenerative receiver was invented in 1912 and patented in 1914 by American electrical engineer Edwin Armstrong when he was an undergraduate at Columbia University. It was widely used between 1915 and World War II. Advantages of regenerative receivers include increased sensitivity with modest hardware requirements, and increased selectivity, because the Q of the tuned circuit is increased when the amplifying vacuum tube or transistor has its feedback loop around the tuned circuit (via a "tickler" winding or a tapping on the coil), since the feedback introduces some negative resistance.
Due partly to its tendency to radiate interference when oscillating, by the 1930s the regenerative receiver was largely superseded by other TRF receiver designs (for example "reflex" receivers) and especially by another Armstrong invention, the superheterodyne receiver; today the regenerative design is largely considered obsolete. Regeneration (now called positive feedback) is still widely used in other areas of electronics, such as in oscillators, active filters, and bootstrapped amplifiers.
A receiver circuit that used larger amounts of regeneration in a more complicated way to achieve even higher amplification, the superregenerative receiver, was also invented by Armstrong in 1922. It was never widely used in general commercial receivers, but due to its small parts count it was used in specialized applications. One widespread use during WWII was IFF transceivers, where a single tuned circuit completed the entire electronics system. It is still used in a few specialized low data rate applications, such as garage door openers, wireless networking devices, walkie-talkies and toys.
Regenerative receiver
The gain of any amplifying device, such as a vacuum tube, transistor, or op amp, can be increased by feeding some of the energy from its output back into its input in phase with the original input signal. This is called positive feedback or regeneration. Because of the large amplification possible with regeneration, regenerative receivers often use only a single amplifying element (tube or transistor). In a regenerative receiver the output of the tube or transistor is connected back to its own input through a tuned circuit (LC circuit). The tuned circuit allows positive feedback only at its resonant frequency. In regenerative receivers using only one active device, the same tuned circuit is coupled to the antenna and also serves to select the radio frequency to be received, usually by means of variable capacitance. In the regenerative circuit discussed here, the active device also functions as a detector; this circuit is also known as a regenerative detector. A regeneration control is usually provided for adjusting the amount of feedback (the loop gain). It is desirable for the circuit design to provide regeneration control that can gradually increase feedback to the point of oscillation and that provides control of the oscillation from small to larger amplitude and back to no oscillation without jumps of amplitude or hysteresis in control.
Two important attributes of a radio receiver are sensitivity and selectivity. The regenerative detector provides sensitivity and selectivity due to voltage amplification and the characteristics of a resonant circuit consisting of inductance and capacitance. The regenerative voltage amplification is A / (1 − AB), where A is the non-regenerative amplification and B is the portion of the output signal fed back to the L2 C2 circuit. As (1 − AB) becomes smaller the amplification increases. The Q of the tuned circuit (L2 C2) without regeneration is X / R, where X is the reactance of the coil and R represents the total dissipative loss of the tuned circuit. The positive feedback compensates the energy loss caused by R, so it may be viewed as introducing a negative resistance Rn to the tuned circuit. The Q of the tuned circuit with regeneration is X / (R − Rn). The regeneration increases the Q. Oscillation begins when Rn ≥ R, that is, when the loop gain AB reaches 1.
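These relationships can be made concrete with a short calculation. The sketch below evaluates the regenerative gain A / (1 − AB) and the enhanced Q as the feedback fraction B approaches the critical value 1/A; all component values are assumptions, and the cancelled loss is modelled as proportional to the loop gain:

# Regenerative amplification and Q enhancement versus feedback fraction B.
A = 9.2     # non-regenerative detection gain (illustrative)
X = 500.0   # coil reactance at resonance, ohms (assumed)
R = 10.0    # tuned-circuit loss resistance, ohms (assumed)

for B in (0.0, 0.05, 0.09, 0.105, 0.108):
    loop_gain = A * B
    gain = A / (1 - loop_gain)   # regenerative voltage amplification
    r_neg = R * loop_gain        # loss cancelled by feedback (modelling assumption)
    q = X / (R - r_neg)          # effective Q with regeneration
    print(f"B={B:.3f}  loop gain={loop_gain:.3f}  gain={gain:7.1f}  Q={q:8.1f}")

# Oscillation begins when the loop gain A*B reaches 1, here near B = 0.109.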
Regeneration can increase the detection gain of a detector by a factor of 1,700 or more. This is quite an improvement, especially for the low-gain vacuum tubes of the 1920s and early 1930s. The type 36 screen-grid tube (obsolete since the mid-1930s) had a non-regenerative detection gain (audio frequency plate voltage divided by radio frequency input voltage) of only 9.2 at 7.2 MHz, but in a regenerative detector, had detection gain as high as 7,900 at critical regeneration (non-oscillating) and as high as 15,800 with regeneration just above critical. The "... non-oscillating regenerative amplification is limited by the stability of the circuit elements, tube [or device] characteristics and [stability of] supply voltages which determine the maximum value of regeneration obtainable without self-oscillation". Intrinsically, there is little or no difference in the gain and stability available from vacuum tubes, JFETs, MOSFETs or bipolar junction transistors (BJTs).
A major improvement in stability and a small improvement in available gain for reception of CW radiotelegraphy is provided by the use of a separate oscillator, known as a heterodyne oscillator or beat oscillator. Providing the oscillation separately from the detector allows the regenerative detector to be set for maximum gain and selectivity - which is always in the non-oscillating condition. Interaction between the detector and the beat oscillator can be minimized by operating the beat oscillator at half of the receiver operating frequency, using the second harmonic of the beat oscillator in the detector.
AM reception
For AM reception, the gain of the loop is adjusted so it is just below the level required for oscillation (a loop gain of just less than one). The result of this is to greatly increase the gain of the amplifier at the bandpass frequency (resonant frequency), while not increasing it at other frequencies. So the incoming radio signal is amplified by a large factor, 10^3 - 10^5, increasing the receiver's sensitivity to weak signals. The high gain also has the effect of reducing the circuit's bandwidth (increasing the Q) by an equal factor, increasing the selectivity of the receiver.
CW reception (autodyne mode)
For the reception of CW radiotelegraphy (Morse code), the feedback is increased just to the point of oscillation. The tuned circuit is adjusted to provide typically 400 to 1000 Hertz difference between the receiver oscillation frequency and the desired transmitting station's signal frequency. The two frequencies beat in the nonlinear amplifier, generating heterodyne or beat frequencies. The difference frequency, typically 400 to 1000 Hertz, is in the audio range; so it is heard as a tone in the receiver's speaker whenever the station's signal is present.
Demodulation of a signal in this manner, by use of a single amplifying device as oscillator and mixer simultaneously, is known as autodyne reception. The term autodyne predates multigrid tubes and is not applied to use of tubes specifically designed for frequency conversion.
SSB reception
For the reception of single-sideband (SSB) signals, the circuit is also adjusted to oscillate as in CW reception. The tuning is adjusted until the demodulated voice is intelligible.
Advantages and disadvantages
Regenerative receivers require fewer components than other types of receiver circuit, such as the TRF and superheterodyne. The circuit's advantage was that it got much more amplification (gain) out of the expensive vacuum tubes, thus reducing the number of tubes required and therefore the cost of a receiver. Early vacuum tubes had low gain and tended to oscillate at radio frequencies (RF). TRF receivers often required 5 or 6 tubes; each stage requiring tuning and neutralization, making the receiver cumbersome, power hungry, and hard to adjust. A regenerative receiver, by contrast, could often provide adequate reception with the use of only one tube. In the 1930s the regenerative receiver was replaced by the superheterodyne circuit in commercial receivers due to the superheterodyne's superior performance and the falling cost of tubes. Since the advent of the transistor in 1947, the low cost of active devices has removed most of the advantage of the circuit. However, in recent years the regenerative circuit has seen a modest comeback in receivers for low cost digital radio applications such as garage door openers, keyless locks, RFID readers and some cell phone receivers.
A disadvantage of this receiver, especially in designs that couple the detector tuned circuit to the antenna, is that the regeneration (feedback) level must be adjusted when the receiver is tuned to a different frequency. The antenna impedance varies with frequency, changing the loading of the input tuned circuit by the antenna, requiring the regeneration to be adjusted. In addition, the Q of the detector tuned circuit components vary with frequency, requiring adjustment of the regeneration control.
A disadvantage of the single active device regenerative detector in autodyne operation is that the local oscillation causes the operating point to move significantly away from the ideal operating point, resulting in the detection gain being reduced.
Another drawback is that when the circuit is adjusted to oscillate it can radiate a signal from its antenna, so it can cause interference to other nearby receivers. Adding an RF amplifier stage between the antenna and the regenerative detector can reduce unwanted radiation, but would add expense and complexity.
Other shortcomings of regenerative receivers are the sensitive and unstable tuning. These problems have the same cause: a regenerative receiver's gain is greatest when it operates on the verge of oscillation, and in that condition, the circuit behaves chaotically. Simple regenerative receivers electrically couple the antenna to the detector tuned circuit, resulting in the electrical characteristics of the antenna influencing the resonant frequency of the detector tuned circuit. Any movement of the antenna or large objects near the antenna can change the tuning of the detector.
History
The inventor of FM radio, Edwin Armstrong, filed US patent 1113149 in 1913, covering the regenerative circuit, while he was a junior in college. He patented the superregenerative circuit in 1922, and the superheterodyne receiver in 1918.
Lee De Forest filed US patent 1170881 in 1914, which became the cause of a contentious lawsuit with Armstrong, whose patent for the regenerative circuit had been issued in 1914. The lawsuit lasted until 1934, winding its way through the appeals process and ending up at the Supreme Court. Armstrong won the first case, lost the second, stalemated at the third, and then lost the final round at the Supreme Court.
At the time the regenerative receiver was introduced, vacuum tubes were expensive and consumed much power, with the added expense and encumbrance of heavy batteries. So this design, getting most gain out of one tube, filled the needs of the growing radio community and immediately thrived. Although the superheterodyne receiver is the most common receiver in use today, the regenerative radio made the most out of very few parts.
In World War II the regenerative circuit was used in some military equipment. An example is the German field radio "Torn.E.b". Regenerative receivers needed far fewer tubes and less power consumption for nearly equivalent performance.
A related circuit, the superregenerative detector, found several highly important military uses in World War II in Friend or Foe identification equipment and in the top-secret proximity fuze. An example here is the miniature RK61 thyratron marketed in 1938, which was designed specifically to operate like a vacuum triode below its ignition voltage, allowing it to amplify analog signals as a self-quenching superregenerative detector in radio control receivers, and was the major technical development which led to the wartime development of radio-controlled weapons and the parallel development of radio controlled modelling as a hobby.
In the 1930s, the superheterodyne design began to gradually supplant the regenerative receiver, as tubes became far less expensive. In Germany the design was still used in the millions of mass-produced German "peoples receivers" (Volksempfänger) and "German small receivers" (DKE, Deutscher Kleinempfänger). Even after WWII, the regenerative design was still present in early after-war German minimal designs along the lines of the "peoples receivers" and "small receivers", dictated by lack of materials. Frequently German military tubes like the "RV12P2000" were employed in such designs. There were even superheterodyne designs, which used the regenerative receiver as a combined IF and demodulator with fixed regeneration. The superregenerative design was also present in early FM broadcast receivers around 1950. Later it was almost completely phased out of mass production, remaining only in hobby kits, and some special applications, like gate openers.
Superregenerative receiver
The superregenerative receiver uses a second lower-frequency oscillation (within the same stage or by using a second oscillator stage) to provide single-device circuit gains of around one million. This second oscillation periodically interrupts or "quenches" the main RF oscillation. Ultrasonic quench rates between 30 and 100 kHz are typical. After each quenching, RF oscillation grows exponentially, starting from the tiny energy picked up by the antenna plus circuit noise. The amplitude reached at the end of the quench cycle (linear mode) or the time taken to reach limiting amplitude (log mode) depends on the strength of the received signal from which exponential growth started. A low-pass filter in the audio amplifier filters the quench and RF frequencies from the output, leaving the AM modulation. This provides a crude but very effective automatic gain control (AGC).
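The logarithmic mode can be illustrated numerically: if oscillation grows exponentially from the received signal amplitude after each quench, the time taken to reach the limiting amplitude varies with the logarithm of the input signal. The growth rate, limiting amplitude and quench period below are assumptions chosen only to show the shape of the response:

import math

# Log-mode superregenerative detector: after each quench, oscillation grows
# as a_in * exp(g * t) until it reaches a limiting amplitude.
g = 2.0e6       # exponential growth rate of oscillation, 1/s (assumed)
a_limit = 1.0   # limiting oscillation amplitude, volts (assumed)
quench = 20e-6  # quench period: a 50 kHz quench rate (assumed)

for a_in in (1e-9, 1e-7, 1e-5, 1e-3):         # received amplitudes, volts
    t_rise = math.log(a_limit / a_in) / g     # time to reach the limit
    duty = max(0.0, (quench - t_rise) / quench)   # fraction of cycle at the limit
    print(f"input {a_in:.0e} V: rise {t_rise * 1e6:5.2f} us, limit duty {duty:.2f}")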
Advantages and applications
Superregenerative detectors work well for AM and can also be used for wide-band signals such as FM, where they perform "slope detection". Regenerative detectors work well for narrow-band signals, especially for CW and SSB, which need a heterodyne oscillator or BFO. A superregenerative detector does not provide a usable heterodyne oscillation (even though the superregen always self-oscillates), so CW (Morse code) and SSB (single sideband) signals cannot be received properly.
Superregeneration is most valuable above 27 MHz, and for signals where broad tuning is desirable. The superregen uses many fewer components for nearly the same sensitivity as more complex designs. It is easily possible to build superregen receivers which operate at microwatt power levels, in the 30 to 6,000 MHz range. It removes the need for the operator to manually adjust regeneration level to just below the point of oscillation - the circuit automatically is taken out of oscillation periodically, but with the disadvantage that small amounts of interference may be a problem for others. These are ideal for remote-sensing applications or where long battery life is important. For many years, superregenerative circuits have been used for commercial products such as garage-door openers, radar detectors, microwatt RF data links, and very low cost walkie-talkies.
Because the superregenerative detectors tend to receive the strongest signal and ignore other signals in the nearby spectrum, the superregen works best with bands that are relatively free of interfering signals. Due to Nyquist's theorem, its quenching frequency must be at least twice the signal bandwidth. But quenching with overtones acts further as a heterodyne receiver mixing additional unneeded signals from those bands into the working frequency. Thus the overall bandwidth of superregenerator cannot be less than 4 times that of the quench frequency, assuming the quenching oscillator produces an ideal sine wave.
See also
Audion receiver
Tuned electrical circuit
Q multiplier
References
History of radio in 1925. Has May 5, 1924, appellate decision by Josiah Alexander Van Orsdel in De Forest v Armstrong, pp 46–55. Appellate court credited De Forest with the regenerative circuit: "The decisions of the Commissioner are reversed and priority awarded to De Forest." p 55.
Ulrich L. Rohde, Ajay Poddar, "A Unifying Theory and Characterization of Super-Regenerative Receiver (SRR)", www.researchgate.net/publication/4317999_A_Unifying_Theory_and_Characterization_of_Super-Regenerative_Receiver_(SRR)
External links
Some Recent Developments in the Audion Receiver by EH Armstrong, Proceedings of the IRE (Institute of Radio Engineers), volume 3, 1915, pp. 215–247.
a one transistor regenerative receiver
Armstrong v. De Forest Radio Telephone & Telegraph Co. (2nd Cir. 1926) 10 F.2d 727, February 8, 1926; cert denied 270 U.S. 663, 46 S.Ct. 471. opinion on leagle.com
Armstrong v. De Forest, 13 F.2d 438 (2d Cir. 1926)
Radio electronics
Electronic circuits
History of radio
Receiver (radio) | Regenerative circuit | [
"Engineering"
] | 3,651 | [
"Radio electronics",
"Electronic engineering",
"Electronic circuits",
"Receiver (radio)"
] |
343,286 | https://en.wikipedia.org/wiki/Toffoli%20gate | In logic circuits, the Toffoli gate, also known as the CCNOT gate (“controlled-controlled-not”), invented by Tommaso Toffoli, is a CNOT gate with two control qubits and one target qubit. That is, the target qubit (third qubit) will be inverted if the first and second qubits are both 1. It is a universal reversible logic gate, which means that any classical reversible circuit can be constructed from Toffoli gates.
The truth table and matrix are as follows:

INPUT (a, b, c) → OUTPUT (a, b, c ⊕ ab)
0 0 0 → 0 0 0
0 0 1 → 0 0 1
0 1 0 → 0 1 0
0 1 1 → 0 1 1
1 0 0 → 1 0 0
1 0 1 → 1 0 1
1 1 0 → 1 1 1
1 1 1 → 1 1 0

The matrix is the 8 × 8 permutation matrix that is the identity except for its last two rows and columns, which are exchanged.
Background
An input-consuming logic gate L is reversible if it meets the following conditions: (1) L(x) = y is a gate where for any output y, there is a unique input x; (2) The gate L is reversible if there is a gate L´(y) = x which maps y to x, for all y.
An example of a reversible logic gate is a NOT, which can be described from its truth table below:

INPUT → OUTPUT
0 → 1
1 → 0
The common AND gate is not reversible, because the inputs 00, 01 and 10 are all mapped to the output 0.
Reversible gates have been studied since the 1960s. The original motivation was that reversible gates dissipate less heat (or, in principle, no heat).
More recent motivation comes from quantum computing. In quantum mechanics the quantum state can evolve in two ways: by Schrödinger's equation (unitary transformations), or by their collapse. Logic operations for quantum computers, of which the Toffoli gate is an example, are unitary transformations and therefore evolve reversibly.
Hardware description
The classical Toffoli gate is implemented in a hardware description language such as Verilog:
module toffoli_gate (
  input u1, input u2, input in,
  output v1, output v2, output out);

  // Controls pass through unchanged; the target flips only when both are 1.
  assign v1 = u1;
  assign v2 = u2;
  assign out = in ^ (u1 & u2);

endmodule
Universality and Toffoli gate
Any reversible gate that consumes its inputs and allows all input computations must have no more input bits than output bits, by the pigeonhole principle. For one input bit, there are two possible reversible gates. One of them is NOT. The other is the identity gate, which maps its input to the output unchanged. For two input bits, the only non-trivial gate is the controlled NOT gate (CNOT), which XORs the first bit to the second bit and leaves the first bit unchanged.
Unfortunately, there are reversible functions that cannot be computed using just those gates. For example, AND cannot be achieved by those gates. In other words, the set consisting of NOT and XOR gates is not universal. To compute an arbitrary function using reversible gates, the Toffoli gate, proposed in 1980 by Toffoli, can indeed achieve the goal. It can be also described as mapping bits {a, b, c} to {a, b, c XOR (a AND b)}. This can also be understood as a modulo operation on bit c: {a, b, c} → {a, b, (c + ab) mod 2}, often written as {a, b, c} → {a, b, c ⨁ ab}.
The Toffoli gate is universal; this means that for any Boolean function f(x1, x2, ..., xm), there is a circuit consisting of Toffoli gates that takes x1, x2, ..., xm and some extra bits set to 0 or 1 to outputs x1, x2, ..., xm, f(x1, x2, ..., xm), and some extra bits (called garbage). A NOT gate, for example, can be constructed from a Toffoli gate by setting the three input bits to {a, 1, 1}, making the third output bit (1 XOR (a AND 1)) = NOT a; (a AND b) is the third output bit from {a, b, 0}. Essentially, this means that one can use Toffoli gates to build systems that will perform any desired Boolean function computation in a reversible manner.
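These constructions are easy to check directly on classical bits; a minimal sketch, following the mappings given above:

def toffoli(a, b, c):
    # Maps (a, b, c) to (a, b, c XOR (a AND b)).
    return a, b, c ^ (a & b)

def not_gate(a):
    return toffoli(a, 1, 1)[2]   # third output of {a, 1, 1} is NOT a

def and_gate(a, b):
    return toffoli(a, b, 0)[2]   # third output of {a, b, 0} is a AND b

assert [not_gate(x) for x in (0, 1)] == [1, 0]
assert [and_gate(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
# Reversibility: applying the gate twice restores the original bits.
assert all(toffoli(*toffoli(a, b, c)) == (a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))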
Related logic gates
The Fredkin gate is a universal reversible 3-bit gate that swaps the last two bits if the first bit is 1; a controlled-swap operation.
The n-bit Toffoli gate is a generalization of the Toffoli gate. It takes n bits x1, x2, ..., xn as inputs and outputs n bits. The first n − 1 output bits are just x1, ..., xn−1. The last output bit is (x1 AND ... AND xn−1) XOR xn.
The Toffoli gate can be realized by five two-qubit quantum gates, but it can be shown that it is not possible using fewer than five.
Another universal gate, the Deutsch gate, can be realized by five optical pulses with neutral atoms. The Deutsch gate is a universal gate for quantum computing.
The Margolus gate (named after Norman Margolus), also called simplified Toffoli, is very similar to a Toffoli gate but with a −1 in the diagonal: RCCX = diag(1, 1, 1, 1, 1, −1, X). The Margolus gate is also universal for reversible circuits and acts very similar to a Toffoli gate, with the advantage that it can be constructed with about half of the CNOTs compared to the Toffoli gate.
The iToffoli gate was implemented in superconducting qubits with pair-wise coupling by simultaneously applying noncommuting operations.
Relation to quantum computing
Any reversible gate can be implemented on a quantum computer, and hence the Toffoli gate is also a quantum operator. However, the Toffoli gate cannot be used for universal quantum computation, though it does mean that a quantum computer can implement all possible classical computations. The Toffoli gate has to be implemented along with some inherently quantum gate(s) in order to be universal for quantum computation. In fact, any single-qubit gate with real coefficients that can create a nontrivial quantum state suffices.
A Toffoli gate based on quantum mechanics was successfully realized in January 2009 at the University of Innsbruck, Austria. While the implementation of an n-qubit Toffoli with circuit model requires 2n CNOT gates, the best known upper bound stands at 6n − 12 CNOT gates. It has been suggested that trapped Ion Quantum computers may be able to implement an n-qubit Toffoli gate directly. The application of many-body interaction could be used for direct operation of the gate in trapped ions, Rydberg atoms and superconducting circuit implementations. Following the dark-state manifold, Khazali-Mølmer Cn-NOT gate operates with only three pulses, departing from the circuit model paradigm. The iToffoli gate was implemented in a single step using three superconducting qubits with pair-wise coupling.
See also
Controlled NOT gate
Fredkin gate
Reversible computing
Bijection
Quantum computing
Quantum logic gate
Quantum programming
Adiabatic logic
References
External links
CNOT and Toffoli Gates in Multi-Qubit Setting at the Wolfram Demonstrations Project.
Logic gates
Quantum gates
Reversible computing
Italian inventions | Toffoli gate | [
"Physics"
] | 1,581 | [
"Spacetime",
"Reversible computing",
"Physical quantities",
"Time"
] |
343,325 | https://en.wikipedia.org/wiki/60%20%28number%29 | 60 (sixty) () is the natural number following 59 and preceding 61. Being three times 20, it is called threescore in older literature (kopa in Slavic, Schock in Germanic).
In mathematics
60 is the 4th superior highly composite number, the 4th colossally abundant number, the 9th highly composite number, a unitary perfect number, and an abundant number. It is the smallest number divisible by the numbers 1 to 6.
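Several of these properties can be verified directly from the definitions; a brief sketch:

from math import gcd

divisors = [d for d in range(1, 61) if 60 % d == 0]
print(len(divisors))             # 12 divisors: 1, 2, 3, 4, 5, 6, 10, 12, ...
print(sum(divisors) - 60 > 60)   # True: 60 is abundant (proper divisors sum to 108)

lcm = 1
for k in range(1, 7):            # least common multiple of 1..6
    lcm = lcm * k // gcd(lcm, k)
print(lcm)                       # 60, the smallest number divisible by 1 to 6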
The smallest group that is not solvable is the alternating group A5, which has 60 elements.
There are 60 one-sided hexominoes, the polyominoes made from six squares.
There are 60 seconds in a minute, as well as 60 minutes in a degree.
In science and technology
The first fullerene to be discovered was buckminsterfullerene C60, an allotrope of carbon with 60 atoms in each molecule, arranged in a truncated icosahedron. This ball is known as a buckyball, and looks like a soccer ball.
The atomic number of neodymium is 60, and cobalt-60 (60Co) is a radioactive isotope of cobalt.
The electrical utility frequency in western Japan, South Korea, Taiwan, the Philippines, the United States, and several other countries in the Americas is 60 Hz.
An exbibyte (sometimes called exabyte) is 2^60 bytes.
Cultural number systems
The Babylonian cuneiform numerals had a base of 60, inherited from the Sumerian and Akkadian civilizations, and possibly motivated by the large number of divisors that 60 has. The sexagesimal measurement of time and of geometric angles is a legacy of the Babylonian system.
The number system in the Mali Empire was based on 60, reflected in the counting system of the Maasina Fulfulde, a variant of the Fula language spoken in contemporary Mali. The Ekagi of Western New Guinea used base 60, and the sexagenary cycle plays a role in Chinese calendar and numerology.
In the Slavic and Baltic languages of the former Polish–Lithuanian Commonwealth, 60 has its own name, kopa; cognate terms in Germanic languages (such as the German Schock) refer to 60 = 5 dozen = a small gross. This quantity was used in international medieval treaties, e.g. for the ransom of captured Teutonic Knights.
In religion
In Hinduism, the 60th birthday of a man is called Sashti poorthi. A ceremony called Sashti (60) Abda (years) Poorthi (completed) in Sanskrit is conducted to felicitate this birthday. It represents a milestone in his life. There are 60 years mentioned in the historic Indian calendars.
In other fields
It is:
In time, the number of seconds in a minute, and the number of minutes in an hour. (a legacy of the Babylonian number system)
The number of feet in the standard measurement used to evaluate an automotive launch on a dragstrip, as the time taken to travel the first 60 feet of the track.
The number of miles per hour an automobile accelerates to from rest (0-60) as one of the standard measurements of performance
The number of years in a sexagenary cycle
In years of marriage, the diamond wedding anniversary
The age for senior citizens in some cultures
See also
List of highways numbered 60
References
External links
Integers | 60 (number) | [
"Mathematics"
] | 678 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
343,334 | https://en.wikipedia.org/wiki/70%20%28number%29 | 70 (seventy) is the natural number following 69 and preceding 71.
Mathematics
Properties of the integer
70 is the fourth discrete sphenic number, as the first of the form 2 × 5 × q with q a prime greater than 5. It is the smallest weird number, a natural number that is abundant but not semiperfect, where it is also the second-smallest primitive abundant number, after 20. 70 is in equivalence with the sum between the smallest number that is the sum of two abundant numbers, and the largest that is not (24, 46).
70 is the tenth Erdős–Woods number, since it is possible to find sequences of seventy consecutive integers such that each inner member shares a factor with either the first or the last member. It is also the sixth Pell number, following the tenth prime number 29, in the sequence (0, 1, 2, 5, 12, 29, 70, ...).
70 is a palindromic number in bases 9 (77), 13 (55) and 34 (22).
Happy number
70 is the thirteenth happy number in decimal, where 7 is the first such number greater than 1 in base ten: the sum of squares of its digits eventually reduces to 1. For both 7 and 70, the iteration runs 7 (or 70) → 49 → 97 → 130 → 10 → 1, as reproduced in the sketch after this list.
97, which reduces from the sum of squares of digits of 49, is the only prime after 7 in the successive sums of squares of digits (7, 49, 97, 130, 10) before reducing to 1. More specifically, 97 is also the seventh happy prime in base ten.
70 = 2 × 5 × 7 simplifies to 7 × 10, or the product of the first happy prime in decimal, and the base (10).
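The iteration described above takes only a few lines to reproduce; a minimal sketch:

def trajectory(n):
    # Iterate the sum of squares of digits until reaching 1 or a cycle.
    seen = []
    while n != 1 and n not in seen:
        seen.append(n)
        n = sum(int(d) ** 2 for d in str(n))
    return seen + [n]

print(trajectory(7))    # [7, 49, 97, 130, 10, 1]
print(trajectory(70))   # [70, 49, 97, 130, 10, 1]

happy = [n for n in range(1, 100) if trajectory(n)[-1] == 1]
print(happy[:13])       # 70 is the thirteenth happy number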
Aliquot sequence
70 contains an aliquot sum of 74, in an aliquot sequence of four composite numbers (70, 74, 40, 50, 43) in the prime 43-aliquot tree; the chain is verified in the sketch after this list.
The composite index of 70 is 50, which is the first non-trivial member of the 43-aliquot tree.
40, the Euler totient of 100, is the second non-trivial member of the 43-aliquot tree.
The composite index of 100 is 74 (the aliquot part of 70), the third non-trivial member of the 43-aliquot tree.
The sum 43 + 50 + 40 = 133 represents the one-hundredth composite number, where the sum of all members in this aliquot sequence up to 70 is the fifty-ninth prime, 277 (this prime index value represents the seventeenth prime number and seventh super-prime, 59).
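A short sketch, using only the definition of the aliquot sum (the sum of proper divisors), reproduces the chain and its total:

def aliquot(n):
    return sum(d for d in range(1, n) if n % d == 0)

chain = [70]
while chain[-1] != 1:
    chain.append(aliquot(chain[-1]))
print(chain[:5])        # [70, 74, 40, 50, 43]
print(sum(chain[:5]))   # 277, the fifty-ninth prime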
Figurate numbers
70 is the seventh pentagonal number.
70 is also the fourth 13-gonal (tridecagonal) number.
70 is the fifth pentatope number.
The sum of the first seven prime numbers aside from 7 (i.e., 2, 3, 5, 11, …, 19) is 70; the first four primes in this sequence sum to 21 = 3 × 7, where the sum of the sixth, seventh and eighth indexed primes (in the sequence of prime numbers) 13 + 17 + 19 is the seventh square number, 49.
Central binomial coefficient
70 is the fourth central binomial coefficient, preceding 252, as the number of ways to choose 4 objects out of 8 if order does not matter; this is in equivalence with the number of possible values of an 8-bit binary number for which half the bits are on, and half are off.
Geometric properties
7-simplex
In seven dimensions, the number of tetrahedral cells in a 7-simplex is 70. This makes 70 the central element in a seven by seven matrix configuration of a 7-simplex in seven-dimensional space:
Aside from the 7-simplex, there are a total of seventy other uniform 7-polytopes with A7 symmetry. The 7-simplex can be constructed as the join of a point and a 6-simplex (whose symmetry order is 7!), where the 6-simplex has a total of seventy three-dimensional and two-dimensional elements (there are thirty-five 3-simplex cells, and thirty-five faces that are triangular).
70 is also the fifth pentatope number, as the number of 3-dimensional unit spheres which can be packed into a 4-simplex (or four-dimensional analogue of the regular tetrahedron) of edge-length 5.
Leech lattice
The sum of the first 24 squares starting from 1 is 70^2 = 4900, i.e. a square pyramidal number. This is the only non-trivial solution to the cannonball problem, and it relates 70 to the Leech lattice in twenty-four dimensions and thus to string theory.
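The identity is a one-line check; a minimal sketch:

# 1^2 + 2^2 + ... + 24^2 = 4900 = 70^2, the cannonball problem.
total = sum(k * k for k in range(1, 25))
print(total, total == 70 ** 2)   # 4900 True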
In religion
In Jewish tradition, Ptolemy II Philadelphus ordered 72 Jewish elders to translate the Torah into Greek; the result was the Septuagint (from the Latin for "seventy"). The Roman numeral seventy, LXX, is the scholarly symbol for the Septuagint.
In Islamic history and in Islamic interpretation, the number 70 or 72 is most often hyperbole for an innumerably large amount:
There are 70 dead among the Prophet Muhammad's adversaries during the Battle of Badr.
70 of the Prophet Muhammad's followers are martyred at the Battle of Uhud.
In Shia Islam, there are 70 martyrs among Imam Hussein's followers during the Battle of Karbala.
In other fields
In some traditions, 70 years of marriage is marked by a platinum wedding anniversary.
Under Social Security (United States), the age at which a person can receive the maximum retirement benefits (and may do so and continue working without reduction of benefits).
Number name
Several languages, especially ones with vigesimal number systems, do not have a specific word for 70: for example, French soixante-dix, literally "sixty-ten"; or Danish halvfjerds, short for halvfjerdsindstyve. (For French, this is true only in France; other French-speaking regions such as Belgium, Switzerland, Aosta Valley and Jersey use septante.)
Notes
References
External links
Integers
"Mathematics"
] | 1,223 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
343,338 | https://en.wikipedia.org/wiki/80%20%28number%29 | 80 (eighty) is the natural number following 79 and preceding 81.
In mathematics
80 is:
the sum of Euler's totient function φ(x) over the first sixteen integers.
a semiperfect number, since adding up some subsets of its divisors (e.g., 1, 4, 5, 10, 20 and 40) gives 80.
a ménage number.
palindromic in bases 3 (2222), 6 (212), 9 (88), 15 (55), 19 (44) and 39 (22).
a repdigit in bases 3, 9, 15, 19 and 39.
the sum of the first 4 twin prime pairs ((3 + 5) + (5 + 7) + (11 + 13) + (17 + 19)).
The Pareto principle (also known as the 80-20 rule) states that, for many events, roughly 80% of the effects come from 20% of the causes.
Every solvable configuration of the 15 puzzle can be solved in no more than 80 single-tile moves.
References
External links
wiktionary:eighty for 80 in other languages.
Integers | 80 (number) | [
"Mathematics"
] | 250 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
343,340 | https://en.wikipedia.org/wiki/90%20%28number%29 | 90 (ninety) is the natural number following 89 and preceding 91.
In the English language, the numbers 90 and 19 are often confused, as they sound very similar. When carefully enunciated, they differ in which syllable is stressed: 19 /naɪnˈtiːn/ vs 90 /ˈnaɪnti/. However, in dates such as 1999, and when contrasting numbers in the teens and when counting, such as 17, 18, 19, the stress shifts to the first syllable: 19 /ˈnaɪntiːn/.
In mathematics
Ninety is a pronic number as it is the product of 9 and 10, and along with 12 and 56, one of only a few pronic numbers whose digits in decimal are also successive. 90 is divisible by the sum of its base-ten digits, which makes it the thirty-second Harshad number.
Properties of the number
90 is the only number to have an aliquot sum of 144 = 12^2.
Only three numbers have a set of divisors that generate a sum equal to 90; they are 40, 58, and 89.
90 is also the twentieth abundant and highly abundant number (with 20 the first primitive abundant number and 70 the second).
The number of divisors of 90 is 12. As no smaller number has more than 12 divisors, 90 is a largely composite number.
90 is the tenth and largest number to hold an Euler totient value of 24; no number has a totient that is 90, which makes it the eleventh nontotient (with 50 the fifth).
The twelfth triangular number 78 is the only number to have an aliquot sum equal to 90, aside from the square of the twenty-fourth prime, 89^2 (which is centered octagonal). 90 is equal to the fifth sum of non-triangular numbers, respectively between the fifth and sixth triangular numbers, 15 and 21 (equivalently 16 + 17 + ... + 20). It is also twice 45, which is the ninth triangular number, and the second-smallest sum of twelve non-zero integers, from two through thirteen.
90 can be expressed as the sum of distinct non-zero squares in six ways, more than any smaller number:

90 = 9^2 + 3^2
90 = 8^2 + 5^2 + 1^2
90 = 8^2 + 4^2 + 3^2 + 1^2
90 = 7^2 + 6^2 + 2^2 + 1^2
90 = 7^2 + 5^2 + 4^2
90 = 6^2 + 5^2 + 4^2 + 3^2 + 2^2
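A brief exhaustive search over the squares below 90 confirms that there are exactly six such representations:

from itertools import combinations

squares = [k * k for k in range(1, 10)]   # 1, 4, 9, ..., 81
ways = [c for r in range(1, 10)
        for c in combinations(squares, r) if sum(c) == 90]
for way in ways:
    print(" + ".join(str(s) for s in way), "= 90")
print(len(ways), "ways")                  # 6 ways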
The square of eleven, 11^2 = 121, is the ninetieth indexed composite number, where the sum of the integers from 2 through 11 is 65, which in turn represents the composite index of 90. In the fractional part of the decimal expansion of the reciprocal of 11 in base 10 (1/11 = 0.0909...), "90" repeats periodically (when leading zeroes are moved to the end).
The eighteenth Stirling number of the second kind is 90: with n = 6 and k = 3, it counts the number of ways of dividing a set of six objects into three non-empty subsets. 90 is also the sixteenth Perrin number, from a sum of 39 and 51, whose difference is 12.
Prime sextuplets
The members of the first prime sextuplet (7, 11, 13, 17, 19, 23) generate a sum equal to 90, and the difference between respective members of the first and second prime sextuplets is also 90, where the second prime sextuplet is (97, 101, 103, 107, 109, 113). The last member of the second prime sextuplet, 113, is the 30th prime number. Since prime sextuplets are formed from prime members of lower order prime k-tuples, 90 is also a record maximal gap between various smaller pairs of prime k-tuples (which include quintuplets, quadruplets, and triplets).
Unitary perfect number
90 is the third unitary perfect number (after 6 and 60), since it is the sum of its unitary divisors excluding itself, and because it is equal to the sum of a subset of its divisors, it is also the twenty-first semiperfect number.
Right angle
An angle measuring 90 degrees is called a right angle. In normal space, the interior angles of a rectangle measure 90 degrees each, while in a right triangle, the angle opposing the hypotenuse measures 90 degrees, with the other two angles adding up to 90, for a total of 180 degrees.
Icosahedral symmetry
Solids
The rhombic enneacontahedron is a zonohedron with a total of 90 rhombic faces: 60 broad rhombi akin to those in the rhombic dodecahedron with diagonals in √2 ratio, and another 30 slim rhombi with diagonals in golden ratio. The obtuse angle of the broad rhombic faces is also the dihedral angle of a regular icosahedron, with the obtuse angle in the faces of golden rhombi equal to the dihedral angle of a regular octahedron and the tetrahedral vertex-center-vertex angle, which is also the angle between Plateau borders: 109.471°. It is the dual polyhedron to the rectified truncated icosahedron, a near-miss Johnson solid. On the other hand, the final stellation of the icosahedron has 90 edges. It also has 92 vertices like the rhombic enneacontahedron, when interpreted as a simple polyhedron. Meanwhile, the truncated dodecahedron and truncated icosahedron both have 90 edges. A further four uniform star polyhedra (U37, U55, U58, U66) and four uniform compound polyhedra (UC32, UC34, UC36, UC55) contain 90 edges or vertices.
Witting polytope
The self-dual Witting polytope contains ninety van Oss polytopes such that sections by the common plane of two non-orthogonal hyperplanes of symmetry passing through the center yield complex 3{4}3 Möbius–Kantor polygons. The root vectors of simple Lie group E8 are represented by the vertex arrangement of the polytope, which shares 240 vertices with the Witting polytope in four-dimensional complex space. By Coxeter, the incidence matrix configuration of the Witting polytope can be represented in two equivalent matrix forms.
This Witting configuration, when reflected under the finite space PG(3,4), splits into 85 = 45 + 40 points and planes, alongside 27 + 90 + 240 = 357 lines.
Whereas the rhombic enneacontahedron is the zonohedrification of the regular dodecahedron, a honeycomb of Witting polytopes holds vertices isomorphic to the E8 lattice, whose symmetries can be traced back to the regular icosahedron via the icosian ring.
Cutting an annulus
The maximal number of pieces that can be obtained by cutting an annulus with twelve cuts is 90 (and equivalently, the number of 12-dimensional polyominoes that are prime).
References
Integers | 90 (number) | [
"Mathematics"
] | 1,404 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
343,400 | https://en.wikipedia.org/wiki/Simple%20theorems%20in%20the%20algebra%20of%20sets | The simple theorems in the algebra of sets are some of the elementary properties of the algebra of union (infix operator: ∪), intersection (infix operator: ∩), and set complement (postfix ') of sets.
These properties assume the existence of at least two sets: a given universal set, denoted U, and the empty set, denoted {}. The algebra of sets describes the properties of all possible subsets of U, called the power set of U and denoted P(U). P(U) is assumed closed under union, intersection, and set complement. The algebra of sets is an interpretation or model of Boolean algebra, with union, intersection, set complement, U, and {} interpreting Boolean sum, product, complement, 1, and 0, respectively.
The properties below are stated without proof, but can be derived from a small number of properties taken as axioms. A "*" follows the algebra of sets interpretation of Huntington's (1904) classic postulate set for Boolean algebra. These properties can be visualized with Venn diagrams. They also follow from the fact that P(U) is a Boolean lattice. The properties followed by "L" interpret the lattice axioms.
Elementary discrete mathematics courses sometimes leave students with the impression that the subject matter of set theory is no more than these properties. For more about elementary set theory, see set, set theory, algebra of sets, and naive set theory. For an introduction to set theory at a higher level, see also axiomatic set theory, cardinal number, ordinal number, Cantor–Bernstein–Schroeder theorem, Cantor's diagonal argument, Cantor's first uncountability proof, Cantor's theorem, well-ordering theorem, axiom of choice, and Zorn's lemma.
The properties below include a defined binary operation, relative complement, denoted by the infix operator "\". The "relative complement of A in B," denoted B \ A, is defined as (A ∪ B')' and as A' ∩ B.
PROPOSITION 1. For any U and any subset A of U:
{}' = U;
U' = {};
A \ {} = A;
{} \ A = {};
A ∩ {} = {};
A ∪ {} = A; *
A ∩ U = A; *
A ∪ U = U;
A' ∪ A = U; *
A' ∩ A = {}; *
A \ A = {};
U \ A = A';
A \ U = {};
A'' = A;
A ∩ A = A;
A ∪ A = A.
PROPOSITION 2. For any sets A, B, and C:
A ∩ B = B ∩ A; * L
A ∪ B = B ∪ A; * L
A ∪ (A ∩ B) = A; L
A ∩ (A ∪ B) = A; L
(A ∪ B) \ A = B \ A;
A ∩ B = {} if and only if B \ A = B;
(A' ∪ B)' ∪ (A' ∪ B')' = A;
(A ∩ B) ∩ C = A ∩ (B ∩ C); L
(A ∪ B) ∪ C = A ∪ (B ∪ C); L
C \ (A ∩ B) = (C \ A) ∪ (C \ B);
C \ (A ∪ B) = (C \ A) ∩ (C \ B);
C \ (B \ A) = (C \ B) ∪(C ∩ A);
(B \ A) ∩ C = (B ∩ C) \ A = B ∩ (C \ A);
(B \ A) ∪ C = (B ∪ C) \ (A \ C).
The distributive laws:
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C); *
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C). *
PROPOSITION 3. Some properties of ⊆:
A ⊆ B if and only if A ∩ B = A;
A ⊆ B if and only if A ∪ B = B;
A ⊆ B if and only if B' ⊆ A';
A ⊆ B if and only if A \ B = {};
A ∩ B ⊆ A ⊆ A ∪ B.
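Because P(U) is finite for a finite universe, all of the identities above can be checked mechanically. A minimal sketch over every subset of a four-element universe, writing comp(A) for the postfix complement A':

from itertools import chain, combinations

U = frozenset({1, 2, 3, 4})
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

def comp(A):
    return U - A   # the postfix ' (complement relative to U)

for A in subsets:
    assert comp(comp(A)) == A                      # A'' = A
    assert A | comp(A) == U and A & comp(A) == frozenset()
    for B in subsets:
        assert (A | B) - A == B - A                # (A ∪ B) \ A = B \ A
        assert comp(comp(A) | B) | comp(comp(A) | comp(B)) == A
        assert (A <= B) == (A & B == A)            # A ⊆ B iff A ∩ B = A
print("all checked identities hold on P(U)")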
See also
References
Edward Huntington (1904) "Sets of independent postulates for the algebra of logic," Transactions of the American Mathematical Society 5: 288-309.
Whitesitt, J. E. (1961) Boolean Algebra and Its Applications. Addison-Wesley. Dover reprint, 1999. | Simple theorems in the algebra of sets | [
"Mathematics"
] | 951 | [
"Basic concepts in set theory",
"Operations on sets"
] |
343,415 | https://en.wikipedia.org/wiki/Silicon%20Forest | Silicon Forest is a Washington County cluster of high-tech companies located in the Portland metropolitan area in the U.S. state of Oregon. The term most frequently refers to the industrial corridor between Beaverton and Hillsboro in northwest Oregon. The high-technology industry accounted for 19 percent of Oregon's economy in 2005, and the Silicon Forest name has been applied to the industry throughout the state in such places as Corvallis, Bend, and White City. Nevertheless, the name refers primarily to the Portland metropolitan area, where about 1,500 high-tech firms were located as of 2006.
The name is analogous to Silicon Valley. In the greater Portland area, these companies have traditionally specialized in hardware — specifically test-and-measurement equipment (Tektronix), computer chips (Intel and an array of smaller chip manufacturers), electronic displays (InFocus, Planar Systems and Pixelworks) and printers (Hewlett-Packard Co, Xerox and Epson). There is a small clean technology emphasis in the area.
History
Silicon Forest can refer to all the technology companies in Oregon, but initially referred to Washington County on Portland’s west side. First used in a Japanese company’s press release dating to 1981, Lattice Semiconductor trademarked the term in 1984 but does not use the term in its marketing materials. Lattice’s founder is sometimes mentioned as the person who came up with the term.
The high-tech industry in the Portland area dates back to at least the 1940s, with Tektronix and Electro Scientific Industries as pioneers. Tektronix and ESI both started out in Portland proper, but moved to Washington County in 1951 and 1962, respectively, and developed sites designed to attract other high-tech companies. Floating Point Systems, co-founded by three former Tektronix employees in Beaverton in 1970, was the first spin-off company in Silicon Forest and the third (after Tek and ESI) to be traded on the NYSE. These three companies, and later Intel, led to the creation of a number of other spin-offs and startups, some of which were remarkably successful. A 2003 dissertation on these spin-offs led to a poster depicting the genealogy of 894 Silicon Forest companies. High-tech employment in the state reached a peak of almost 73,000 in 2001, but has never recovered from the dot-com bust. Statewide, tech employment totaled 57,000 in the spring of 2012.
Unlike other regions with a "silicon" appellation, semiconductors truly are the heart of Oregon's tech industry.
Intel's headquarters remain in Santa Clara, California, but in the 1990s the company began moving its most advanced technical operations to Oregon. Its Ronler Acres campus eventually became its most advanced anywhere, and Oregon is now Intel's largest operating hub. In late 2012, Intel had close to 17,000 employees in Oregon—more than anywhere else the company operated; by 2022, the number had grown to about 22,000.
Companies and subsidiaries
The following is a sample of past and present notable companies in the Silicon Forest. They may have been founded in the Silicon Forest or have a major subsidiary there. A list of Portland tech startups (technology companies founded in Portland) is provided separately.
Current
Act-On
Adtran (after acquiring a startup named "SmartRG")
Aistock
Airbnb
Amazon Web Services (via acquisition of Elemental Technologies)
Ambric (acquired by Nethra Imaging in April 2009)
Analog Devices
Apple Inc. (software engineering in Vancouver, WA, previously the Claris products group, and a new R&D facility around Hillsboro)
Arris Group (via acquisition of C-COR)
ASML Holding
Atos
Autodesk Inc
Biotronik
Brandlive
Block-CashApp
Cambia Health Solutions (HealthSparq, Hubbub, and SpendWell)
Cascade Microtech
CD Baby
CollegeNET
Consumer Cellular
DAT Solutions
Digimarc
eBay
Electro Scientific Industries
EPSON
Expensify
Extensis
Thermo Fisher Scientific (via acquisition of FEI Company)
FLIR Systems
ForgeRock
GemStone Systems
Genentech
Google
Grass Valley
Hewlett-Packard
IBM (by acquisition of Sequent)
InFocus
Intel
Integra Telecom
IP Fabrics
IXIA
Jaguar Land Rover
Janrain
Kryptiq Corporation
Kyocera
LaCie
Laika
Lam Research (through merging with Novellus Systems)
Lattice Semiconductor
Lightspeed Systems
Linear Technology
Logitech
Maxim Integrated Products
McAfee
Mentor Graphics
Mozilla
Microchip Technology (purchased Fujitsu's old facility)
Microsoft, especially for hardware engineering design center
New Relic (Engineering Headquarters)
Nike, Inc. (Consumer Digital Division)
Nvidia Corporation
NuScale Power
OpenSesame Inc
ON Semiconductor
Oracle Corporation (by acquisition of Sun Microsystems)
Oregon Scientific
Panic Software
PacStar
Phoseon Technology
Pivotal Labs
Pixelworks
Planar Systems
Pop Art, Inc.
Puppet
Qorvo
RadiSys Corporation
Razorfish
Rentrak
RFPIO
Rivos
Rohde & Schwarz
Rockwell Collins
Sage Software (by the acquisition of Timberline)
Salesforce.com
Sensory, Inc.
SEH America
Sharp Corporation
Silicon Labs
Siltronic
Simple
Shimadzu Corp.
Skyworks (by the acquisition of Avnera)
Smarsh
Sellgo
SurveyMonkey
Squarespace
Synopsys
Tektronix
Tripwire
Urban Airship
Vacasa
Vape-Jet
VeriWave
Vernier Software & Technology
Vevo
Wacom (North American Headquarters are based in Portland)
WaferTech (TSMC subsidiary)
Webtrends
WebMD
Welch Allyn
Workday, Inc
Xerox
ZoomInfo
Former
BiiN (defunct)
Central Point Software (defunct)
ClearEdge Power
Etec Systems, Inc. (acquired by Applied Materials)
Floating Point Systems (defunct)
Fujitsu (factory closed, sold to Microchip Technology)
Jive Software (acquired & closed)
MathStar (defunct)
Merix Corporation (acquired by Viasystems)
Microsoft's Surface Hub R&D (closed down)
nCUBE. Beaverton HQ was established in 1983. Acquired by C-COR in 2005, which was in turn acquired by ARRIS in 2007. CommScope acquired ARRIS in 2019, and closed the Beaverton office in the aftermath of the COVID-19 pandemic.
NEC (factory closed)
Open Source Development Labs (defunct)
Oregon Graduate Institute (merged with OHSU in 2001; Washington County campus closed in 2014)
Sequent Computer Systems (purchased by IBM in 1999). Sequent, founded by a team that included three Intel VPs and 15 other employees, also mostly from Intel, made a major contribution to multiprocessing and was largely responsible for the demise of large minicomputers, which could be replaced by much smaller and cheaper microprocessor-based multiprocessor systems. It went public in 1987 and was also beginning to encroach on the market for large mainframe transaction-processing systems when IBM bought it out.
SolarWorld
SunPower (in former SolarWorld facility)
See also
List of places with "Silicon" names
References
External links
The Oregonian's Silicon Forest Blog
Portland Tech portal at AboutUs.org
Silicon Florist: Coverage of the web-based startup scene
Silicon Forest
High-technology business districts in the United States
Economy of Portland, Oregon
Washington County, Oregon
Information technology places
1981 in Oregon | Silicon Forest | [
"Technology"
] | 1,499 | [
"Information technology",
"Information technology places"
] |
343,426 | https://en.wikipedia.org/wiki/List%20of%20craters%20on%20Europa | This is a list of craters on Europa. The surface of Jupiter's moon Europa is very young, geologically speaking, and as a result there are very few craters. Furthermore, as Europa's surface is potentially made of weak water ice over a liquid ocean, most surviving craters have slumped so that their structure is very low in relief. Most of the craters that are large enough to have names are named after prominent figures in Celtic myths and folklore.
List
External links
USGS: Europa nomenclature
USGS: Europa Nomenclature: Craters
Europa
"Astronomy"
] | 111 | [
"Astronomy-related lists",
"Lists of impact craters"
] |
343,436 | https://en.wikipedia.org/wiki/Slip-critical%20joint | Slip-critical joint, from structural engineering, is a type of bolted structural steel connection which relies on friction between the two connected elements rather than bolt shear or bolt bearing to join two structural elements.
Shear (and tension) loads can be transferred between two structural elements by either a bearing-type connection or a slip-critical connection.
In a slip-critical connection, loads are transferred from one element to another through friction forces developed between the faying surfaces of the connection. These friction forces are generated by the extreme tightness of the structural bolts holding the connection together. These bolts, usually tension control bolts or compressible washer tension indicating type bolts, are tensioned to a minimum required amount to generate large enough friction forces between the faying surfaces such that the shear (or tension) load is transferred by the structural members and not by the bolts (in shear) and the connection plates (in bearing). The "turn of the nut" method is also widely used to achieve that state of friction.
If slip-critical connections fail (by slipping), they revert to bearing-type connections, with structural forces now transferred through bolt shear and connection plate bearing. Thus a slippage failure of a slip-critical connection is not necessarily a catastrophic failure. However, slippage of a slip-critical connection in columns may lead to column instability, and slippage of a slip-critical joint in a roof truss could result in unintended ponding effects.
The faying surfaces of slip-critical connections must be properly prepared in order to maximize the friction forces between the joined surfaces. Usually this requires cleaning, descaling, roughening, and/or blasting of the faying surfaces. Painting the faying surfaces with a Class B primer also satisfies the faying-surface requirements of most designs that specify a slip-critical joint.
Structural connectors | Slip-critical joint | [
"Engineering"
] | 373 | [
"Structural engineering",
"Structural connectors"
] |
343,457 | https://en.wikipedia.org/wiki/Maternal%20effect | A maternal effect is a situation where the phenotype of an organism is determined not only by the environment it experiences and its genotype, but also by the environment and genotype of its mother. In genetics, maternal effects occur when an organism shows the phenotype expected from the genotype of the mother, irrespective of its own genotype, often due to the mother supplying messenger RNA or proteins to the egg. Maternal effects can also be caused by the maternal environment independent of genotype, sometimes controlling the size, sex, or behaviour of the offspring. These adaptive maternal effects lead to phenotypes of offspring that increase their fitness. Further, it introduces the concept of phenotypic plasticity, an important evolutionary concept. It has been proposed that maternal effects are important for the evolution of adaptive responses to environmental heterogeneity.
In genetics
In genetics, a maternal effect occurs when the phenotype of an organism is determined by the genotype of its mother. For example, if a mutation is maternal effect recessive, then a female homozygous for the mutation may appear phenotypically normal; however, her offspring will show the mutant phenotype, even if they are heterozygous for the mutation.
Maternal effects often occur because the mother supplies a particular mRNA or protein to the oocyte, hence the maternal genome determines whether the molecule is functional. Maternal supply of mRNAs to the early embryo is important, as in many organisms the embryo is initially transcriptionally inactive. Because of the inheritance pattern of maternal effect mutations, special genetic screens are required to identify them. These typically involve examining the phenotype of the organisms one generation later than in a conventional (zygotic) screen, as their mothers will be potentially homozygous for maternal effect mutations that arise.
In Drosophila early embryogenesis
A Drosophila melanogaster oocyte develops in an egg chamber in close association with a set of cells called nurse cells. Both the oocyte and the nurse cells are descended from a single germline stem cell; however, cytokinesis is incomplete in these cell divisions, and the cytoplasm of the nurse cells and the oocyte is connected by structures known as ring canals. Only the oocyte undergoes meiosis and contributes DNA to the next generation.
Many maternal effect Drosophila mutants have been found that affect the early steps in embryogenesis such as axis determination, including bicoid, dorsal, gurken and oskar. For example, embryos from homozygous bicoid mothers fail to produce head and thorax structures.
Once the gene that is disrupted in the bicoid mutant was identified, it was shown that bicoid mRNA is transcribed in the nurse cells and then relocalized to the oocyte. Other maternal effect mutants either affect products that are similarly produced in the nurse cells and act in the oocyte, or parts of the transportation machinery that are required for this relocalization. Since these genes are expressed in the (maternal) nurse cells and not in the oocyte or fertilised embryo, the maternal genotype determines whether they can function.
Maternal effect genes are expressed during oogenesis by the mother (i.e., prior to fertilization) and establish the anterior-posterior and dorsal-ventral polarity of the egg. The anterior end of the egg becomes the head and the posterior end becomes the tail; the dorsal side is on top and the ventral side underneath. The products of maternal effect genes, called maternal mRNAs, are produced by nurse cells and follicle cells and deposited in the egg cells (oocytes). At the start of the development process, mRNA gradients are formed in oocytes along the anterior-posterior and dorsal-ventral axes.
About thirty maternal genes involved in pattern formation have been identified. In particular, the products of four maternal effect genes are critical to the formation of the anterior-posterior axis. The products of two maternal effect genes, bicoid and hunchback, regulate the formation of anterior structures, while another pair, nanos and caudal, specify proteins that regulate the formation of the posterior part of the embryo.
The transcripts of all four genes (bicoid, hunchback, caudal, and nanos) are synthesized by nurse and follicle cells and transported into the oocytes.
In birds
In birds, mothers may pass down hormones in their eggs that affect an offspring's growth and behavior. Experiments in domestic canaries have shown that eggs that contain more yolk androgens develop into chicks that display more social dominance. Similar variation in yolk androgen levels has been seen in bird species like the American coot, though the mechanism of effect has yet to be established.
In humans
In 2015, obesity theorist Edward Archer published "The Childhood Obesity Epidemic as a Result of Nongenetic Evolution: The Maternal Resources Hypothesis" and a series of works on maternal effects in human obesity and health. In this body of work, Archer argued that accumulative maternal effects via the non-genetic evolution of matrilineal nutrient metabolism are responsible for the increased global prevalence of obesity and diabetes mellitus type 2. Archer posited that decrements in maternal metabolic control altered fetal pancreatic beta cell, adipocyte (fat cell), and myocyte (muscle cell) development, thereby inducing an enduring competitive advantage of adipocytes in the acquisition and sequestering of nutrient energy.
In plants
The environmental cues such as light, temperature, soil moisture and nutrients that the mother plant encounters can cause variations in seed quality, even within the same genotype. Thus, the mother plant greatly influences seed traits such as seed size, germination rate, and viability.
Environmental maternal effects
The environment or condition of the mother can also in some situations influence the phenotype of her offspring, independent of the offspring's genotype.
Paternal effect genes
In contrast, a paternal effect is when a phenotype results from the genotype of the father, rather than the genotype of the individual. The genes responsible for these effects are components of sperm that are involved in fertilization and early development. An example of a paternal-effect gene is the ms(3)sneaky in Drosophila. Males with a mutant allele of this gene produce sperm that are able to fertilize an egg, but the sneaky-inseminated eggs do not develop normally. However, females with this mutation produce eggs that undergo normal development when fertilized.
Adaptive maternal effects
Adaptive maternal effects induce phenotypic changes in offspring that result in an increase in fitness. These changes arise from mothers sensing environmental cues that work to reduce offspring fitness, and responding to them in a way that "prepares" offspring for their future environments. A key characteristic of "adaptive maternal effects" phenotypes is their plasticity. Phenotypic plasticity gives organisms the ability to respond to different environments by altering their phenotype. With these "altered" phenotypes increasing fitness, it becomes important to look at the likelihood that adaptive maternal effects will evolve and become a significant phenotypic adaptation to an environment.
Defining adaptive maternal effects
When traits are influenced by either the maternal environment or the maternal phenotype, it is said to be influenced by maternal effects. Maternal effects work to alter the phenotypes of the offspring through pathways other than DNA. Adaptive maternal effects are when these maternal influences lead to a phenotypic change that increases the fitness of the offspring. In general, adaptive maternal effects are a mechanism to cope with factors that work to reduce offspring fitness; they are also environment specific.
It can sometimes be difficult to differentiate between maternal and adaptive maternal effects. Consider the following: Gypsy moths reared on foliage of black oak, rather than chestnut oak, had offspring that developed faster. This is a maternal, not an adaptive maternal effect. In order to be an adaptive maternal effect, the mother's environment would have to have led to a change in the eating habits or behavior of the offspring. The key difference between the two therefore, is that adaptive maternal effects are environment specific. The phenotypes that arise are in response to the mother sensing an environment that would reduce the fitness of her offspring. By accounting for this environment she is then able to alter the phenotypes to actually increase the offspring's fitness. Maternal effects are not in response to an environmental cue, and further they have the potential to increase offspring fitness, but they may not.
When looking at the likelihood of these “altered” phenotypes evolving there are many factors and cues involved. Adaptive maternal effects evolve only when offspring can face many potential environments; when a mother can “predict” the environment into which her offspring will be born; and when a mother can influence her offspring's phenotype, thereby increasing their fitness. The summation of all of these factors can then lead to these “altered” traits becoming favorable for evolution.
The phenotypic changes that arise from adaptive maternal effects are a result of the mother sensing that a certain aspect of the environment may decrease the survival of her offspring. When sensing a cue, the mother "relays" information to the developing offspring and therefore induces adaptive maternal effects. This tends to cause the offspring to have a higher fitness because they are "prepared" for the environment they are likely to experience. These cues can include responses to predators, habitat, high population density, and food availability.
The increase in size of North American red squirrels is a great example of an adaptive maternal effect producing a phenotype that resulted in an increased fitness. The adaptive maternal effect was induced by the mothers sensing the high population density and correlating it with low food availability per individual. Their offspring were on average larger than other squirrels of the same species; they also grew faster. Ultimately, the squirrels born during this period of high population density showed an increased survival rate (and therefore fitness) during their first winter.
Phenotypic plasticity
When analyzing the types of changes that can occur to a phenotype, we can see changes that are behavioral, morphological, or physiological. A characteristic of the phenotype that arises through adaptive maternal effects, is the plasticity of this phenotype. Phenotypic plasticity allows organisms to adjust their phenotype to various environments, thereby enhancing their fitness to changing environmental conditions. Ultimately it is a key attribute to an organism's, and a population's, ability to adapt to short term environmental change.
Phenotypic plasticity can be seen in many organisms; one species that exemplifies this concept is the seed beetle Stator limbatus. This seed beetle reproduces on different host plants, two of the more common ones being Cercidium floridum and Acacia greggii. When C. floridum is the host plant, there is selection for a large egg size; when A. greggii is the host plant, there is selection for a smaller egg size. In an experiment it was seen that when a beetle that usually laid eggs on A. greggii was put onto C. floridum, the survivorship of the laid eggs was lower compared to those eggs produced by a beetle that was conditioned on, and remained on, the C. floridum host plant. Ultimately these experiments showed the plasticity of egg size production in the beetle, as well as the influence of the maternal environment on the survivorship of the offspring.
Further examples of adaptive maternal effects
In many insects:
Cues such as rapidly cooling temperatures or decreasing daylight can result in offspring that enter into a dormant state. They therefore will better survive the cooling temperatures and preserve energy.
When parents are forced to lay eggs on environments with low nutrients, offspring will be provided with more resources, such as higher nutrients, through an increased egg size.
Cues such as poor habitat or crowding can lead to offspring with wings. The wings allow the offspring to move away from poor environments to ones that will provide better resources.
Maternal diet and environment influence epigenetic effects
Related to adaptive maternal effects are epigenetic effects. Epigenetics is the study of long lasting changes in gene expression that are produced by modifications to chromatin instead of changes in DNA sequence, as is seen in DNA mutation. This "change" refers to DNA methylation, histone acetylation, or the interaction of non-coding RNAs with DNA. DNA methylation is the addition of methyl groups to the DNA. When DNA is methylated in mammals, the transcription of the gene at that location is turned down or turned off entirely. The induction of DNA methylation is highly influenced by the maternal environment. Some maternal environments can lead to a higher methylation of an offspring's DNA, while others lower methylation.[22] The fact that methylation can be influenced by the maternal environment, makes it similar to adaptive maternal effects. Further similarities are seen by the fact that methylation can often increase the fitness of the offspring. Additionally, epigenetics can refer to histone modifications or non-coding RNAs that create a sort of cellular memory. Cellular memory refers to a cell's ability to pass nongenetic information to its daughter cell during replication. For example, after differentiation, a liver cell performs different functions than a brain cell; cellular memory allows these cells to "remember" what functions they are supposed to perform after replication. Some of these epigenetic changes can be passed down to future generations, while others are reversible within a particular individual's lifetime. This can explain why individuals with identical DNA can differ in their susceptibility to certain chronic diseases.
Currently, researchers are examining the correlations between maternal diet during pregnancy and its effect on the offspring's susceptibility for chronic diseases later in life. The fetal programming hypothesis highlights the idea that environmental stimuli during critical periods of fetal development can have lifelong effects on body structure and health and in a sense they prepare offspring for the environment they will be born into. Many of these variations are thought to be due to epigenetic mechanisms brought on by maternal environment such as stress, diet, gestational diabetes, and exposure to tobacco and alcohol. These factors are thought to be contributing factors to obesity and cardiovascular disease, neural tube defects, cancer, diabetes, etc. Studies to determine these epigenetic mechanisms are usually performed through laboratory studies of rodents and epidemiological studies of humans.
Importance for the general population
Knowledge of maternal diet induced epigenetic changes is important not only for scientists, but for the general public. Perhaps the most obvious place of importance for maternal dietary effects is within the medical field. In the United States and worldwide, many non-communicable diseases, such as cancer, obesity, and heart disease, have reached epidemic proportions. The medical field is working on methods to detect these diseases, some of which have been discovered to be heavily driven by epigenetic alterations due to maternal dietary effects. Once the genomic markers for these diseases are identified, research can begin to be implemented to identify the early onset of these diseases and possibly reverse the epigenetic effects of maternal diet in later life stages. The reversal of epigenetic effects will utilize the pharmaceutical field in an attempt to create drugs which target the specific genes and genomic alterations. The creation of drugs to cure these non-communicable diseases could be used to treat individuals who already have these illnesses. General knowledge of the mechanisms behind maternal dietary epigenetic effects is also beneficial in terms of awareness. The general public can be aware of the risks of certain dietary behaviors during pregnancy in an attempt to curb the negative consequences which may arise in offspring later in their lives. Epigenetic knowledge can lead to an overall healthier lifestyle for the billions of people worldwide.
The effect of maternal diet in species other than humans is also relevant. Many of the long term effects of global climate change are unknown. Knowledge of epigenetic mechanisms can help scientists better predict the impacts of changing community structures on species which are ecologically, economically, and/or culturally important around the world. Since many ecosystems will see changes in species structures, the nutrient availability will also be altered, ultimately affecting the available food choices for reproducing females. Maternal dietary effects may also be used to improve agricultural and aquaculture practices. Breeders may be able to utilize scientific data to create more sustainable practices, saving money for themselves, as well as the consumers.
Maternal diet and environment epigenetically influences susceptibility for adult diseases
Hyperglycemia during gestation correlated with obesity and heart disease in adulthood
Hyperglycemia during pregnancy is thought to cause epigenetic changes in the leptin gene of newborns, leading to a potential increased risk for obesity and heart disease. Leptin is sometimes known as the "satiety hormone" because it is released by fat cells to inhibit hunger. By studying both animal models and human observational studies, it has been suggested that a leptin surge in the perinatal period plays a critical role in contributing to long-term risk of obesity. The perinatal period begins at 22 weeks gestation and ends a week after birth.[34] DNA methylation near the leptin locus has been examined to determine if there was a correlation between maternal glycemia and neonatal leptin levels. Results showed that glycemia was inversely associated with the methylation states of the LEP gene, which controls the production of the leptin hormone. Therefore, higher glycemic levels in mothers corresponded to lower methylation states of the LEP gene in their children. With this lower methylation state, the LEP gene is transcribed more often, thereby inducing higher blood leptin levels. These higher blood leptin levels during the perinatal period were linked to obesity in adulthood, perhaps because a higher "normal" level of leptin was set during gestation.
High fat diets during gestation correlated with metabolic syndrome
High fat diets in utero are believed to cause metabolic syndrome. Metabolic syndrome is a set of symptoms including obesity and insulin resistance that appear to be related. This syndrome is often associated with type II diabetes as well as hypertension and atherosclerosis. Using mouse models, researchers have shown that high fat diets in utero cause modifications to the adiponectin and leptin genes that alter gene expression; these changes contribute to metabolic syndrome. The adiponectin genes regulate glucose metabolism as well as fatty acid breakdown; however, the exact mechanisms are not entirely understood. In both human and mouse models, adiponectin has been shown to add insulin-sensitizing and anti-inflammatory properties to different types of tissue, specifically muscle and liver tissue. Adiponectin has also been shown to increase the rate of fatty acid transport and oxidation in mice, which causes an increase in fatty acid metabolism. With a high fat diet during gestation, there was an increase in methylation in the promoter of the adiponectin gene accompanied by a decrease in acetylation. These changes likely inhibit the transcription of the adiponectin genes because increases in methylation and decreases in acetylation usually repress transcription. Additionally, there was an increase in methylation of the leptin promoter, which turns down the production of the leptin gene. Therefore, there was less adiponectin to help cells take up glucose and break down fat, as well as less leptin to cause a feeling of satiety. The decrease in these hormones caused fat mass gain, glucose intolerance, hypertriglyceridemia, abnormal adiponectin and leptin levels, and hypertension throughout the animal's lifetime. However, the effect was abolished after three subsequent generations with normal diets. This study highlights the fact that these epigenetic marks can be altered in as little as one generation and can even be completely eliminated over time. It also highlighted the connection between high fat diets and the adiponectin and leptin genes in mice. In contrast, few studies have been done in humans to show the specific effects of high fat diets in utero. However, it has been shown that decreased adiponectin levels are associated with obesity, insulin resistance, type II diabetes, and coronary artery disease in humans. It is postulated that a mechanism similar to the one described in mice may also contribute to metabolic syndrome in humans.
High fat diets during gestation correlated with chronic inflammation
In addition, high fat diets cause chronic low-grade inflammation in the placenta, adipose, liver, brain, and vascular system. Inflammation is an important aspect of the body's natural defense system after injury, trauma, or disease. During an inflammatory response, a series of physiological reactions, such as increased blood flow, increased cellular metabolism, and vasodilation, occur in order to help treat the wounded or infected area. However, chronic low-grade inflammation has been linked to long-term consequences such as cardiovascular disease, renal failure, aging, diabetes, etc. This chronic low-grade inflammation is commonly seen in obese individuals on high fat diets. In a mouse model, excessive cytokines were detected in mice fed a high fat diet. Cytokines aid in cell signaling during immune responses, specifically sending cells towards sites of inflammation, infection, or trauma. The mRNA of proinflammatory cytokines was induced in the placenta of mothers on high fat diets. The high fat diets also caused changes in microbiotic composition, which led to hyperinflammatory colonic responses in offspring. This hyperinflammatory response can lead to inflammatory bowel diseases such as Crohn's disease or ulcerative colitis.[35] As previously mentioned, high fat diets in utero contribute to obesity; however, some proinflammatory factors, like IL-6 and MCP-1, are also linked to body fat deposition. It has been suggested that histone acetylation is closely associated with inflammation, because the addition of histone deacetylase inhibitors has been shown to reduce the expression of proinflammatory mediators in glial cells. This reduction in inflammation resulted in improved neural cell function and survival. This inflammation is also often associated with obesity, cardiovascular disease, fatty liver, and brain damage, as well as preeclampsia and preterm birth. Although it has been shown that high fat diets induce inflammation, which contributes to all of these chronic diseases, it is unclear how this inflammation acts as a mediator between diet and chronic disease.
Undernutrition during gestation correlated with cardiovascular disease
A study done after the Dutch Hunger Winter of 1944-1945 showed that undernutrition during the early stages of pregnancy is associated with hypomethylation of the insulin-like growth factor II (IGF2) gene even after six decades. These individuals had significantly lower methylation rates compared to their same-sex siblings who had not been conceived during the famine. A comparison was done with children conceived prior to the famine, so that their mothers were nutrient deprived during the later stages of gestation; these children had normal methylation patterns. IGF2 stands for insulin-like growth factor II; this gene is a key contributor to human growth and development. The IGF2 gene is also maternally imprinted, meaning that the mother's gene is silenced. The mother's gene is typically methylated at the differentially methylated region (DMR); however, when hypomethylated, the gene is bi-allelically expressed. Thus, individuals with lower methylation states likely lost some of the imprinting effect. Similar results have been demonstrated in the Nr3c1 and Ppara genes of the offspring of rats fed an isocaloric protein-deficient diet before starting pregnancy. This further implies that the undernutrition was the cause of the epigenetic changes. Surprisingly, there was not a correlation between methylation states and birth weight. This suggests that birth weight may not be an adequate way to determine nutritional status during gestation. This study stressed that epigenetic effects vary depending on the timing of exposure and that early stages of mammalian development are crucial periods for establishing epigenetic marks. Those exposed earlier in gestation had decreased methylation, while those exposed at the end of gestation had relatively normal methylation levels. The offspring and descendants of mothers with hypomethylation were more likely to develop cardiovascular disease. Epigenetic alterations that occur during embryogenesis and early fetal development have greater physiologic and metabolic effects because they are transmitted over more mitotic divisions. In other words, the epigenetic changes that occur earlier are more likely to persist in more cells.
Nutrient restriction during gestation correlated with diabetes mellitus type 2
In another study, researchers discovered that perinatal nutrient restriction resulting in intrauterine growth restriction (IUGR) contributes to diabetes mellitus type 2 (DM2). IUGR refers to the poor growth of the baby in utero. In the pancreas, IUGR caused a reduction in the expression of the promoter of the gene encoding a critical transcription factor for beta cell function and development. Pancreatic beta cells are responsible for making insulin; decreased beta cell activity is associated with DM2 in adulthood. In skeletal muscle, IUGR caused a decrease in expression of the Glut-4 gene. The Glut-4 gene controls the production of the Glut-4 transporter; this transporter is specifically sensitive to insulin. Thus, when insulin levels rise, more glut-4 transporters are brought to the cell membrane to increase the uptake of glucose into the cell. This change is caused by histone modifications in the cells of skeletal muscle that decrease the effectiveness of the glucose transport system into the muscle. Because the main glucose transporters are not operating at optimal capacity, these individuals are more likely to develop insulin resistance with energy rich diets later in life, contributing to DM2.
High protein diet during gestation correlated with higher blood pressure and adiposity
Further studies have examined the epigenetic changes resulting from a high protein/low carbohydrate diet during pregnancy. This diet caused epigenetic changes that were associated with higher blood pressure, higher cortisol levels, and a heightened hypothalamic-pituitary-adrenal (HPA) axis response to stress. Increased methylation in the 11β-hydroxysteroid dehydrogenase type 2 (HSD2), glucocorticoid receptor (GR), and H19 ICR was positively correlated with adiposity and blood pressure in adulthood. Glucocorticoids play a vital role in tissue development and maturation as well as having effects on metabolism. Glucocorticoids' access to GR is regulated by HSD1 and HSD2. H19 is an imprinted gene for a long non-coding RNA (lncRNA), which has limiting effects on body weight and cell proliferation. Therefore, higher methylation rates in the H19 ICR repress transcription and prevent the lncRNA from regulating body weight. Mothers who reported higher meat/fish and vegetable intake and lower bread/potato intake in late pregnancy had a higher average methylation in GR and HSD2. However, one common challenge of these types of studies is that many epigenetic modifications show tissue- and cell-type-specific DNA methylation patterns. Thus, epigenetic modification patterns of accessible tissues, like peripheral blood, may not represent the epigenetic patterns of the tissue involved in a particular disease.
Neonatal estrogen exposure correlated with prostate cancer
Strong evidence in rats supports the conclusion that neonatal estrogen exposure plays a role in the development of prostate cancer. Using a human fetal prostate xenograft model, researchers studied the effects of early exposure to estrogen with and without secondary estrogen and testosterone treatment. A xenograft model is a graft of tissue transplanted between organisms of different species. In this case, human tissue was transplanted into rats; therefore, there was no need to extrapolate from rodents to humans. Histopathological lesions, proliferation, and serum hormone levels were measured at various time-points after xenografting. At day 200, the xenograft that had been exposed to two treatments of estrogen showed the most severe changes. Additionally, researchers looked at key genes involved in prostatic glandular and stromal growth, cell-cycle progression, apoptosis, hormone receptors, and tumor suppressors using a custom PCR array. Analysis of DNA methylation showed methylation differences in CpG sites of the stromal compartment after estrogen treatment. These variations in methylation are likely a contributing cause to the changes in the cellular events in the KEGG prostate cancer pathway that inhibit apoptosis and increase cell cycle progression that contribute to the development of cancer.
Supplementation may reverse epigenetic changes
In utero or neonatal exposure to bisphenol A (BPA), a chemical used in manufacturing polycarbonate plastic, is correlated with higher body weight, breast cancer, prostate cancer, and an altered reproductive function. In a mice model, the mice fed on a BPA diet were more likely to have a yellow coat corresponding to their lower methylation state in the promoter regions of the retrotransposon upstream of the Agouti gene. The Agouti gene is responsible for determining whether an animal's coat will be banded (agouti) or solid (non-agouti). However, supplementation with methyl donors like folic acid or phytoestrogen abolished the hypomethylating effect. This demonstrates that the epigenetic changes can be reversed through diet and supplementation.
Maternal diet effects and ecology
Maternal dietary effects are not just seen in humans, but throughout many taxa in the animal kingdom. These maternal dietary effects can result in ecological changes on a larger scale throughout populations and from generation to generation. The plasticity involved in these epigenetic changes due to maternal diet represents the environment into which the offspring will be born. Many times, epigenetic effects on offspring from the maternal diet during development will genetically prepare the offspring to be better adapted for the environment in which they will first encounter. The epigenetic effects of maternal diet can be seen in many species, utilizing different ecological cues and epigenetic mechanisms to provide an adaptive advantage to future generations.
Within the field of ecology, there are many examples of maternal dietary effects. Unfortunately, the epigenetic mechanisms underlying these phenotypic changes are rarely investigated. In the future, it would be beneficial for ecological scientists as well as epigenetic and genomic scientists to work together to fill the holes within the ecology field to produce a complete picture of environmental cues and epigenetic alterations producing phenotypic diversity.
Parental diet affects offspring immunity
A pyralid moth species, Plodia interpunctella, commonly found in food storage areas, exhibits maternal dietary effects, as well as paternal dietary effects, on its offspring. Epigenetic changes in moth offspring affect the production of phenoloxidase, an enzyme involved in melanization and correlated with resistance to certain pathogens in many invertebrate species. In this study, parent moths were housed in food-rich or food-poor environments during their reproductive period. Moths housed in food-poor environments produced offspring with less phenoloxidase, and thus a weaker immune system, than moths that reproduced in food-rich environments. This is believed to be adaptive because the offspring develop while receiving cues of scarce nutritional opportunities. These cues allow the moth to allocate energy differentially, decreasing energy allocated to the immune system and devoting more energy to growth and reproduction to increase fitness and ensure future generations. One explanation for this effect may be imprinting, the expression of only one parental gene over the other, but further research has yet to be done.
Parental-mediated dietary epigenetic effects on immunity has a broader significance on wild organisms. Changes in immunity throughout an entire population may make the population more susceptible to an environmental disturbance, such as the introduction of a pathogen. Therefore, these transgenerational epigenetic effects can influence the population dynamics by decreasing the stability of populations who inhabit environments different from the parental environment that offspring are epigenetically modified for.
Maternal diet affects offspring growth rate
Food availability also influences the epigenetic mechanisms driving growth rate in the mouthbrooding cichlid Simochromis pleurospilus. When nutrient availability is high, reproducing females will produce many small eggs, versus fewer, larger eggs in nutrient-poor environments. Egg size often correlates with larval body size at hatching: smaller larvae hatch from smaller eggs. In the case of the cichlid, small larvae grow at a faster rate than their larger-egg counterparts. This is due to the increased expression of GHR, the growth hormone receptor. Increased transcription of GHR genes increases the receptors available to bind growth hormone (GH), leading to an increased growth rate in smaller fish. Fish of larger size are less likely to be eaten by predators, so it is advantageous to grow quickly in early life stages to ensure survival. The mechanism by which GHR transcription is regulated is unknown, but it may be due to hormones within the yolk produced by the mother, or to the yolk quantity itself. This may lead to DNA methylation or histone modifications which control gene transcription levels.
Ecologically, this is an example of the mother utilizing her environment and determining the best method to maximize offspring survival, without actually making a conscious effort to do so. Ecology is generally driven by the ability of an organism to compete to obtain nutrients and successfully reproduce. If a mother is able to gather a plentiful amount of resources, she will have a higher fecundity and produce offspring who are able to grow quickly to avoid predation. Mothers who are unable to obtain as many nutrients will produce fewer offspring, but the offspring will be larger, in hopes that their large size will help ensure survival to sexual maturation. Unlike the moth example, the maternal effects provided to the cichlid offspring do not prepare the cichlids for the environment they will be born into; this is because mouthbrooding cichlids provide parental care to their offspring, providing a stable environment for the offspring to develop. Offspring who have a greater growth rate can become independent more quickly than slow-growing counterparts, therefore decreasing the amount of energy spent by the parents during the parental care period.
A similar phenomenon occurs in the sea urchin Strongylocentrotus droebachiensis. Urchin mothers in nutrient-rich environments produce a large number of small eggs. Offspring from these small eggs grow at a faster rate than their large-egg counterparts from nutrient-poor mothers. Again, it is beneficial for sea urchin larvae, known as plutei, to grow quickly to decrease the duration of their larval phase and metamorphose into a juvenile to decrease predation risks. Sea urchin larvae have the ability to develop into one of two phenotypes, based on their maternal and larval nutrition. Larvae who grow at a fast rate from high nutrition are able to devote more of their energy towards development into the juvenile phenotype. Larvae who grow at a slower rate with low nutrition devote more energy towards growing spine-like appendages to protect themselves from predators in an attempt to increase survival into the juvenile phase. The determination of these phenotypes is based on both the maternal and the juvenile nutrition. The epigenetic mechanisms behind these phenotypic changes are unknown, but it is believed that there may be a nutritional threshold that triggers epigenetic changes affecting development and, ultimately, the larval phenotype.
See also
Extranuclear inheritance
Maternal effect dominant embryonic arrest
Xenia (plants)
References
Developmental biology
Ecology
Evolutionary biology
Genetics | Maternal effect | [
"Biology"
] | 7,411 | [
"Evolutionary biology",
"Behavior",
"Developmental biology",
"Genetics",
"Reproduction",
"Ecology"
] |
343,492 | https://en.wikipedia.org/wiki/Electrolytic%20capacitor | An electrolytic capacitor is a polarized capacitor whose anode or positive plate is made of a metal that forms an insulating oxide layer through anodization. This oxide layer acts as the dielectric of the capacitor. A solid, liquid, or gel electrolyte covers the surface of this oxide layer, serving as the cathode or negative plate of the capacitor. Because of their very thin dielectric oxide layer and enlarged anode surface, electrolytic capacitors have a much higher capacitance-voltage (CV) product per unit volume than ceramic capacitors or film capacitors, and so can have large capacitance values. There are three families of electrolytic capacitor: aluminium electrolytic capacitors, tantalum electrolytic capacitors, and niobium electrolytic capacitors.
The large capacitance of electrolytic capacitors makes them particularly suitable for passing or bypassing low-frequency signals, and for storing large amounts of energy. They are widely used for decoupling or noise filtering in power supplies and DC link circuits for variable-frequency drives, for coupling signals between amplifier stages, and storing energy as in a flashlamp.
Electrolytic capacitors are polarized components because of their asymmetrical construction and must be operated with a higher potential (i.e., more positive) on the anode than on the cathode at all times. For this reason the polarity is marked on the device housing. Applying a reverse polarity voltage, or exceeding the maximum rated working voltage by as little as 1 or 1.5 volts, can damage the dielectric, causing catastrophic failure of the capacitor itself. Failure of electrolytic capacitors can result in an explosion or fire, potentially causing damage to other components as well as injuries. Bipolar electrolytic capacitors which may be operated with either polarity are also made, using special constructions with two anodes connected in series. A bipolar electrolytic capacitor can also be made by connecting two normal electrolytic capacitors in series, anode to anode or cathode to cathode, along with diodes.
General information
Electrolytic capacitors family tree
As to the basic construction principles of electrolytic capacitors, there are three different types: aluminium, tantalum, and niobium capacitors. Each of these three capacitor families can use non-solid electrolytes, solid manganese dioxide electrolytes, or solid polymer electrolytes, so a wide variety of combinations of anode material and electrolyte is available.
Charge principle
Like other conventional capacitors, electrolytic capacitors store electric energy statically by charge separation in an electric field in the dielectric oxide layer between two electrodes. The non-solid or solid electrolyte in principle is the cathode, which thus forms the second electrode of the capacitor. This and the storage principle distinguish them from electrochemical capacitors or supercapacitors, in which the electrolyte generally is the ionic conductive connection between two electrodes and the storage occurs with static double-layer capacitance and electrochemical pseudocapacitance.
Basic materials and construction
Electrolytic capacitors use a chemical feature of some special metals, previously called "valve metals", which on contact with a particular electrolyte form a very thin insulating oxide layer on their surface by anodic oxidation which can function as a dielectric. There are three different anode metals in use for electrolytic capacitors:
Aluminum electrolytic capacitors use a high-purity etched aluminium foil with aluminium oxide as dielectric
Tantalum electrolytic capacitors use a sintered pellet (“slug”) of high-purity tantalum powder with tantalum pentoxide as dielectric
Niobium electrolytic capacitors use a sintered "slug" of high-purity niobium or niobium oxide powder with niobium pentoxide as dielectric.
To increase their capacitance per unit volume, all anode materials are either etched or sintered and have a rough surface structure with a much higher surface area compared to a smooth surface of the same area or the same volume. By applying a positive voltage to the above-mentioned anode material in an electrolytic bath an oxide barrier layer with a thickness corresponding to the applied voltage will be formed (formation). This oxide layer acts as the dielectric in an electrolytic capacitor. The properties of these oxide layers are given in the following table:
After forming a dielectric oxide on the rough anode structure, a counter electrode has to match the rough insulating oxide surface. This is accomplished by the electrolyte, which acts as the cathode electrode of an electrolytic capacitor. There are many different electrolytes in use. Generally they are distinguished into two types, "non-solid" and "solid" electrolytes. As a liquid medium whose ion conductivity is caused by moving ions, non-solid electrolytes can easily fit the rough structures. Solid electrolytes, which have electron conductivity, can fit the rough structures with the help of special chemical processes like pyrolysis for manganese dioxide or polymerization for conducting polymers.
Comparing the permittivities of the different oxide materials, it is seen that tantalum pentoxide has a permittivity approximately three times higher than aluminium oxide. Tantalum electrolytic capacitors of a given CV value are therefore theoretically smaller than aluminium electrolytic capacitors. In practice, the different safety margins applied to reach reliable components make a direct comparison difficult.
The anodically generated insulating oxide layer is destroyed if the polarity of the applied voltage changes.
Capacitance and volumetric efficiency
Electrolytic capacitors are based on the principle of a "plate capacitor", whose capacitance increases with larger electrode area A and higher dielectric permittivity ε, and decreases with greater dielectric thickness d.
The dielectric thickness of electrolytic capacitors is very small, in the range of nanometers per volt. On the other hand, the voltage strengths of these oxide layers are quite high. With this very thin dielectric oxide layer combined with a sufficiently high dielectric strength the electrolytic capacitors can achieve a high volumetric capacitance. This is one reason for the high capacitance values of electrolytic capacitors compared to conventional capacitors.
All etched or sintered anodes have a much higher surface area compared to a smooth surface of the same area or the same volume. That increases the capacitance value, depending on the rated voltage, by a factor of up to 200 for non-solid aluminium electrolytic capacitors as well as for solid tantalum electrolytic capacitors. The large surface compared to a smooth one is the second reason for the relatively high capacitance values of electrolytic capacitors compared with other capacitor families.
Because the forming voltage defines the oxide layer thickness, the desired voltage rating can be produced very simply. Electrolytic capacitors have high volumetric efficiency, the so-called "CV product", defined as the product of capacitance and voltage divided by volume.
Basic construction of non-solid aluminium electrolytic capacitors
Basic construction of solid tantalum electrolytic capacitors
Types and features of electrolytic capacitors
Comparison of electrolytic capacitor types
Combinations of anode materials for electrolytic capacitors and the electrolytes used have given rise to wide varieties of capacitor types with different properties. An outline of the main characteristics of the different types is shown in the table below.
The non-solid or so-called "wet" aluminium electrolytic capacitors were and still are the cheapest of all conventional capacitors. They not only provide the cheapest solutions for high capacitance or voltage values for decoupling and buffering purposes but are also insensitive to low ohmic charging and discharging as well as to low-energy transients. Non-solid electrolytic capacitors can be found in nearly all areas of electronic devices, with the exception of military applications.
Tantalum electrolytic capacitors with solid electrolyte as surface-mountable chip capacitors are mainly used in electronic devices in which little space is available or a low profile is required. They operate reliably over a wide temperature range without large parameter deviations. In military and space applications only tantalum electrolytic capacitors have the necessary approvals.
Niobium electrolytic capacitors are in direct competition with industrial tantalum electrolytic capacitors because niobium is more readily available. Their properties are comparable.
The electrical properties of aluminium, tantalum and niobium electrolytic capacitors have been greatly improved by the polymer electrolyte.
Comparison of electrical parameters
In order to compare the different characteristics of the different electrolytic capacitor types, capacitors with the same dimensions and of similar capacitance and voltage are compared in the following table. In such a comparison the values for ESR and ripple current load are the most important parameters for the use of electrolytic capacitors in modern electronic equipment. The lower the ESR, the higher the ripple current per volume and better functionality of the capacitor in the circuit. However, better electrical parameters come with higher prices.
Styles of aluminium and tantalum electrolytic capacitors
Aluminium electrolytic capacitors form the bulk of the electrolytic capacitors used in electronics because of the large diversity of sizes and the inexpensive production. Tantalum electrolytic capacitors, usually used in the SMD (surface-mount device) version, have a higher specific capacitance than the aluminium electrolytic capacitors and are used in devices with limited space or flat design such as laptops. They are also used in military technology, mostly in axial style, hermetically sealed. Niobium electrolytic chip capacitors are a new development in the market and are intended as a replacement for tantalum electrolytic chip capacitors.
History
Origin
The phenomenon that in an electrochemical process, aluminium and such metals as tantalum, niobium, manganese, titanium, zinc, cadmium, etc., can form an oxide layer which blocks an electric current from flowing in one direction but allows current to flow in the opposite direction, was first observed in 1857 by the German physicist and chemist Johann Heinrich Buff (1805–1878). It was first put to use in 1875 by the French researcher Eugène Ducretet, who coined the term "valve metal" for such metals.
Charles Pollak (born Karol Pollak), a producer of accumulators, found out that the oxide layer on an aluminium anode remained stable in a neutral or alkaline electrolyte, even when the power was switched off. In 1896, he filed a patent for an "Electric liquid capacitor with aluminium electrodes" (de: Elektrischer Flüssigkeitskondensator mit Aluminiumelektroden) based on his idea of using the oxide layer in a polarized capacitor in combination with a neutral or slightly alkaline electrolyte.
"Wet" aluminium capacitor
The first industrially realized electrolytic capacitors consisted of a metallic box used as the cathode. It was filled with a borax electrolyte dissolved in water, in which a folded aluminium anode plate was inserted. Applying a DC voltage from outside, an oxide layer was formed on the surface of the anode. The advantage of these capacitors was that they were significantly smaller and cheaper than all other capacitors at this time relative to the realized capacitance value. This construction with different styles of anode construction but with a case as cathode and container for the electrolyte was used up to the 1930s and was called a "wet" electrolytic capacitor, in the sense of its having a high water content.
The first more common application of wet aluminium electrolytic capacitors was in large telephone exchanges, to reduce relay hash (noise) on the 48 volt DC power supply. The development of AC-operated domestic radio receivers in the late 1920s created a demand for large-capacitance (for the time) and high-voltage capacitors for the valve amplifier technique, typically at least 4 microfarads and rated at around 500 volts DC. Waxed paper and oiled silk film capacitors were available, but devices with that order of capacitance and voltage rating were bulky and prohibitively expensive.
"Dry" aluminium capacitor
The ancestor of the modern electrolytic capacitor was patented by Samuel Ruben in 1925, who teamed with Philip Mallory, the founder of the battery company that is now known as Duracell International. Ruben's idea adopted the stacked construction of a silver mica capacitor. He introduced a separate second foil to contact the electrolyte adjacent to the anode foil instead of using the electrolyte-filled container as the capacitor's cathode. The stacked second foil got its own terminal in addition to the anode terminal, and the container no longer had an electrical function. This type of electrolytic capacitor, combined with a liquid or gel-like electrolyte of a non-aqueous nature, which is therefore dry in the sense of having a very low water content, became known as the "dry" type of electrolytic capacitor.
With Ruben's invention, together with the invention of wound foils separated with a paper spacer in 1927 by A. Eckel of Hydra-Werke (Germany), the actual development of electrolytic capacitors began.
William Dubilier, whose first patent for electrolytic capacitors was filed in 1928, industrialized the new ideas for electrolytic capacitors and started the first large commercial production in 1931 in the Cornell-Dubilier (CD) factory in Plainfield, New Jersey. At the same time in Berlin, Germany, the "Hydra-Werke", an AEG company, started the production of electrolytic capacitors in large quantities. Another manufacturer, Ralph D. Mershon, had success in servicing the radio-market demand for electrolytic capacitors.
In his 1896 patent Pollak already recognized that the capacitance of the capacitor increases when roughening the surface of the anode foil. Today (2014), electrochemically etched low voltage foils can achieve an up to 200-fold increase in surface area compared to a smooth surface. Advances in the etching process are the reason for the dimension reductions in aluminium electrolytic capacitors over recent decades.
For aluminium electrolytic capacitors the decades from 1970 to 1990 were marked by the development of various new professional series specifically suited to certain industrial applications, for example with very low leakage currents or with long life characteristics, or for higher temperatures up to 125 °C.
Tantalum capacitors
Some of the first tantalum electrolytic capacitors were developed in 1930 by Tansitor Electronic Inc. (USA) for military purposes. The basic construction of a wound cell was adopted and a tantalum anode foil was used together with a tantalum cathode foil, separated by a paper spacer impregnated with a liquid electrolyte, mostly sulfuric acid, and encapsulated in a silver case.
The relevant development of solid electrolyte tantalum capacitors began some years after William Shockley, John Bardeen and Walter Houser Brattain invented the transistor in 1947. It was invented by Bell Laboratories in the early 1950s as a miniaturized, more reliable low-voltage support capacitor to complement their newly invented transistor. The solution found by R. L. Taylor and H. E. Haring at Bell Labs in early 1950 was based on experience with ceramics. They ground tantalum to a powder, which they pressed into a cylindrical form and then sintered at a high temperature between 1500 and 2000 °C under vacuum conditions, to produce a pellet ("slug").
These first sintered tantalum capacitors used a non-solid electrolyte, which does not fit the concept of solid electronics. In 1952 a targeted search at Bell Labs by D. A. McLean and F. S. Power for a solid electrolyte led to the invention of manganese dioxide as a solid electrolyte for a sintered tantalum capacitor.
Although fundamental inventions came from Bell Labs, the inventions for manufacturing commercially viable tantalum electrolytic capacitors came from researchers at the Sprague Electric Company. Preston Robinson, Sprague's Director of Research, is considered to be the actual inventor of tantalum capacitors in 1954. His invention was supported by R. J. Millard, who introduced the "reform" step in 1955, a significant improvement in which the dielectric of the capacitor was repaired after each dip-and-convert cycle of MnO2 deposition, which dramatically reduced the leakage current of the finished capacitors.
Although solid tantalum capacitors offered capacitors with lower ESR and leakage current values than the aluminium electrolytic capacitors, a 1980 price shock for tantalum dramatically reduced the applications of tantalum electrolytic capacitors, especially in the entertainment industry. The industry switched back to using aluminium electrolytic capacitors.
Solid electrolytes
The first solid electrolyte, manganese dioxide, developed in 1952 for tantalum capacitors, had a conductivity 10 times better than all other types of non-solid electrolytes. It also influenced the development of aluminium electrolytic capacitors. In 1964 the first aluminium electrolytic capacitors with a solid electrolyte (the SAL electrolytic capacitor) came on the market, developed by Philips.
With the beginning of digitalization, Intel launched its first microcomputer, the MCS 4, in 1971. In 1972 Hewlett Packard launched one of the first pocket calculators, the HP 35. The requirements for capacitors increased in terms of lowering the equivalent series resistance (ESR) for bypass and decoupling capacitors.
It was not until 1983 that a new step toward ESR reduction was taken by Sanyo with its "OS-CON" aluminium electrolytic capacitors. These capacitors used a solid organic conductor, the charge-transfer salt TTF-TCNQ (tetrathiafulvalene-tetracyanoquinodimethane), which provided an improvement in conductivity by a factor of 10 compared with the manganese dioxide electrolyte.
The next step in ESR reduction was the development of conducting polymers by Alan J. Heeger, Alan MacDiarmid and Hideki Shirakawa in 1975. The conductivity of conductive polymers such as polypyrrole (PPy) or PEDOT is better than that of TCNQ by a factor of 100 to 500, and close to the conductivity of metals.
In 1991 Panasonic released its "SP-Cap", series of polymer aluminium electrolytic capacitors. These aluminium electrolytic capacitors with polymer electrolytes reached very low ESR values directly comparable to ceramic multilayer capacitors (MLCCs). They were still less expensive than tantalum capacitors and with their flat design for laptops and cell phones competed with tantalum chip capacitors as well.
Tantalum electrolytic capacitors with PPy polymer electrolyte cathode followed three years later. In 1993 NEC introduced its SMD polymer tantalum electrolytic capacitors, called "NeoCap". In 1997 Sanyo followed with the "POSCAP" polymer tantalum chips.
A new conductive polymer for tantalum polymer capacitors was presented by Kemet at the "1999 Carts" conference. This capacitor used the newly developed organic conductive polymer PEDT, poly(3,4-ethylenedioxythiophene), also known as PEDOT (trade name Baytron®).
Niobium capacitors
Another price explosion for tantalum in 2000/2001 forced the development of niobium electrolytic capacitors with manganese dioxide electrolyte, which have been available since 2002. Niobium is a sister metal to tantalum and serves as a valve metal, generating an oxide layer during anodic oxidation. Niobium as a raw material is much more abundant in nature than tantalum and is less expensive. In the late 1960s it was the limited availability of the base metal that led to the development and use of niobium electrolytic capacitors in the former Soviet Union, instead of the tantalum capacitors used in the West. The materials and processes used to produce niobium-dielectric capacitors are essentially the same as for existing tantalum-dielectric capacitors. The characteristics of niobium electrolytic capacitors and tantalum electrolytic capacitors are roughly comparable.
Water-based electrolytes
With the goal of reducing ESR for inexpensive non-solid electrolytic capacitors from the mid-1980s in Japan, new water-based electrolytes for aluminium electrolytic capacitors were developed. Water is inexpensive, an effective solvent for electrolytes, and significantly improves the conductivity of the electrolyte. The Japanese manufacturer Rubycon was a leader in the development of new water-based electrolyte systems with enhanced conductivity in the late 1990s. The new series of non-solid electrolytic capacitors with water-based electrolyte was described in the data sheets as having "low ESR", "low impedance", "ultra-low impedance" or "high ripple current".
From 1999 through at least 2010, a stolen recipe for such a water-based electrolyte, in which important stabilizers were absent, led to the widespread problem of "bad caps" (failing electrolytic capacitors), leaking or occasionally bursting in computers, power supplies, and other electronic equipment, which became known as the "capacitor plague". In these electrolytic capacitors the water reacts quite aggressively with aluminium, accompanied by strong heat and gas development in the capacitor, resulting in premature equipment failure, and development of a cottage repair industry.
Electrical characteristics
Series-equivalent circuit
The electrical characteristics of capacitors are harmonized by the international generic specification IEC 60384-1. In this standard, the electrical characteristics of capacitors are described by an idealized series-equivalent circuit with electrical components which model all ohmic losses, capacitive and inductive parameters of an electrolytic capacitor:
C, the capacitance of the capacitor
RESR, the equivalent series resistance which summarizes all ohmic losses of the capacitor, usually abbreviated as "ESR"
LESL, the equivalent series inductance which is the effective self-inductance of the capacitor, usually abbreviated as "ESL".
Rleak, the resistance representing the leakage current of the capacitor
Capacitance, standard values and tolerances
The electrical characteristics of electrolytic capacitors depend on the structure of the anode and the electrolyte used. This influences the capacitance value of electrolytic capacitors, which depends on the measuring frequency and temperature. Electrolytic capacitors with non-solid electrolytes show a wider deviation over frequency and temperature ranges than do capacitors with solid electrolytes.
The basic unit of an electrolytic capacitor's capacitance is the microfarad (μF). The capacitance value specified in the data sheets of the manufacturers is called the rated capacitance CR or nominal capacitance CN and is the value for which the capacitor has been designed.
The standardized measuring condition for electrolytic capacitors is an AC measuring method with 0.5 V at a frequency of 100/120 Hz at a temperature of 20 °C. For tantalum capacitors a DC bias voltage of 1.1 to 1.5 V for types with a rated voltage ≤2.5 V, or 2.1 to 2.5 V for types with a rated voltage of >2.5 V, may be applied during the measurement to avoid reverse voltage.
The capacitance value measured at the frequency of 1 kHz is about 10% less than the 100/120 Hz value. Therefore, the capacitance values of electrolytic capacitors are not directly comparable and differ from those of film capacitors or ceramic capacitors, whose capacitance is measured at 1 kHz or higher.
The capacitance value measured with the AC method at 100/120 Hz is the value closest to the electrical charge stored in the capacitor. The stored charge is measured with a special discharge method and is called the DC capacitance. The DC capacitance is about 10% higher than the 100/120 Hz AC capacitance. The DC capacitance is of interest for discharge applications such as photoflash.
The percentage of allowed deviation of the measured capacitance from the rated value is called the capacitance tolerance. Electrolytic capacitors are available in different tolerance series, whose values are specified in the E series specified in IEC 60063. For abbreviated marking in tight spaces, a letter code for each tolerance is specified in IEC 60062.
rated capacitance, series E3, tolerance ±20%, letter code "M"
rated capacitance, series E6, tolerance ±20%, letter code "M"
rated capacitance, series E12, tolerance ±10%, letter code "K"
The required capacitance tolerance is determined by the particular application. Electrolytic capacitors, which are often used for filtering and bypassing, do not need narrow tolerances because they are mostly not used in accurate-frequency applications such as oscillators.
Rated and category voltage
Referring to the IEC/EN 60384-1 standard, the allowed operating voltage for electrolytic capacitors is called the "rated voltage UR" or "nominal voltage UN". The rated voltage UR is the maximum DC voltage or peak pulse voltage that may be applied continuously at any temperature within the rated temperature range TR.
The voltage proof of electrolytic capacitors decreases with increasing temperature. For some applications it is important to use a higher temperature range. Lowering the voltage applied at a higher temperature maintains safety margins. For some capacitor types therefore the IEC standard specifies a "temperature derated voltage" for a higher temperature, the "category voltage UC". The category voltage is the maximum DC voltage or peak pulse voltage that may be applied continuously to a capacitor at any temperature within the category temperature range TC. The relation between both voltages and temperatures is given in the picture at right.
Applying a higher voltage than specified may destroy electrolytic capacitors.
Applying a lower voltage may have a positive influence on electrolytic capacitors. For aluminium electrolytic capacitors a lower applied voltage can in some cases extend the lifetime. For tantalum electrolytic capacitors lowering the voltage applied increases the reliability and reduces the expected failure rate.
Surge voltage
The surge voltage indicates the maximum peak voltage value that may be applied to electrolytic capacitors during their application for a limited number of cycles.
The surge voltage is standardized in IEC/EN 60384-1. For aluminium electrolytic capacitors with a rated voltage of up to 315 V, the surge voltage is 1.15 times the rated voltage, and for capacitors with a rated voltage exceeding 315 V, the surge voltage is 1.10 times the rated voltage.
For tantalum electrolytic capacitors the surge voltage can be 1.3 times the rated voltage, rounded off to the nearest volt. The surge voltage applied to tantalum capacitors may influence the capacitor's failure rate.
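As a minimal illustration of the surge-voltage rules quoted above, the following sketch applies the 1.15/1.10 factors for aluminium types and the 1.3 factor with rounding for tantalum types; it is a convenience helper, not a normative implementation of the standard.

```python
def surge_voltage(rated_v: float, technology: str = "aluminium") -> float:
    """Approximate surge voltage per the IEC/EN 60384-1 rules described above.

    aluminium: 1.15 x rated voltage up to 315 V, 1.10 x above 315 V
    tantalum : 1.3 x rated voltage, rounded to the nearest volt
    """
    if technology == "aluminium":
        factor = 1.15 if rated_v <= 315 else 1.10
        return factor * rated_v
    if technology == "tantalum":
        return round(1.3 * rated_v)
    raise ValueError("unknown technology: " + technology)

print(surge_voltage(450))             # 495.0 V for a high-voltage aluminium e-cap
print(surge_voltage(16, "tantalum"))  # 21 V (1.3 x 16 V = 20.8 V, rounded)
```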
Transient voltage
Aluminium electrolytic capacitors with non-solid electrolyte are relatively insensitive to high, short-term transient voltages above the surge voltage, provided the frequency and the energy content of the transients are low. This ability depends on the rated voltage and the component size. Low-energy transient voltages lead to a voltage limitation similar to that of a Zener diode. An unambiguous and general specification of tolerable transients or peak voltages is not possible; in every case in which transients arise, the application has to be evaluated very carefully.
Electrolytic capacitors with solid manganese dioxide or polymer electrolytes, whether aluminium or tantalum, cannot withstand transients or peak voltages higher than the surge voltage. Transients may destroy this type of electrolytic capacitor.
Reverse voltage
Standard aluminium, tantalum, and niobium electrolytic capacitors are polarized and generally require the anode electrode voltage to be positive relative to the cathode voltage.
Nevertheless, electrolytic capacitors can withstand for short instants a reverse voltage for a limited number of cycles. Specifically, aluminium electrolytic capacitors with non-solid electrolyte can withstand a reverse voltage of about 1 V to 1.5 V. This reverse voltage should never be used to determine the maximum reverse voltage under which a capacitor can be used permanently.
Solid tantalum capacitors can also withstand reverse voltages for short periods. The most common guidelines for tantalum reverse voltage are:
10 % of rated voltage to a maximum of 1 V at 25 °C,
3 % of rated voltage to a maximum of 0.5 V at 85 °C,
1 % of rated voltage to a maximum of 0.1 V at 125 °C.
These guidelines apply for short excursions and should never be used to determine the maximum reverse voltage under which a capacitor can be used permanently.
In no case, whether for aluminium, tantalum, or niobium electrolytic capacitors, may a reverse voltage be used for a permanent AC application.
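The three temperature points for tantalum parts listed above can be turned into a small helper; handling of intermediate temperatures by falling back to the next-higher bracket is the author's own conservative assumption, not part of the guideline.

```python
def max_reverse_voltage(rated_v: float, temp_c: float) -> float:
    """Short-excursion reverse-voltage guideline for solid tantalum capacitors,
    using the three points quoted above (25 / 85 / 125 degC).
    Temperatures between the listed points fall into the next-higher bracket."""
    if temp_c <= 25:
        return min(0.10 * rated_v, 1.0)
    if temp_c <= 85:
        return min(0.03 * rated_v, 0.5)
    return min(0.01 * rated_v, 0.1)

print(max_reverse_voltage(16, 25))   # 1.0 V  (10 % of 16 V = 1.6 V, capped at 1 V)
print(max_reverse_voltage(16, 85))   # 0.48 V (3 % of 16 V)
```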
To minimize the likelihood of a polarized electrolytic being incorrectly inserted into a circuit, polarity has to be very clearly indicated on the case, see the section on polarity marking below.
Special aluminium electrolytic capacitors designed for bipolar operation are available, usually referred to as "non-polarized" or "bipolar" types. These capacitors have two anode foils with full-thickness oxide layers, connected in reverse polarity. On the alternate halves of the AC cycle, one of the oxide layers acts as a blocking dielectric, preventing a reverse current from damaging the electrolyte of the other one. However, these bipolar electrolytic capacitors are not suitable for mains AC applications as a substitute for power capacitors with a metallized polymer film or paper dielectric.
Impedance
In general, a capacitor is seen as a storage component for electric energy. But this is only one capacitor application. A capacitor can also act as an AC resistor. Aluminium electrolytic capacitors in particular are often used as decoupling capacitors to filter or bypass undesired AC frequencies to ground or for capacitive coupling of audio AC signals. Then the dielectric is used only for blocking DC. For such applications, the impedance (AC resistance) is as important as the capacitance value.
The impedance Z is the vector sum of reactance and resistance; it describes the phase difference and the ratio of amplitudes between sinusoidally varying voltage and sinusoidally varying current at a given frequency. In this sense impedance is a measure of the ability of the capacitor to pass alternating currents and can be used like Ohm's law.
In other words, impedance is a frequency-dependent AC resistance and possesses both magnitude and phase at a particular frequency.
In data sheets of electrolytic capacitors only the impedance magnitude |Z| is specified, and simply written as "Z". Regarding the IEC/EN 60384-1 standard, the impedance values of electrolytic capacitors are measured and specified at 10 kHz or 100 kHz depending on the capacitance and voltage of the capacitor.
Besides measuring, the impedance can be calculated using the idealized components of a capacitor's series-equivalent circuit, including an ideal capacitor C, a resistor ESR, and an inductance ESL. In this case the impedance at the angular frequency ω is given by the geometric (complex) addition of the ESR, the capacitive reactance

XC = −1/(ωC)

and the inductive reactance

XL = ω·LESL.

Then Z is given by

|Z| = √(ESR² + (XC + XL)²).
In the special case of resonance, in which both reactances XC and XL have the same magnitude (|XC| = XL), the impedance is determined only by ESR. With frequencies above the resonance frequency, the impedance increases again because of the ESL of the capacitor; the capacitor then behaves as an inductor.
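The following short sketch evaluates this series-equivalent model numerically; the component values (100 µF, 50 mΩ ESR, 20 nH ESL) are illustrative placeholders, not data-sheet figures.

```python
import math

def impedance(c_farad, esr_ohm, esl_henry, freq_hz):
    """|Z| of the series-equivalent circuit: ideal C in series with ESR and ESL."""
    omega = 2 * math.pi * freq_hz
    x_c = -1.0 / (omega * c_farad)   # capacitive reactance (negative by convention)
    x_l = omega * esl_henry          # inductive reactance
    return math.sqrt(esr_ohm**2 + (x_c + x_l)**2)

# Illustrative values for a 100 uF e-cap with 50 mOhm ESR and 20 nH ESL
C, ESR, ESL = 100e-6, 0.05, 20e-9
for f in (120, 10e3, 100e3, 1e6):
    print(f"{f/1e3:8.1f} kHz  |Z| = {impedance(C, ESR, ESL, f)*1000:9.1f} mOhm")

# Self-resonant frequency, where XC + XL = 0 and |Z| = ESR
f_res = 1 / (2 * math.pi * math.sqrt(ESL * C))
print(f"resonance at about {f_res/1e3:.0f} kHz")
```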
ESR and dissipation factor tan δ
The equivalent series resistance (ESR) summarizes all resistive losses of the capacitor. These are the terminal resistances, the contact resistance of the electrode contact, the line resistance of the electrodes, the electrolyte resistance, and the dielectric losses in the dielectric oxide layer.
For electrolytic capacitors, ESR generally decreases with increasing frequency and temperature.
ESR influences the superimposed AC ripple after smoothing and may influence the circuit functionality. Within the capacitor, ESR accounts for internal heat generation if a ripple current flows across the capacitor. This internal heat reduces the lifetime of non-solid aluminium electrolytic capacitors and affects the reliability of solid tantalum electrolytic capacitors.
For electrolytic capacitors, for historical reasons the dissipation factor tan δ is sometimes specified in the data sheet instead of the ESR. The dissipation factor is the tangent of the phase angle between the net reactance (the capacitive reactance XC minus the inductive reactance XL) and the ESR. If the inductance ESL is small, the dissipation factor can be approximated as

tan δ ≈ ωC·ESR.
The dissipation factor is used for capacitors with very low losses in frequency-determining circuits where the reciprocal value of the dissipation factor is called the quality factor (Q), which represents a resonator's bandwidth.
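As a small numerical illustration of the approximation above (again with placeholder values rather than data-sheet figures), the dissipation factor and the corresponding quality factor can be computed as follows.

```python
import math

def tan_delta(c_farad: float, esr_ohm: float, freq_hz: float) -> float:
    """Dissipation factor tan(delta) ~ omega * C * ESR, with ESL neglected."""
    return 2 * math.pi * freq_hz * c_farad * esr_ohm

df = tan_delta(100e-6, 0.05, 120)        # 100 uF, 50 mOhm ESR, measured at 120 Hz
print(f"tan d = {df:.4f}, Q = {1/df:.0f}")   # roughly 0.0038 and Q ~ 265
```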
Ripple current
"Ripple current" is the RMS value of a superimposed AC current of any frequency and any waveform of the current curve for continuous operation within the specified temperature range. It arises mainly in power supplies (including switched-mode power supplies) after rectifying an AC voltage and flows as charge and discharge current through any decoupling and smoothing capacitors.
Ripple currents generate heat inside the capacitor body. The dissipated power loss PL is determined by the ESR and the square of the effective (RMS) ripple current IR:

PL = IR² · ESR.
This internally generated heat, in addition to the ambient temperature and possibly other external heat sources, raises the capacitor body temperature by a difference ΔT relative to ambient. This heat has to be dissipated as thermal losses Pth over the capacitor's surface A and the thermal resistance β to the ambient.
The internally generated heat has to be distributed to the ambient by thermal radiation, convection, and thermal conduction. The temperature of the capacitor, which is established by the balance between heat produced and heat dissipated, must not exceed the capacitor's maximum specified temperature.
The ripple current is specified as an effective (RMS) value at 100 or 120 Hz or at 10 kHz at the upper category temperature. Non-sinusoidal ripple currents have to be analyzed and separated into their individual sinusoidal frequency components by means of Fourier analysis, and combined by root-sum-of-squares addition of the individual currents.
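A minimal sketch of that combination and of the resulting self-heating is shown below; using a single frequency-independent ESR and a lumped thermal resistance of 25 K/W are simplifying assumptions made only for illustration.

```python
import math

def total_rms(components_a):
    """Combine sinusoidal ripple components (RMS amperes) by root-sum-of-squares."""
    return math.sqrt(sum(i**2 for i in components_a))

def self_heating(i_rms_a, esr_ohm, r_th_k_per_w):
    """Power loss P = I^2 * ESR and the resulting temperature rise,
    using a lumped thermal resistance to ambient (placeholder value)."""
    p_loss = i_rms_a**2 * esr_ohm
    return p_loss, p_loss * r_th_k_per_w

i_total = total_rms([1.2, 0.5, 0.3])   # e.g. 100 Hz, 10 kHz, 100 kHz components (A rms)
p, dT = self_heating(i_total, esr_ohm=0.05, r_th_k_per_w=25)   # assumed 25 K/W
print(f"I = {i_total:.2f} A rms, P = {p:.3f} W, dT ~ {dT:.1f} K")
```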
In non-solid electrolytic capacitors the heat generated by the ripple current causes the evaporation of electrolytes, shortening the lifetime of the capacitors. Exceeding the limit tends to result in explosive failure.
In solid tantalum electrolytic capacitors with manganese dioxide electrolyte the heat generated by the ripple current affects the reliability of the capacitors. Exceeding the limit tends to result in catastrophic failure, failing short-circuit, with visible burning.
The heat generated by the ripple current also affects the lifetime of aluminium and tantalum electrolytic capacitors with solid polymer electrolytes. Exceeding the limit tends to result in catastrophic failure, failing short-circuit.
Current surge, peak or pulse current
Aluminium electrolytic capacitors with non-solid electrolytes can normally be charged up to the rated voltage without any current-surge, peak or pulse limitation. This property is a result of the limited ion mobility in the liquid electrolyte, which slows down the voltage ramp across the dielectric, and of the capacitor's ESR. Only the frequency of peaks integrated over time must not exceed the maximum specified ripple current.
Solid tantalum electrolytic capacitors with manganese dioxide electrolyte or polymer electrolyte are damaged by peak or pulse currents. Solid tantalum capacitors which are exposed to surge, peak or pulse currents, for example in highly inductive circuits, should be used with a voltage derating. If possible, the voltage profile should be a ramp turn-on, as this reduces the peak current experienced by the capacitor.
Leakage current
For electrolytic capacitors, DC leakage current (DCL) is a special characteristic that other conventional capacitors do not have. This current is represented by the resistor Rleak in parallel with the capacitor in the series-equivalent circuit of electrolytic capacitors.
The causes of leakage current differ between electrolytic capacitors with non-solid and with solid electrolytes, or in more common terms between "wet" aluminium capacitors on the one hand and "solid" tantalum capacitors with manganese dioxide electrolyte and capacitors with polymer electrolytes on the other. For non-solid aluminium electrolytic capacitors the leakage current reflects all weakened spots and imperfections in the dielectric caused by unwanted chemical processes taking place during the time without applied voltage (storage time) between operating cycles. These unwanted chemical processes depend on the kind of electrolyte. Water-based electrolytes are more aggressive toward the aluminium oxide layer than are electrolytes based on organic liquids. This is why different electrolytic capacitor series specify different storage times without reforming.
Applying a positive voltage to a "wet" capacitor causes a reforming (self-healing) process which repairs all weakened dielectric layers, and the leakage current remains at a low level.
Although the leakage current of non-solid electrolytic capacitors is higher than current flow across the dielectric in ceramic or film capacitors, self-discharge of modern non-solid electrolytic capacitors with organic electrolytes takes several weeks.
The main causes of DCL for solid tantalum capacitors include electrical breakdown of the dielectric; conductive paths due to impurities or poor anodization; and bypassing of dielectric due to excess manganese dioxide, to moisture paths, or to cathode conductors (carbon, silver). This "normal" leakage current in solid electrolyte capacitors cannot be reduced by "healing", because under normal conditions solid electrolytes cannot provide oxygen for forming processes. This statement should not be confused with the self-healing process during field crystallization, see below, Reliability (Failure rate).
The leakage current specification in data sheets is often given as a multiple of the rated capacitance value CR times the rated voltage UR, together with an addendum figure, measured after a measuring time of two or five minutes, for example:
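The concrete formula did not survive in this copy of the text; a typical data-sheet limit takes roughly the following form, in which the coefficient 0.01 and the 3 µA addendum are illustrative placeholders rather than values from any particular series.

```python
def leakage_limit_ua(c_r_uf: float, u_r_v: float) -> float:
    """Illustrative data-sheet limit: I_leak <= 0.01 * C_R(uF) * U_R(V) + 3 uA.
    The 0.01 factor and 3 uA addendum are placeholders; real series differ."""
    return 0.01 * c_r_uf * u_r_v + 3

print(leakage_limit_ua(1000, 25))   # 253 uA for a hypothetical 1000 uF / 25 V part
```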
The leakage current value depends on the voltage applied, on the temperature of the capacitor, and on the measuring time. Leakage current in solid MnO2 tantalum electrolytic capacitors generally drops much faster than for non-solid electrolytic capacitors, but remains at the level reached.
Dielectric absorption (soakage)
Dielectric absorption occurs when a capacitor that has remained charged for a long time discharges only incompletely when briefly discharged. Although an ideal capacitor would reach zero volts after discharge, real capacitors develop a small voltage from time-delayed dipole discharging, a phenomenon that is also called dielectric relaxation, "soakage" or "battery action".
Dielectric absorption may be a problem in circuits where very small currents are used in the function of an electronic circuit, such as long-time-constant integrators or sample-and-hold circuits. In most electrolytic capacitor applications supporting power supply lines, dielectric absorption is not a problem.
But especially for electrolytic capacitors with high rated voltage, the voltage at the terminals generated by the dielectric absorption can pose a safety risk to personnel or circuits. In order to prevent shocks, most very large capacitors are shipped with shorting wires that need to be removed before the capacitors are used.
Operational characteristics
Reliability (failure rate)
The reliability of a component is a property that indicates how reliably this component performs its function in a time interval. It is subject to a stochastic process and can be described qualitatively and quantitatively; it is not directly measurable. The reliability of electrolytic capacitors is empirically determined by identifying the failure rate in production accompanying endurance tests, see Reliability engineering.
Reliability is normally shown as a bathtub curve and is divided into three areas: early failures or infant mortality failures, constant random failures, and wear-out failures. The failures counted in a failure rate are short-circuit, open-circuit, and degradation failures (exceeding electrical parameters).
The reliability prediction is generally expressed in a failure rate λ, abbreviated FIT (Failures In Time). This is the number of failures that can be expected in one billion (10^9) component-hours of operation (e.g., 1000 components for 1 million hours, or 1 million components for 1000 hours, which is 1 ppm/1000 hours) at fixed working conditions during the period of constant random failures. This failure rate model implicitly assumes the idea of "random failure". Individual components fail at random times but at a predictable rate.
Billions of tested capacitor unit-hours would be needed to establish failure rates in the very low level range which are required today to ensure the production of large quantities of components without failures. This requires about a million units over a long time period, which means a large staff and considerable financing. The tested failure rates are often complemented with figures resulting from feedback from the field from major customers (field failure rate), which mostly results in a lower failure rate than tested.
The reciprocal value of FIT is Mean Time Between Failures (MTBF).
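For orientation, the relationship between FIT, MTBF, and the expected number of failures in a fleet of parts amounts to simple arithmetic; the figures in the example calls below are made up.

```python
def mtbf_hours(fit: float) -> float:
    """MTBF is the reciprocal of the failure rate; FIT counts failures per 1e9 hours."""
    return 1e9 / fit

def expected_failures(fit: float, n_parts: int, hours: float) -> float:
    """Expected number of failures of a fleet during the constant-random-failure period."""
    return fit * n_parts * hours / 1e9

print(mtbf_hours(2))                         # 5e8 hours for a 2 FIT capacitor
print(expected_failures(2, 10_000, 50_000))  # 1.0 expected failure in this fleet
```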
The standard operating conditions for FIT testing are 40 °C and 0.5 UR. For other conditions of applied voltage, current load, temperature, capacitance value, circuit resistance (for tantalum capacitors), mechanical influences and humidity, the FIT figure can be converted with acceleration factors standardized for industrial or military applications. The higher the temperature and applied voltage, the higher the failure rate, for example.
The most often cited source for failure rate conversion is MIL-HDBK-217F, the “bible” of failure rate calculations for electronic components. SQC Online, the online statistical calculator for acceptance sampling and quality control, provides an online tool for short examination to calculate given failure rate values for given application conditions.
Some manufacturers may have their own FIT calculation tables for tantalum capacitors or for aluminium capacitors.
For tantalum capacitors the failure rate is often specified at 85 °C and rated voltage UR as reference conditions and expressed as percent failed components per thousand hours (n %/1000 h). That is, "n" failed components per 10^5 component-hours, or, expressed in FIT, the ten-thousand-fold value per 10^9 hours.
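The unit conversion just described amounts to a single multiplication; a short sketch:

```python
def percent_per_1000h_to_fit(n_percent: float) -> float:
    """n %/1000 h  =  n failures per 1e5 component-hours  =  n * 1e4 FIT."""
    return n_percent * 1e4

print(percent_per_1000h_to_fit(0.01))   # MIL level "C": 0.01 %/1000 h -> 100 FIT
```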
Tantalum capacitors are now very reliable components. Continuous improvement in tantalum powder and capacitor technologies has resulted in a significant reduction in the amount of impurities that formerly caused most field crystallization failures. Commercially available, industrially produced tantalum capacitors have now reached, as standard products, the high MIL standard "C" level, which is 0.01%/1000 h at 85 °C and UR, or 1 failure per 10^7 hours at 85 °C and UR. Converted with the acceleration factors from MIL-HDBK-217F to conditions of 40 °C and 0.5 UR, this failure rate becomes much lower: for a 100 μF/25 V tantalum chip capacitor used with a series resistance of 0.1 Ω, the failure rate is 0.02 FIT.
Aluminium electrolytic capacitors do not use a specification in "% per 1000 h at 85 °C and UR". They use the FIT specification with 40 °C and 0.5 UR as reference conditions. Aluminium electrolytic capacitors are very reliable components. Published figures show for low-voltage types (6.3…160 V) FIT rates in the range of 1 to 20 FIT and for high-voltage types (>160…550 V) FIT rates in the range of 20 to 200 FIT. Field failure rates for aluminium e-caps are in the range of 0.5 to 20 FIT.
The published figures show that both tantalum and aluminium capacitor types are reliable components, comparable with other electronic components and achieving safe operation for decades under normal conditions. But a great difference exists in the case of wear-out failures. Electrolytic capacitors with non-solid electrolyte have a limited period of constant random failures, up to the point when wear-out failures begin. This period of constant random failures corresponds to the lifetime or service life of "wet" aluminium electrolytic capacitors.
Lifetime
The lifetime, service life, load life or useful life of electrolytic capacitors is a special characteristic of non-solid aluminium electrolytic capacitors, whose liquid electrolyte can evaporate over time. Lowering the electrolyte level affects the electrical parameters of the capacitors. The capacitance decreases and the impedance and ESR increase with decreasing amounts of electrolyte. This very slow electrolyte drying-out depends on the temperature, the applied ripple current load, and the applied voltage. The lower these parameters compared to their maximum values, the longer the capacitor's “life”. The “end of life” point is defined by the appearance of wear-out failures or degradation failures when either capacitance, impedance, ESR or leakage current exceed their specified change limits.
The lifetime is a specification of a collection of tested capacitors and delivers an expectation of the behavior of similar types. This lifetime definition corresponds to the time of the constant random failure rate in the bathtub curve.
But even after exceeding the specified limits and the capacitors having reached their "end of life", the electronic circuit is not in immediate danger; only the functionality of the capacitors is reduced. With today's high purity levels in electrolytic capacitor manufacture, short circuits are not expected after the end-of-life point; instead, progressive evaporation simply continues the parameter degradation.
The lifetime of non-solid aluminium electrolytic capacitors is specified in terms of "hours per temperature", such as "2,000 h/105 °C". With this specification the lifetime under operating conditions can be estimated using formulas or graphs given in the data sheets of reputable manufacturers. Manufacturers specify this in different ways: some give special formulas, others provide graphs that also take the influence of the applied voltage into account. The basic principle for calculating the lifetime under operating conditions is the so-called "10-degree rule".
This rule is also known as the Arrhenius rule. It characterizes the temperature dependence of chemical reaction rates: for every 10 °C reduction in temperature, the evaporation rate is roughly halved, and the lifetime of the capacitor therefore doubles. If the lifetime specification of an electrolytic capacitor is, for example, 2000 h/105 °C, the capacitor's lifetime at 45 °C can be estimated as 128,000 hours, roughly 15 years, using the 10-degree rule.
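As an illustration only, the estimate above can be reproduced with a small script; the doubling-per-10 °C model is the simplified rule described in the text, not a manufacturer's lifetime formula.

```python
def lifetime_hours(rated_hours: float, rated_temp_c: float, operating_temp_c: float) -> float:
    """Estimate operational lifetime with the 10-degree (Arrhenius-type) rule:
    the lifetime doubles for every 10 degC below the rated temperature."""
    return rated_hours * 2 ** ((rated_temp_c - operating_temp_c) / 10)

# The worked example from the text: a 2000 h / 105 degC capacitor operated at 45 degC
print(lifetime_hours(2000, 105, 45))           # 128000.0 hours
print(lifetime_hours(2000, 105, 45) / 8760)    # roughly 14.6 years
```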
Solid polymer electrolytic capacitors, whether aluminium, tantalum, or niobium, also have a lifetime specification. The polymer electrolyte exhibits a small deterioration of conductivity caused by thermal degradation of the conductive polymer. The electrical conductivity decreases as a function of time, in agreement with a granular-metal-type structure in which aging is due to the shrinking of the conductive polymer grains. The lifetime of polymer electrolytic capacitors is specified in terms similar to non-solid electrolytic capacitors, but its calculation follows other rules, leading to much longer operational lifetimes.
Tantalum electrolytic capacitors with solid manganese dioxide electrolyte do not have wear-out failures, so they do not have a lifetime specification in the sense of non-solid aluminium electrolytic capacitors. Also, tantalum capacitors with non-solid electrolyte, the "wet tantalums", do not have a lifetime specification because they are hermetically sealed.
Failure modes, self-healing mechanism and application rules
The many different types of electrolytic capacitors exhibit different electrical long-term behavior, intrinsic failure modes, and self-healing mechanisms. Application rules for types with an intrinsic failure mode are specified to ensure capacitors with high reliability and long life.
Performance after storage
All electrolytic capacitors are "aged" during manufacturing by applying the rated voltage at high temperature for a sufficient time to repair all cracks and weaknesses that may have occurred during production. However, a particular problem with non-solid aluminium models may occur after storage or unpowered periods: chemical processes (corrosion) can weaken the oxide layer, which may lead to a higher leakage current. Most modern electrolyte systems are chemically inert and do not exhibit corrosion problems, even after storage times of two years or longer. Non-solid electrolytic capacitors using organic solvents such as GBL as electrolyte do not have problems with high leakage current after prolonged storage; they can be stored for up to 10 years without problems.
Storage times can be tested using accelerated shelf-life testing, which requires storage without applied voltage at the upper category temperature for a certain period, usually 1000 hours. This shelf-life test is a good indicator of the chemical stability of the electrolyte system and of the oxide layer, because all chemical reactions are accelerated by higher temperatures. Nearly all commercial series of non-solid electrolytic capacitors fulfill the 1000-hour shelf-life test. However, many series are specified only for two years of storage. This also ensures solderability of the terminals.
For antique radio equipment or for electrolytic capacitors built in the 1970s or earlier, "preconditioning" may be appropriate. This is performed by applying the rated voltage to the capacitor via a series resistor of approximately 1 kΩ for one hour, allowing the oxide layer to repair itself through self-healing. Capacitors that fail leakage current requirements after preconditioning may have experienced mechanical damage.
Electrolytic capacitors with solid electrolytes do not have preconditioning requirements.
Causes of explosion
Electrolytic capacitors can explode due to several reasons, primarily related to internal pressure buildup and electrolyte issues:
Overvoltage and Reverse Polarity: Applying a voltage higher than the rated value or reversing the polarity can cause excessive current to flow, leading to rapid heating. This heating can decompose the electrolyte, generating gas and increasing internal pressure until the capacitor explodes.
Electrolyte Decomposition: Electrolytes in capacitors can decompose into gasses such as hydrogen when exposed to high temperatures or electrical stresses. The resulting gas increases internal pressure, leading to the rupture of the capacitor casing.
Design Flaws and Manufacturing Defects: Poor manufacturing processes can lead to weak points in the capacitor’s structure. During the capacitor plague, a widespread issue arose from the use of an incomplete electrolyte formula, which lacked necessary inhibitors to prevent gas formation and pressure buildup.
Thermal Runaway: High ripple currents can cause the capacitor to overheat. As the temperature rises, the electrolyte evaporates faster, creating more gas and increasing pressure, which can result in an explosion.
Aging and Deterioration: Over time, the materials within electrolytic capacitors degrade, reducing their ability to handle electrical stress and heat. This aging process can lead to internal failures and explosions.
Additional information
Capacitor symbols
Electrolytic capacitor symbols
Parallel connection
If an individual capacitor within a bank of parallel capacitors develops a short, the entire energy of the capacitor bank discharges through that short. Thus, large capacitors, particularly high voltage types, should be individually protected against sudden discharge.
Series connection
In applications where high withstanding voltages are needed, electrolytic capacitors can be connected in series. Because of individual variation in insulation resistance, and thus the leakage current when voltage is applied, the voltage is not distributed evenly across each series capacitor. This can result in the voltage rating of an individual capacitor being exceeded. A passive or active balancer circuit must be provided in order to equalize the voltage across each individual capacitor.
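The effect of unequal leakage on a series string can be illustrated with a toy calculation; the bus voltage and the two leakage resistances below are invented values, chosen only to show how lopsided the voltage split can become without balancing resistors.

```python
def series_split(v_bus: float, r_leak1_mohm: float, r_leak2_mohm: float):
    """Steady-state DC split across two series e-caps is set by their leakage
    resistances (a resistive divider), not by their capacitances."""
    total = r_leak1_mohm + r_leak2_mohm
    return v_bus * r_leak1_mohm / total, v_bus * r_leak2_mohm / total

# Illustrative leakage resistances of 10 MOhm and 2 MOhm on an 800 V bus
v1, v2 = series_split(800, 10, 2)
print(f"{v1:.0f} V / {v2:.0f} V")   # 667 V / 133 V: the first part is overstressed
```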
Polarity marking
Polarity marking for polymer electrolytic capacitors
Imprinted markings
Electrolytic capacitors, like most other electronic components, are marked, space permitting, with
manufacturer's name or trademark;
manufacturer's type designation;
polarity of the terminations (for polarized capacitors)
rated capacitance;
tolerance on rated capacitance
rated voltage and nature of supply (AC or DC)
climatic category or rated temperature;
year and month (or week) of manufacture;
certification marks of safety standards (for safety EMI/RFI suppression capacitors)
Smaller capacitors use a shorthand notation. The most commonly used format is: XYZ J/K/M "V", where XYZ represents the capacitance (calculated as XY × 10^Z pF), the letters J, K or M indicate the tolerance (±5%, ±10% and ±20% respectively) and "V" represents the working voltage.
Examples:
105K 330V implies a capacitance of 10 × 10^5 pF = 1 μF (K = ±10%) with a rated voltage of 330 V.
476M 100V implies a capacitance of 47 × 10^6 pF = 47 μF (M = ±20%) with a rated voltage of 100 V.
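A small decoder for this shorthand is sketched below; the tolerance letters follow the usual IEC 60062 assignments (J = ±5%, K = ±10%, M = ±20%), and the function name and return format are arbitrary.

```python
import re

def decode_marking(code: str):
    """Decode the shorthand 'XYZ T VVVV' marking described above,
    e.g. '105K 330V' -> (1000000 pF, '+/-10%', 330.0 V)."""
    m = re.fullmatch(r"(\d)(\d)(\d)([JKM])\s+(\d+(?:\.\d+)?)V", code.strip())
    if not m:
        raise ValueError("unrecognised marking: " + code)
    x, y, z, tol_letter, volts = m.groups()
    capacitance_pf = int(x + y) * 10 ** int(z)
    tolerance = {"J": "+/-5%", "K": "+/-10%", "M": "+/-20%"}[tol_letter]
    return capacitance_pf, tolerance, float(volts)

print(decode_marking("105K 330V"))   # (1000000, '+/-10%', 330.0)  i.e. 1 uF
print(decode_marking("476M 100V"))   # (47000000, '+/-20%', 100.0) i.e. 47 uF
```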
Capacitance, tolerance and date of manufacture can be indicated with a short code specified in IEC/EN 60062. Examples of short-marking of the rated capacitance (microfarads): μ47 = 0.47 μF, 4μ7 = 4.7 μF, 47μ = 47 μF.
The date of manufacture is often printed according to international standards.
Version 1: coding with year/week numeral code, "1208" is "2012, week number 8".
Version 2: coding with year code/month code. The year codes are: "R" = 2003, "S" = 2004, "T" = 2005, "U" = 2006, "V" = 2007, "W" = 2008, "X" = 2009, "A" = 2010, "B" = 2011, "C" = 2012, "D" = 2013, "E" = 2014, etc. Month codes are: "1" to "9" = Jan. to Sept., "O" = October, "N" = November, "D" = December. "X5" is then "2009, May".
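Both schemes can be decoded mechanically; in the sketch below, the assumption that version-1 codes refer to years 2000-2099 is the author's own simplification.

```python
YEAR_CODES = {"R": 2003, "S": 2004, "T": 2005, "U": 2006, "V": 2007, "W": 2008,
              "X": 2009, "A": 2010, "B": 2011, "C": 2012, "D": 2013, "E": 2014}
MONTH_CODES = {**{str(i): i for i in range(1, 10)}, "O": 10, "N": 11, "D": 12}

def decode_date(code: str):
    """Decode the two date-code styles described above:
    '1208' -> (2012, 'week 8');  'X5' -> (2009, 'month 5')."""
    if code.isdigit() and len(code) == 4:          # version 1: YYWW, assumed 20YY
        return 2000 + int(code[:2]), f"week {int(code[2:])}"
    if len(code) == 2 and code[0] in YEAR_CODES:   # version 2: year letter + month code
        return YEAR_CODES[code[0]], f"month {MONTH_CODES[code[1]]}"
    raise ValueError("unrecognised date code: " + code)

print(decode_date("1208"))   # (2012, 'week 8')
print(decode_date("X5"))     # (2009, 'month 5')
```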
For very small capacitors no marking is possible. Here only the traceability of the manufacturers can ensure the identification of a type.
Standardization
The standardization for all electrical, electronic components and related technologies follows the rules given by the International Electrotechnical Commission (IEC), a non-profit, non-governmental international standards organization.
The definition of the characteristics and the procedure of the test methods for capacitors for use in electronic equipment are set out in the Generic specification:
IEC/EN 60384-1 - Fixed capacitors for use in electronic equipment
The tests and requirements to be met by aluminium and tantalum electrolytic capacitors for use in electronic equipment for approval as standardized types are set out in the following sectional specifications:
IEC/EN 60384-3—Surface mount fixed tantalum electrolytic capacitors with manganese dioxide solid electrolyte
IEC/EN 60384-4—Aluminium electrolytic capacitors with solid (MnO2) and non-solid electrolyte
IEC/EN 60384-15—Fixed tantalum capacitors with non-solid and solid electrolyte
IEC/EN 60384-18—Fixed aluminium electrolytic surface mount capacitors with solid (MnO2) and non-solid electrolyte
IEC/EN 60384-24—Surface mount fixed tantalum electrolytic capacitors with conductive polymer solid electrolyte
IEC/EN 60384-25—Surface mount fixed aluminium electrolytic capacitors with conductive polymer solid electrolyte
IEC/EN 60384-26—Fixed aluminium electrolytic capacitors with conductive polymer solid electrolyte
Market
In value terms, the market for electrolytic capacitors in 2008 was roughly 30% of the total capacitor market:
Aluminium electrolytic capacitors—US$3.9 billion (22%);
Tantalum electrolytic capacitors—US$2.2 billion (12%);
In number of pieces, these capacitors cover about 10% of the total capacitor market, or about 100 to 120 billion pieces.
Manufacturers and products
Date of the table: March 2015
See also
E-series of preferred numbers
List of capacitor manufacturers
Types of capacitor
References
Further reading
The Electrolytic Capacitor; 1st Ed; Alexander Georgiev; Murray Hill Books; 191 pages; 1945. (archive)
External links
Capacitors | Electrolytic capacitor | [
"Physics"
] | 12,476 | [
"Capacitance",
"Capacitors",
"Physical quantities"
] |
343,528 | https://en.wikipedia.org/wiki/Kola%20Superdeep%20Borehole | The Kola Superdeep Borehole SG-3 () is the deepest human-made hole on Earth (since 1979), which attained maximum true vertical depth of in 1989. It is the result of a scientific drilling effort to penetrate as deeply as possible into the Earth's crust conducted by the Soviet Union in the Pechengsky District of the Kola Peninsula, near the Russian border with Norway.
SG (СГ) is a Russian designation for a set of superdeep boreholes conceived as part of a Soviet scientific research programme of the 1960s, 1970s and 1980s. Aralsor SG-1 (in the Pre-Caspian Basin of west Kazakhstan) and Biyikzhal SG-2 (in Krasnodar Krai), both considerably shallower, preceded Kola SG-3, which was originally intended to reach 15,000 metres. Drilling at Kola SG-3 began in 1970 using the Uralmash-4E, and later the Uralmash-15000 series drilling rig. A total of five boreholes were drilled, two branching from a central shaft and two from one of those branches.
In addition to being the deepest human-made hole on Earth, Kola Superdeep Borehole SG-3 was, for almost three decades, the world's longest borehole in measured depth along its bore, until surpassed in 2008 by a hydrocarbon extraction borehole at the Al Shaheen Oil Field in Qatar.
Drilling
Drilling at Kola SG-3 began on 24 May 1970 using the Uralmash-4E, a serial drilling rig normally used for drilling oil wells, slightly modified to reach greater depths. In 1974, the new purpose-built Uralmash-15000 drilling rig was installed on site, named after the new target depth of 15,000 metres.
On 6 June 1979, Kola SG-3 broke the world depth record then held by the Bertha Rogers hole in Washita County, Oklahoma, United States (about 9,583 metres). In October 1982, Kola SG-3's first hole reached a new record depth.
The second hole was started in January 1983, branching from partway down the first hole. In 1983, the drill passed 12,000 metres in the second hole, and drilling was stopped for about a year for numerous scientific and celebratory visits to the site. This idle period may have contributed to a breakdown after drilling resumed; on 27 September 1984, after drilling to 12,066 metres, a section of the drill string twisted off and was left in the hole. Drilling was restarted in September 1986 from a shallower point in the first hole.
The third hole reached 12,262 metres in 1989. In that year, the hole depth was expected to reach 13,500 metres by the end of 1990 and 15,000 metres by 1993. In June 1990, a breakdown occurred in the third hole.
The drilling of the fourth hole started in January 1991, branching from the third hole, and was stopped in April 1992.
Drilling of the fifth hole, which also branched from the third hole, started in April 1994. It was stopped in August 1994 due to lack of funds, and the well itself was mothballed.
Research
The stated areas of study of the Kola Superdeep Borehole were the deep structure of the Baltic Shield, seismic discontinuities and the thermal regime in the Earth's crust, the physical and chemical composition of the deep crust and the transition from upper to lower crust, lithospheric geophysics, and the creation and development of technologies for deep geophysical study. Drilling penetrated about a third of the way through the Baltic Shield of the continental crust, estimated to be around 35 kilometres deep, reaching Archean rocks at the bottom. Numerous unexpected geophysical discoveries were made:
During the drilling process, the expected basaltic layers at about 7 kilometres down were never found, nor were basaltic layers found at any depth. There were instead more granites, deeper than predicted. The prediction of a transition at 7 kilometres was based on seismic waves indicating a discontinuity, which could have been caused by a transition between rock types or by a metamorphic transition in the granite itself.
Water pooled below the surface, having percolated up through the granite until it reached a layer of impermeable rock. This water did not naturally vaporize at any depth in the borehole.
The drilling mud that flowed out of the hole was described as "boiling" with an unexpected level of hydrogen gas.
Microscopic plankton fossils were found about 6 kilometres below the surface.
In 1992, an international geophysical experiment obtained a reflection seismic crustal cross-section through the well. The Kola-92 working group consisted of researchers from the universities of Glasgow and Edinburgh in the United Kingdom, the University of Wyoming in the United States, and the University of Bergen in Norway, as well as several Russian earth science research institutions.
The experiment was documented in a video recorded by Professor David Smythe,
which shows the drilling deck in action during an attempt to recover a tool dropped down the hole.
Status
The drilling ended in 1995 due to a lack of funding. The scientific team was transferred to the federal state unitary subsidiary enterprise "Kola Superdeep," downsized, and given the new task of thoroughly studying the exposed section. In 2007, the scientific team was dissolved and the equipment was transferred to a private company and partially liquidated.
In 2008, the company was liquidated due to unprofitability, and the site was abandoned. It is still visited by sightseers, who report that the structure over the borehole has been partially destroyed or removed.
Similar projects
The United States had embarked on a similar project in 1957, dubbed Project Mohole, which was intended to penetrate the shallow crust under the Pacific Ocean off Mexico. After initial drilling, the project was abandoned in 1966 when funding was cut off. This program inspired the Ocean Drilling Program, Integrated Ocean Drilling Program, and the present International Ocean Discovery Program.
The KTB superdeep borehole (German Continental Deep Drilling Programme, 1987–1995) at Windischeschenbach in northern Bavaria was drilled to a depth of 9,101 metres, reaching temperatures of more than 260 °C. Its ambitious measuring program used high-temperature logging tools that were upgraded specifically for KTB.
In 2023, China embarked on a super-deep borehole in the Tarim Basin in the Xinjiang region for scientific, oil and gas exploration. In March 2024, drilling of the borehole, which is known as Shendi Take 1, reached a depth of 10,000 metres.
Records
The 12,262-metre-deep Kola Superdeep Borehole has been the world's deepest borehole since 1979. It was also the longest borehole in the world from 1979 to 2008. Its record length was surpassed in May 2008 by the curved extended-reach drilling bore of well BD-04A in the Al Shaheen Oil Field in Qatar, which attained a total length of 12,289 metres but a far shallower vertical depth.
See also
, based on true events at Kola Borehole
, deep oceanic drilling ship, which achieved a subsea drilling record in 2012
, covers the lowest point on land
Vertical seismic profile — relevant seismic measurements
- observed since 1989
References
Further reading
External links
Official Kola Superdeep Borehole website
The World's Deepest Hole – Alaska Science Forum – July 1985
The Deepest Hole 20 June 2006
Kola Superdeep – Scientific research results and experiences by PhD A. Osadchikh 1984
Photo report on a trip to the Kola superdeep well in 2017. Many photos of the current state.
1970 establishments in the Soviet Union
Buildings and structures in Murmansk Oblast
Cancelled projects
Deepest boreholes
Earth's crust
Science and technology in Russia
Science and technology in the Soviet Union
Structure of the Earth | Kola Superdeep Borehole | [
"Mathematics"
] | 1,559 | [
"Mathematical objects",
"Functions and mappings",
"Mathematical relations",
"Vertical distributions"
] |
343,552 | https://en.wikipedia.org/wiki/Phthalates | Phthalates ( ), or phthalate esters, are esters of phthalic acid. They are mainly used as plasticizers, i.e., substances added to plastics to increase their flexibility, transparency, durability, and longevity. They are used primarily to soften polyvinyl chloride (PVC). While phthalates are commonly used as plasticizers, not all plasticizers are phthalates. The two terms are specific, unique, and not used interchangeably.
Lower-molecular-weight phthalates are increasingly being replaced in many products in the United States, Canada, and the European Union over health concerns. They are being replaced by higher-molecular-weight phthalates as well as by non-phthalic plasticizers.
Phthalates are commonly ingested in small quantities via the diet. There are numerous forms of phthalates not regulated by governments. One of the most commonly known phthalates is bis(2-ethylhexyl) phthalate (DEHP). In many countries, DEHP is regulated as a toxin, and is banned from use in broad categories of consumer goods, such as cosmetics, children's toys, medical devices, and food packaging.
Production
Phthalate esters are produced industrially by the reaction of phthalic anhydride with excess alcohol. Often the phthalic anhydride is molten. The monoesterification occurs readily, but the second step is slow:
The conversion is conducted at high temperatures to drive off the water. Typical catalysts are based on tin or titanium alkoxides or carboxylates.
The properties of the phthalate can be varied by changing the alcohol. Around 30 are, or have been, commercially important. Phthalates' share of the global plasticisers market has been decreasing since around 2000; however, total production has been increasing, with around 5.5 million tonnes made in 2015, up from around 2.7 million tonnes in the 1980s. The explanation for this is the increasing size of the plasticiser market, largely driven by increases in PVC production, which nearly doubled between 2000 and 2020. The People's Republic of China is the largest consumer, accounting for around 45% of all use. Europe and the United States together account for around 25% of use, with the remainder widely spread around the world.
Uses
PVC plasticisers
Between 90 and 95% of all phthalates are used as plasticisers for the production of flexible PVC. They were the first commercially important compounds for this role, a historic advantage that has led to them becoming firmly embedded in flexible PVC technology. Among the common plastics, PVC is unique in its acceptance of large amounts of plasticizer with gradual changes in physical properties from a rigid solid to a soft gel. Phthalates derived from alcohols with 7-13 carbon atoms occupy a privileged position as general purpose plasticizers, suitable for almost all flexible PVC applications. Phthalates larger than this have limited compatibility in PVC, with di(isotridecyl) phthalate representing the practical upper limit. Conversely, plasticizers derived from alcohols with 4-6 carbon atoms are too volatile to be used on their own, but have been used alongside other compounds as secondary plasticizers, where they improve low-temperature flexibility. Compounds derived from alcohols with 1-3 carbon atoms are not used as plasticizers in PVC at all, due to excessive fuming at processing temperatures (typically 180-210 °C).
Historically DINP, DEHP, BBP, DBP, and DIHP have been the most important phthalates, however many of these are now facing regulatory pressure and gradual phase-outs. Almost all phthalates derived from alcohols with between 3 and 8 carbons are classed as toxic by ECHA. This includes Bis(2-ethylhexyl) phthalate (DEHP or DOP), which has long been the most widely used phthalate, with commercial production dating back to the 1930s. In the EU, the use of DEHP is restricted under REACH and it can only be used in specific cases if an authorisation has been granted; similar restrictions exist in many other jurisdictions. Despite this, the phase-out of DEHP is slow and it was still the most frequently used plasticizer in 2018, with an estimated global production of 3.24 million tonnes. DINP and DIDP are used as a substitutes for DEHP in many applications, as they are not classified as hazardous. Non-phthalate plasticizers are also being increasingly used.
Almost 90% of all plasticizers are used in PVC, giving this material improved flexibility and durability. The majority is used in films and cable sheathing. Flexible PVC can consist of over 85% plasticizer by mass, however unplasticized PVC (UPVC) should not contain any.
Non-PVC plasticisers
Phthalates see use as plasticisers in various other polymers, with applications centred around coatings such as lacquers, varnishes, and paints. The addition of phthalates imparts some flexibility to these materials, reducing their tendency to chip.
Phthalates derived from alcohols with 1-4 carbon atoms are used as plasticisers for cellulose-type plastics, such as cellulose acetate, nitrocellulose and cellulose acetate butyrate, with commonly encountered applications including nail polish. Most phthalates are also compatible with alkyds and acrylic resins, which are used in both oil- and emulsion-based paints.
Other plasticised polymer systems include polyvinyl butyral (particularly the forms used to make laminated glass), PVA and its co-polymers like PVCA. They are also compatible in nylon, polystyrene, polyurethanes, and certain rubbers; but their use in these is very limited.
Phthalates can plasticise ethyl cellulose, polyvinyl acetate phthalate (PVAP) and cellulose acetate phthalate (CAP), all of which are used to make enteric coatings for tablet and capsule medications. These coatings protect drugs from the acidity of the stomach, but allow their release and absorption in the intestines.
Solvent and phlegmatizer
Phthalate esters are widely used as solvents for highly reactive organic peroxides. Thousands of tonnes are consumed annually for this purpose. The great advantage offered by these esters is that they are phlegmatizers, i.e. they minimize the explosive tendencies of a family of chemical compounds that otherwise are potentially dangerous to handle. Phthalates have also been used for producing plastic explosives such as Semtex.
Other uses
Relatively minor amounts of some phthalates find use in personal-care items such as eye shadow, moisturizer, nail polish, liquid soap, and hair spray. Low-molecular-weight phthalates like dimethyl phthalate and diethyl phthalate are used as fixatives for perfumes. Dimethyl phthalate has also been used as an insect repellent and is especially useful against the ixodid ticks responsible for Lyme disease, and against mosquito species such as Anopheles stephensi, Culex pipiens, and Aedes aegypti.
Diallyl phthalate is used to prepare vinyl ester resins with good electrical insulation properties. These resins are used in the manufacture of electronic components.
History
The development of cellulose nitrate plastic in 1846 led to castor oil being patented in 1856 as the first plasticizer. In 1870, camphor became the more favored plasticizer for cellulose nitrate. Phthalates were first introduced in the 1920s and quickly replaced the volatile and odorous camphor. In 1931, the commercial availability of polyvinyl chloride (PVC) and the development of di(2-ethylhexyl) phthalate (DEHP) began the boom of the PVC plasticizer industry.
Properties
The term "phthalate ester" usually refers to dialkyl esters of phthalic acid (also called 1,2-benzenedicarboxylic acid, not to be confused with the structurally isomeric terephthalic or isophthalic acids); the name "phthalate" derives from phthalic acid, which itself is derived from the word "naphthalene". When added to plastics, phthalates allow the polyvinyl polymers to slide against one another. The phthalates have a clear syrupy liquid consistency and show low water solubility, high oil solubility, and low volatility. The polar carboxyl group contributes little to the physical properties of the phthalates, except when R and R' are very small (such as ethyl or methyl groups). Phthalates are colorless, odorless liquids produced by the reaction of phthalic anhydride with alcohols.
The mechanism by which phthalates and related compounds plasticize polar polymers has been a subject of intense study since the 1960s. The mechanism is one of polar interactions between the polar centres of the phthalate molecule (the C=O functionality) and the positively charged areas of the vinyl chain, typically residing on the carbon atom of the carbon-chlorine bond. For this to be established, the polymer must be heated in the presence of the plasticizer, first above the Tg of the polymer and then into a melt state. This enables an intimate mix of polymer and plasticizer to be formed, and for these interactions to occur. When cooled, these interactions remain and the network of PVC chains cannot reform (as is present in unplasticized PVC, or PVC-U). The alkyl chains of the phthalate then screen the PVC chains from each other as well. They are blended within the plastic article as a result of the manufacturing process.
Because they are not chemically bonded to the host plastics, phthalates are released from the plastic article by relatively gentle means. For example, they can be extracted with organic solvents and, to some extent, by handling.
Alternatives
Being inexpensive, nontoxic (in an acute sense), colorless, noncorrosive, biodegradable, and with easily tuned physical properties, phthalate esters are nearly ideal plasticizers. Among the numerous alternative plasticizers are dioctyl terephthalate (DEHT) (a terephthalate isomeric with DEHP) and 1,2-cyclohexane dicarboxylic acid diisononyl ester (DINCH) (a hydrogenated version of DINP). Both DEHT and DINCH have been used in high volumes for a variety of products used in contact with humans as alternative plasticizers for DEHP and DINP. Some of these products include medical devices, toys, and food packaging. DEHT and DINCH are more hydrophobic than other phthalate alternatives such as bis(2-ethylhexyl) adipate (DEHA) and diisodecyl adipate (DIDA). Since alternative plasticizers such as DEHT and DINCH are more likely to bind to organic matter and airborne particles indoors, exposure occurs primarily through food consumption and contact with dust.
Many bio-based plasticizers based on vegetable oil have been developed.
Occurrence and exposure
Human exposure
Due to the ubiquity of plasticized plastics, people are often exposed to phthalates. For example, most Americans tested by the Centers for Disease Control and Prevention have metabolites of multiple phthalates in their urine. Exposure to phthalates is more likely in women and people of color. In one study, differences were found between Mexican-Americans, blacks, and whites in the overall risk of disturbance of glucose homeostasis: the fasting blood glucose (FBG) increase associated with exposure was 5.82 mg/dL in Mexican-Americans, 3.63 mg/dL in blacks, and 1.79 mg/dL in whites, providing evidence of an increased risk for minorities. Overall, the study concludes that phthalates may alter glucose homeostasis and insulin sensitivity. Higher levels of some phthalate metabolites were associated with elevated FBG, fasting insulin, and insulin resistance. Non-Hispanic black women and Hispanic women have higher levels of some phthalate metabolites.
Higher dust concentrations of DEHP were found in homes of children with asthma and allergies, compared with healthy children's homes. The author of the study stated, "The concentration of DEHP was found to be significantly associated with wheezing in the last 12 months as reported by the parents." Phthalates were found in almost every sampled home in Bulgaria. The same study found that DEHP, BBzP, and DnOP were present in significantly higher concentrations in dust samples collected in homes where polishing agents were used. Data on flooring materials were also collected; among homes where no polish was used, concentrations did not differ significantly between those with balatum (PVC or linoleum) flooring and those with wood flooring. Frequent dusting did, however, decrease the concentrations.
In general, children's exposure to phthalates is greater than that of adults. In a 1990s Canadian study that modeled ambient exposures, it was estimated that daily exposure to DEHP was 9 μg/kg bodyweight/day in infants, 19 μg/kg bodyweight/day in toddlers, 14 μg/kg bodyweight/day in children, and 6 μg/kg bodyweight/day in adults. Infants and toddlers are at the greatest risk of exposure, because of their mouthing behavior. Body-care products containing phthalates are a source of exposure for infants. The authors of a 2008 study "observed that reported use of infant lotion, infant powder, and infant shampoo were associated with increased infant urine concentrations of [phthalate metabolites], and this association is strongest in younger infants. These findings suggest that dermal exposures may contribute significantly to phthalate body burden in this population." Although they did not examine health outcomes, they noted that "Young infants are more vulnerable to the potential adverse effects of phthalates given their increased dosage per unit body surface area, metabolic capabilities, and developing endocrine and reproductive systems."
Infants and hospitalized children are particularly susceptible to phthalate exposure. Medical devices and tubing may contain 20–40% Di(2-ethylhexyl) phthalate (DEHP) by weight, which "easily leach out of tubing when heated (as with warm saline / blood)". Several medical devices contain phthalates including, but not limited to, IV tubing, gloves, nasogastric tubes, and respiratory tubing. The Food and Drug Administration did an extensive risk assessment of phthalates in the medical setting and found that neonates may be exposed to five times greater than the allowed daily tolerable intake. This finding led to the conclusion by the FDA that, "[c]hildren undergoing certain medical procedures may represent a population at increased risk for the effects of DEHP".
In 2008, the Danish Environmental Protection Agency (EPA) found a variety of phthalates in erasers and warned of health risks when children regularly suck and chew on them. The European Commission Scientific Committee on Health and Environmental Risks (SCHER), however, considers that, even in the case when children bite off pieces from erasers and swallow them, it is unlikely that this exposure leads to health consequences.
In 2008, the United States National Research Council recommended that the cumulative effects of phthalates and other antiandrogens be investigated. It criticized U.S. EPA guidances, which stipulate that, when examining cumulative effects, the chemicals examined should have similar mechanisms of action or similar structures, as too restrictive. It recommended instead that the effects of chemicals that cause similar adverse outcomes should be examined cumulatively. Thus, the effect of phthalates should be examined together with other antiandrogens, which otherwise may have been excluded because their mechanisms or structure are different.
Food
Phthalates are found in food, especially fast food items. In one study of fast food samples, the phthalate DnBP was detected in 81 percent of the samples, while DEHP was found in 70 percent. Diethylhexyl terephthalate (DEHT), the main alternative to DEHP, was detected in 86 percent. A 2024 study by Consumer Reports found phthalates in all but one of the grocery store products and fast foods they tested.
Diet is believed to be the main source of DEHP and other phthalates in the general population. Fatty foods such as milk, butter, and meats are a major source. Studies show that exposure to phthalates is greater from ingestion of certain foods, rather than exposure via water bottles, as is most often first thought of with plastic chemicals. Low-molecular-weight phthalates such as DEP, DBP, BBzP may be dermally absorbed. Inhalational exposure is also significant with the more volatile phthalates.
One study, conducted between 2003 and 2010 analysing data from 9,000 individuals, found that those who reported that they had eaten at a fast food restaurant had much higher levels of two separate phthalates—DEHP and DiNP—in their urine samples. Even small consumption of fast food caused higher presence of phthalates. "People who reported eating only a little fast food had DEHP levels that were 15.5 percent higher and DiNP levels that were 25 percent higher than those who said they had eaten none. For people who reported eating a sizable amount, the increase was 24 percent and 39 percent, respectively."
Air
Outdoor air concentrations are higher in urban and suburban areas than in rural and remote areas. At these ambient concentrations, phthalates pose no acute toxicity.
Common plasticizers such as DEHP are only weakly volatile. Higher air temperatures result in higher concentrations of phthalates in the air. PVC flooring leads to higher concentrations of BBP and DEHP, which are more prevalent in dust. A 2012 Swedish study of children found that phthalates from PVC flooring were taken up into their bodies, showing that children can ingest phthalates not only from food but also by breathing and through the skin.
Natural occurrence
Various plants and microorganisms produce small amounts of phthalate esters, the so-called endogenous phthalates. Biosynthesis is believed to involve a modified shikimate pathway. The extent of this natural production is not fully known, but it may create a background of phthalate pollution.
Biodegradation
Phthalates do not persist due to rapid biodegradation, photodegradation, and anaerobic degradation.
Research
Phthalates are under research as a class of possible endocrine disruptors, substances that may interfere with normal hormonal responses in varied environmental conditions. The concern has sparked demands to ban or restrict the use of phthalates in baby toys.
A 2024 review indicated that exposure of mothers to environmental phthalates may have adverse pregnancy outcomes, such as a higher miscarriage rate and lower birth weights. Another review showed small reductions in lung function in adolescents and children who had been exposed to phthalates.
A 2017 review indicated ways to avoid exposure to phthalates: (1) eating a balanced diet to avoid ingesting too many endocrine disruptors from a single source, (2) eliminating canned or packaged food in order to limit ingestion of DEHP phthalates leached from plastics, and (3) eliminating use of any personal product such as moisturizer, perfume, or cosmetics that contain phthalates. Exposure to phthalates may increase the risk of asthma.
A 2018 study indicated that exposure to phthalates during developmental stages in childhood may negatively affect adipose tissue function and metabolic homeostasis, possibly increasing the risk of obesity.
Legal status
The governments of Australia, New Zealand, Canada, the US, and the state of California have determined that many phthalates are not harmful to human health or the environment in the amounts typically found, and these are therefore legally unregulated. The focus for regulation in these jurisdictions has been mainly on di(2-ethylhexyl) phthalate (DEHP), which is generally regarded as a carcinogenic toxin requiring regulation.
The European Chemicals Agency (European Union, EU) regards ortho-phthalates, such as DEHP, dibutyl phthalate, diisobutyl phthalate, and benzyl butyl phthalate as potentially harmful to fertility, unborn babies, and the endocrine system. The EU also regulates some phthalates to protect the environment.
Australia and New Zealand
A 2017 survey of foods and packaging in Australia and New Zealand led to recognition of DEHP and diisononyl phthalate as among possible contaminants posing a risk to human health, resulting in several regulations on these phthalates in both countries. Australia has a permanent ban on certain children's products containing DEHP, which is considered poisonous if products containing it are placed in the mouths of children up to three years old.
Canada
In 1994, a Health Canada assessment found that DEHP and another phthalate product, B79P, were harmful to human health. The Canadian federal government responded by banning their use in cosmetics and restricting their use in other applications, such as soft toys and child-care products. In 1999, DEHP was put on the national List of Toxic Substances, under the Canadian Environmental Protection Act, 1999, and in 2021, it was deemed a risk to the environment. It is on the List of Ingredients that are Prohibited for Use in Cosmetic Products.
Twenty of the 28 phthalate substances under national screening programs are considered possible risks to human health or the environment. As of 2021, regulations to protect the environment against DEHP and B79P have not been enacted.
European Union
Some phthalates have been restricted in the European Union for use in children's toys since 1999. DEHP, BBP, and DBP are restricted for all toys; DINP, DIDP, and DNOP are restricted only in toys that can be taken into the mouth. The restriction states that the amount of these phthalates may not be greater than 0.1% mass percent of the plasticized part of the toy.
Generally, the high molecular weight phthalates DINP, DIDP, and DPHP have been registered under REACH and have demonstrated their safety for use in current applications. They are not classified for any health or environmental effects.
The low molecular weight products BBP, DEHP, DIBP, and DBP were added to the Candidate list of Substances for Authorisation under REACH in 2008–9, and added to the Authorisation list, Annex XIV, in 2012. This means that from February 2015 they are not allowed to be produced in the EU unless authorisation has been granted for a specific use, although they may still be imported in consumer products. The creation of an Annex XV dossier, which could ban the import of products containing these chemicals, was being prepared jointly by the ECHA and Danish authorities, and expected to be submitted by April 2016.
Since 2021, the European Waste Framework Directive requires manufacturers, importers and distributors of products containing phthalates on the REACH Candidate List to notify the European Chemicals Agency.
In November 2021, the European Commission added endocrine disrupting properties to DEHP and other phthalates, meaning that companies must apply for REACH authorization for some uses that were previously exempted, including in food packaging, medical devices, and drug packaging.
United States
During August 2008, the United States Congress passed and President George W. Bush signed the Consumer Product Safety Improvement Act (CPSIA), which became public law 110–314. Section 108 of that law specified that as of February 10, 2009, "it shall be unlawful for any person to manufacture for sale, offer for sale, distribute in commerce, or import into the United States any children's toy or child care article that contains concentrations of more than 0.1 percent of" DEHP, DBP, or BBP and "it shall be unlawful for any person to manufacture for sale, offer for sale, distribute in commerce, or import into the United States any children's toy that can be placed in a child's mouth or child care article that contains concentrations of more than 0.1 percent of" DINP, DIDP, and DnOP. Furthermore, the law requires the establishment of a permanent review board to determine the safety of other phthalates. Prior to this legislation, the Consumer Product Safety Commission had determined that voluntary withdrawals of DEHP and diisononyl phthalate (DINP) from teethers, pacifiers, and rattles had eliminated the risk to children, and advised against enacting a phthalate ban.
In 1986, California voters approved an initiative to address concerns about exposure to toxic chemicals. That initiative became the Safe Drinking Water and Toxic Enforcement Act of 1986, also called Proposition 65. In December 2013, DINP was listed as a chemical "known to the State of California to cause cancer". Beginning in December 2014, companies with ten or more employees manufacturing, distributing or selling products containing DINP were required to provide a clear and reasonable warning for those products. The California Office of Environmental Health Hazard Assessment, charged with maintaining the Proposition 65 list and enforcing its provisions, has implemented a "No Significant Risk Level" of 146 μg/day for DINP.
The CDC provided a 2011 public health statement on diethyl phthalate describing regulations and guidelines concerning its possible harmful health effects. Under laws for Superfund sites, the Environmental Protection Agency named diethyl phthalate as a hazardous substance. The Occupational Safety and Health Administration stated that the maximum amount of diethyl phthalate allowed in workroom air during an 8-hour workday, 40-hour workweek, is 5 milligrams per cubic meter.
Identification in plastics
Phthalates are used in some, but not all, PVC formulations, and there are no specific labeling requirements for phthalates. PVC plastics are typically used for various containers and hard packaging, medical tubing and bags, and are labeled "Type 3". However, the presence of phthalates rather than other plasticizers is not marked on PVC items. Only unplasticized PVC (uPVC), which is mainly used as a hard construction material, has no plasticizers. If a more accurate test is needed, chemical analysis, for example by gas chromatography or liquid chromatography, can establish the presence of phthalates.
Polyethylene terephthalate (PET, PETE, Terylene, Dacron) is the main substance used to package bottled water and many sodas. Products containing PETE are labeled "Type 1" (with a "1" in the recycle triangle). Although the word "phthalate" appears in the name, PETE does not use phthalates as plasticizers. The terephthalate polymer PETE and the phthalate ester plasticizers are chemically different substances. Nevertheless, many studies have found phthalates such as DEHP in bottled water and soda. One hypothesis is that these may have been introduced during plastic recycling.
See also
Xenoestrogen
Non-phthalate plasticizers such as
1,2-Cyclohexane dicarboxylic acid diisononyl ester,
Dioctyl terephthalate, and
Citrates
Antiandrogens in the environment
References
Further reading
External links
Video of Steve Risotto of the American Chemistry Council, 23 October 2009
Plasticizers
Suspected testicular toxicants
Suspected fetotoxicants
Suspected teratogens
Endocrine disruptors | Phthalates | [
"Chemistry"
] | 5,823 | [
"Endocrine disruptors"
] |
343,663 | https://en.wikipedia.org/wiki/Set-theoretic%20limit | In mathematics, the limit of a sequence of sets (subsets of a common set ) is a set whose elements are determined by the sequence in either of two equivalent ways: (1) by upper and lower bounds on the sequence that converge monotonically to the same set (analogous to convergence of real-valued sequences) and (2) by convergence of a sequence of indicator functions which are themselves real-valued. As is the case with sequences of other objects, convergence is not necessary or even usual.
More generally, again analogous to real-valued sequences, the less restrictive limit infimum and limit supremum of a set sequence always exist and can be used to determine convergence: the limit exists if the limit infimum and limit supremum are identical. (See below). Such set limits are essential in measure theory and probability.
It is a common misconception that the limits infimum and supremum described here involve sets of accumulation points, that is, sets of $x$ that are limits of points $x_n$, where each $x_n$ is in some $A_n$. This is only true if convergence is determined by the discrete metric (that is, $x_n \to x$ only if there is an $N$ such that $x_n = x$ for all $n \ge N$). This article is restricted to that situation as it is the only one relevant for measure theory and probability. See the examples below. (On the other hand, there are more general topological notions of set convergence that do involve accumulation points under different metrics or topologies.)
Definitions
The two definitions
Suppose that $A_1, A_2, \ldots$ is a sequence of sets. The two equivalent definitions are as follows.
Using union and intersection: define the limit infimum and the limit supremum of the sequence from the intersections and unions of its tails, as in the display below. If these two sets are equal, then the set-theoretic limit of the sequence exists and is equal to that common set. Either set as described above can be used to get the limit, and there may be other means to get the limit as well.
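In the usual notation, these two bounding sets can be written as
\[
\liminf_{n\to\infty} A_n \;=\; \bigcup_{n\ge 1}\,\bigcap_{j\ge n} A_j
\qquad\text{and}\qquad
\limsup_{n\to\infty} A_n \;=\; \bigcap_{n\ge 1}\,\bigcup_{j\ge n} A_j .
\]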
Using indicator functions: let $\mathbf{1}_{A_n}(x)$ equal $1$ if $x \in A_n$ and $0$ otherwise. Define the limit infimum and the limit supremum of the sequence from the pointwise limit infimum and limit supremum of the real-valued sequence $\mathbf{1}_{A_n}(x)$, as in the display below. Again, if these two sets are equal, then the set-theoretic limit of the sequence exists and is equal to that common set, and either set as described above can be used to get the limit.
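Concretely, in this notation,
\[
\liminf_{n\to\infty} A_n \;=\; \Bigl\{ x : \liminf_{n\to\infty} \mathbf{1}_{A_n}(x) = 1 \Bigr\}
\qquad\text{and}\qquad
\limsup_{n\to\infty} A_n \;=\; \Bigl\{ x : \limsup_{n\to\infty} \mathbf{1}_{A_n}(x) = 1 \Bigr\} .
\]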
To see the equivalence of the definitions, consider the limit infimum. The use of De Morgan's law below explains why this suffices for the limit supremum. Since indicator functions take only the values $0$ and $1$, $\liminf_{n\to\infty} \mathbf{1}_{A_n}(x) = 1$ if and only if $\mathbf{1}_{A_n}(x)$ takes the value $0$ only finitely many times. Equivalently,
$x \in \bigcup_{n\ge 1}\bigcap_{j\ge n} A_j$ if and only if there exists $n$ such that the element $x$ is in $A_j$ for every $j \ge n$, which is to say, if and only if $x \notin A_j$ for only finitely many $j$.
Therefore, $x$ is in the limit infimum if and only if $x$ is in all but finitely many $A_n$. For this reason, a shorthand phrase for the limit infimum is "$x$ is in $A_n$ all but finitely often", typically expressed by writing "$A_n$ a.b.f.o.".
Similarly, an element $x$ is in the limit supremum if, no matter how large $n$ is, there exists $j \ge n$ such that the element is in $A_j$. That is, $x$ is in the limit supremum if and only if $x$ is in infinitely many $A_n$. For this reason, a shorthand phrase for the limit supremum is "$x$ is in $A_n$ infinitely often", typically expressed by writing "$A_n$ i.o.".
To put it another way, the limit infimum consists of elements that "eventually stay forever" (are in every set $A_j$ after some $n$), while the limit supremum consists of elements that "never leave forever" (are in some set $A_j$ after every $n$). Or more formally:
$x \in \liminf_{n\to\infty} A_n$ if and only if there is an $n$ such that $x \in A_j$ for all $j \ge n$, and
$x \in \limsup_{n\to\infty} A_n$ if and only if for every $n$ there is a $j \ge n$ with $x \in A_j$.
Monotone sequences
The sequence $(A_n)$ is said to be nonincreasing if $A_{n+1} \subseteq A_n$ for each $n$, and nondecreasing if $A_n \subseteq A_{n+1}$ for each $n$. In each of these cases the set limit exists. Consider, for example, a nonincreasing sequence $(A_n)$. Then $\bigcap_{j\ge n} A_j = \bigcap_{j\ge 1} A_j$ and $\bigcup_{j\ge n} A_j = A_n$, so that $\liminf_{n\to\infty} A_n = \bigcap_{j\ge 1} A_j = \limsup_{n\to\infty} A_n$.
From these it follows that $\lim_{n\to\infty} A_n = \bigcap_{n\ge 1} A_n$.
Similarly, if $(A_n)$ is nondecreasing then $\lim_{n\to\infty} A_n = \bigcup_{n\ge 1} A_n$.
The Cantor set is defined this way.
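A minimal worked illustration of the nonincreasing case, with sets chosen here purely for concreteness, is $A_n = [0, \tfrac{1}{n}]$:
\[
A_{n+1} \subseteq A_n \text{ for every } n,
\qquad\text{so}\qquad
\lim_{n\to\infty} A_n \;=\; \bigcap_{n\ge 1} \Bigl[0, \tfrac{1}{n}\Bigr] \;=\; \{0\}.
\]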
Properties
If the limit of $\mathbf{1}_{A_n}(x)$, as $n$ goes to infinity, exists for all $x$, then the set-theoretic limit of $A_n$ exists and equals $\{x : \lim_{n\to\infty} \mathbf{1}_{A_n}(x) = 1\}$. Otherwise, the limit of $A_n$ does not exist.
It can be shown that the limit infimum is contained in the limit supremum, $\liminf_{n\to\infty} A_n \subseteq \limsup_{n\to\infty} A_n$: for example, simply by observing that $x \in A_n$ all but finitely often implies $x \in A_n$ infinitely often.
Using the monotonicity of $\bigcap_{j\ge n} A_j$ (nondecreasing in $n$) and of $\bigcup_{j\ge n} A_j$ (nonincreasing in $n$), the limit infimum and limit supremum can themselves be written as monotone set limits: $\liminf_{n\to\infty} A_n = \lim_{n\to\infty} \bigcap_{j\ge n} A_j$ and $\limsup_{n\to\infty} A_n = \lim_{n\to\infty} \bigcup_{j\ge n} A_j$.
By using De Morgan's law twice, with set complement $A^c = X \setminus A$: $\liminf_{n\to\infty} A_n = \bigl(\limsup_{n\to\infty} A_n^c\bigr)^c$ and likewise $\limsup_{n\to\infty} A_n = \bigl(\liminf_{n\to\infty} A_n^c\bigr)^c$. That is, $x \in A_n$ all but finitely often is the same as $x \in A_n^c$ only finitely often.
From the second definition above and the definitions for limit infimum and limit supremum of a real-valued sequence, $\mathbf{1}_{\liminf_{n\to\infty} A_n}(x) = \liminf_{n\to\infty} \mathbf{1}_{A_n}(x)$ and $\mathbf{1}_{\limsup_{n\to\infty} A_n}(x) = \limsup_{n\to\infty} \mathbf{1}_{A_n}(x)$.
Suppose $\mathcal{F}$ is a σ-algebra of subsets of $X$. That is, $\mathcal{F}$ is nonempty and is closed under complement and under unions and intersections of countably many sets. Then, by the first definition above, if each $A_n \in \mathcal{F}$ then both $\liminf_{n\to\infty} A_n$ and $\limsup_{n\to\infty} A_n$ are elements of $\mathcal{F}$.
Examples
Let Then
and
so exists.
Change the previous example to Then
and
so does not exist, despite the fact that the left and right endpoints of the intervals converge to 0 and 1, respectively.
Let Then
is the set of all rational numbers between 0 and 1 (inclusive), since even for and is an element of the above. Therefore,
On the other hand, which implies
In this case, the sequence does not have a limit. Note that is not the set of accumulation points, which would be the entire interval (according to the usual Euclidean metric).
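A further simple example of a sequence with no set-theoretic limit, with the alternating sets chosen here for illustration, is
\[
A_n = \begin{cases} (0,1) & n \text{ odd} \\ (1,2) & n \text{ even} \end{cases}
\qquad\Longrightarrow\qquad
\liminf_{n\to\infty} A_n = \varnothing,
\quad
\limsup_{n\to\infty} A_n = (0,1)\cup(1,2),
\]
so the limit infimum and limit supremum differ and the limit does not exist, even though every point of $(0,1)\cup(1,2)$ lies in infinitely many of the $A_n$.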
Probability uses
Set limits, particularly the limit infimum and the limit supremum, are essential for probability and measure theory. Such limits are used to calculate (or prove) the probabilities and measures of other, more purposeful, sets. For the following, $(X, \mathcal{F}, \mathbb{P})$ is a probability space, which means $\mathcal{F}$ is a σ-algebra of subsets of $X$ and $\mathbb{P}$ is a probability measure defined on that σ-algebra. Sets in the σ-algebra are known as events.
If $A_1, A_2, \ldots$ is a monotone sequence of events in $\mathcal{F}$, then $\lim_{n\to\infty} A_n$ exists and $\mathbb{P}\bigl(\lim_{n\to\infty} A_n\bigr) = \lim_{n\to\infty} \mathbb{P}(A_n)$.
Borel–Cantelli lemmas
In probability, the two Borel–Cantelli lemmas can be useful for showing that the limsup of a sequence of events has probability equal to 1 or to 0. The first (original) Borel–Cantelli lemma states that if the sum of the probabilities of the events $A_n$ is finite, then the probability of $\limsup_{n\to\infty} A_n$ is zero.
The second Borel–Cantelli lemma is a partial converse: if the events $A_n$ are mutually independent and the sum of their probabilities diverges, then the probability of $\limsup_{n\to\infty} A_n$ is one.
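In symbols, the standard statements of the two lemmas are
\[
\sum_{n=1}^{\infty} \mathbb{P}(A_n) < \infty
\;\Longrightarrow\;
\mathbb{P}\Bigl(\limsup_{n\to\infty} A_n\Bigr) = 0,
\]
\[
\sum_{n=1}^{\infty} \mathbb{P}(A_n) = \infty
\ \text{ and }\ A_1, A_2, \ldots \text{ independent}
\;\Longrightarrow\;
\mathbb{P}\Bigl(\limsup_{n\to\infty} A_n\Bigr) = 1.
\]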
Almost sure convergence
One of the most important applications to probability is for demonstrating the almost sure convergence of a sequence of random variables. The event that a sequence of random variables $X_1, X_2, \ldots$ converges to another random variable $X$ is the event that the real-valued sequences $X_n(\omega)$ converge to $X(\omega)$. It would be a mistake, however, to write this simply as a limsup of events: convergence requires the differences $|X_n - X|$ to become and stay small for every margin of error, not just for a single one. Instead, the convergence event is an intersection, over all margins, of limit-infimum events.
Therefore, its probability can be computed from the probabilities of these limit-infimum events (equivalently, from the limit-supremum events of their complements), as in the display below.
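In a standard formulation, using margins $1/k$,
\[
\Bigl\{\lim_{n\to\infty} X_n = X\Bigr\}
\;=\; \bigcap_{k\ge 1} \liminf_{n\to\infty} \Bigl\{\, |X_n - X| < \tfrac{1}{k} \,\Bigr\}
\;=\; \Bigl(\, \bigcup_{k\ge 1} \limsup_{n\to\infty} \Bigl\{\, |X_n - X| \ge \tfrac{1}{k} \,\Bigr\} \Bigr)^{c},
\]
so $\mathbb{P}\bigl(\lim_{n\to\infty} X_n = X\bigr) = 1$ exactly when $\mathbb{P}\bigl(\limsup_{n\to\infty} \{ |X_n - X| \ge \tfrac{1}{k} \}\bigr) = 0$ for every $k \ge 1$.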
See also
References
Set theory
Probability theory | Set-theoretic limit | [
"Mathematics"
] | 1,382 | [
"Mathematical logic",
"Set theory"
] |
343,674 | https://en.wikipedia.org/wiki/Chinese%20space%20program | The space program of the People's Republic of China is about the activities in outer space conducted and directed by the People's Republic of China. The roots of the Chinese space program trace back to the 1950s, when, with the help of the newly allied Soviet Union, China began development of its first ballistic missile and rocket programs in response to the perceived American (and, later, Soviet) threats. Driven by the successes of Soviet Sputnik 1 and American Explorer 1 satellite launches in 1957 and 1958 respectively, China would launch its first satellite, Dong Fang Hong 1 in April 1970 aboard a Long March 1 rocket, making it the fifth nation to place a satellite in orbit.
China has one of the most active space programs in the world. With space launch capability provided by the Long March rocket family and four spaceports (Jiuquan, Taiyuan, Xichang, Wenchang) within its border, China conducts either the highest or the second highest number of orbital launches each year. It operates a satellite fleet consisting of a large number of communications, navigation, remote sensing and scientific research satellites. The scope of its activities has expanded from low Earth orbit to the Moon and Mars. China is one of the three countries, alongside the United States and Russia, with independent human spaceflight capability.
Currently, most of the space activities carried out by China are managed by the China National Space Administration (CNSA) and the People's Liberation Army Strategic Support Force, which directs the astronaut corps and the Chinese Deep Space Network. Major programs include China Manned Space Program, BeiDou Navigation Satellite System, Chinese Lunar Exploration Program, Gaofen Observation and Planetary Exploration of China. In recent years, China has conducted several missions, including Chang'e-4, Chang'e-5, Chang’e-6, Tianwen-1 and Tiangong space station.
History
Early years (1950s to mid-1970s)
The Chinese space program began in the form of missile research in the 1950s. After its birth in 1949, the newly founded People's Republic of China pursued missile technology to build up the nation's defense for the Cold War. In 1955, Qian Xuesen (), the world-class rocketry scientist, returned to China from the United States. In 1956, Qian submitted a proposal for the development of China's missile program, which was approved in just a few months. On October 8 of that year, China's first missile research institute, the Fifth Research Academy under the Ministry of National Defense, was established with fewer than 200 staff, most of whom were recruited by Qian. The event was later recognized as the birth of China's space program.
To fully utilize all available resources, China kick-started its missile development by manufacturing a licensed copy of two Soviet R-2 missiles, which were secretly shipped to China in December 1957 as part of the cooperative technology transfer program between the Soviet Union and China. The Chinese version of the missile was given the code name "1059" with the expectation of being launched in 1959. But the target date was soon postponed due to various difficulties arising from the sudden withdrawal of Soviet technical assistance due to the Sino-Soviet split. Meanwhile, China started constructing its first missile test site in the Gobi desert of Inner Mongolia, which later became the famous Jiuquan Satellite Launch Center (), China's first spaceport.
After the launch of mankind's first artificial satellite, Sputnik 1, by the Soviet Union on October 4, 1957, Mao Zedong decided during the 8th National Congress of the Chinese Communist Party (CCP) on May 17, 1958, to make China an equal of the superpowers (), by adopting Project 581 with the objective of placing a satellite in orbit by 1959 to celebrate the 10th anniversary of the PRC's founding. This goal was soon proven unrealistic, and it was decided to focus on the development of sounding rockets first.
The first achievement of the program was the launch of T-7M, a sounding rocket that eventually reached a height of 8 km on February 19, 1960. It was the first rocket developed by Chinese engineers. The success was praised by Mao Zedong as a good beginning for indigenous Chinese rocket development. However, all Soviet technological assistance was abruptly withdrawn after the 1960 Sino-Soviet split, and Chinese scientists continued the program with extremely limited resources and knowledge. It was under these harsh conditions that China successfully launched the first "missile 1059", fueled by alcohol and liquid oxygen, on December 5, 1960, marking a successful imitation of the Soviet missile. The missile 1059 was later renamed Dongfeng-1 (DF-1, ).
While the imitation of the Soviet missile was still in progress, the Fifth Academy led by Qian had begun the development of Dongfeng-2 (DF-2), the first missile to be designed and built completely by the Chinese. After a failed attempt in March 1962, multiple improvements, and hundreds of engine firing tests, DF-2 achieved its first successful launch on its second attempt on June 29, 1964 in Jiuquan. It was considered a major milestone in China's indigenous missile development history.
In the next few years, Dongfeng-2 conducted seven more launches, all of which were successful. On October 27, 1966, as part of the "Two Bombs, One Satellite" project, Dongfeng-2A, an improved version of DF-2, successfully launched and detonated a nuclear warhead at its target. As China's missile industry matured, a new plan for developing carrier rockets and launching satellites was proposed and approved in 1965, with the name Project 581 changed to Project 651. On January 30, 1970, China successfully tested the newly developed two-stage Dongfeng-4 (DF-4) missile, which demonstrated critical technologies such as rocket staging, engine in-flight ignition, and attitude control. The DF-4 was used to develop the Long March 1 (LM-1 or CZ-1, ), with a newly designed spin-up orbital insertion solid-propellant rocket motor third stage added to the two existing nitric acid/UDMH liquid propellant stages.
China's space program benefited from the Third Front campaign to develop basic industry and national defense industry in China's rugged interior in preparation for potential invasion by the Soviet Union or the United States. Almost all of China's new aerospace work units in the late 1960s and early 1970s were established as part of the Third Front and Third Front projects included expansion of Jiuquan Satellite Launch Center, building Xichang Satellite Launch Center, and building Taiyuan Satellite Launch Center.
On April 24, 1970, China successfully launched the 173 kg Dong Fang Hong I (, meaning The East Is Red I) atop a Long March 1 (CZ-1, ) rocket from Jiuquan Satellite Launch Center. It was the heaviest first satellite placed into orbit by a nation. The third stage of the Long March 1 was specially equipped with a 40 m2 solar reflector () deployed by the centrifugal force developed by the spin-up orbital insertion solid propellant stage. China's second satellite was launched with the last Long March 1 on March 3, 1971. The 221 kg ShiJian-1 (SJ-1, ) was equipped with a magnetometer and cosmic-ray/x-ray detectors.
In addition to the satellite launch, China also made small progress in human spaceflight. The first successful launch and recovery of a T-7A(S1) sounding rocket carrying a biological experiment (it carried eight white mice) was on July 19, 1964, from Base 603 (). As the space race between the two superpowers reached its climax with the conquest of the Moon, Mao and Zhou Enlai decided on July 14, 1967, that China should not be left behind, and started China's own crewed space program. China's first spacecraft designed for human occupancy was named Shuguang-1 () in January 1968. China's Space Medical Institute () was founded on April 1, 1968, and the Central Military Commission issued the order to start the selection of astronauts. The first crewed space program, known as Project 714, was officially adopted in April 1971 with the goal of sending two astronauts into space by 1973 aboard the Shuguang spacecraft. The first screening process for astronauts had already ended on March 15, 1971, with 19 astronauts chosen. But the program was soon canceled in the same year due to political turmoil, ending China's first human spaceflight attempt.
While CZ-1 was being developed, the development of China's first long-range intercontinental ballistic missile, namely Dongfeng-5 (DF-5), has started since 1965. The first test flight of DF-5 was conducted in 1971. After that, its technology was adopted by two different models of Chinese medium-lift launch vehicles being developed. One of the two was Feng Bao 1 (FB-1, ) developed by Shanghai's 2nd Bureau of Mechanic-Electrical Industry, the predecessor of Shanghai Academy of Spaceflight Technology (SAST). The other parallel medium-lift LV program, also based on the same DF-5 ICBM and known as Long March 2 (CZ-2, ), was started in Beijing by the First Research Academy of the Seventh Ministry of Machine Building, which later became China Academy of Launch Vehicle Technology (CALT). Both FB-1 and CZ-2 were fueled by N2O4 and UDMH, the same propellant used by DF-5.
On July 26, 1975, FB-1 made its first successful flight, placing the 1107-kilogram Changkong-1 () satellite into orbit. It was the first time that China launched a payload heavier than 1 metric ton. Four months later, on November 26, CZ-2 successfully launched the FSW-0 No.1 () recoverable satellite into orbit. The satellite returned to earth and was successfully recovered three days later, making China the third country capable of recovering a satellite, after the Soviet Union and the United States. FB-1 and CZ-2, which were developed by two different institutes, were later evolved into two different branches of the classic Long March rocket family: Long March 4 and Long March 2.
As part of the Third Front effort to relocate critical defense infrastructure to the relatively remote interior (away from the Soviet border), it was decided to construct a new space center in the mountainous region of Xichang in the Sichuan province, code-named Base 27. After expansion, the Northern Missile Test Site was upgraded as a test base in January 1976 to become the Northern Missile Test Base () known as Base 25.
New era (late 1970s to 1980s)
After Mao died on September 9, 1976, his rival, Deng Xiaoping, denounced during the Cultural Revolution as reactionary and therefore forced to retire from all his offices, slowly re-emerged as China's new leader in 1978. At first, new development slowed. Then, several key projects deemed unnecessary were simply cancelled—the Fanji ABM system, the Xianfeng Anti-Missile Super Gun, the ICBM Early Warning Network 7010 Tracking Radar and the land-based high-power anti-missile laser program. Nevertheless, some development did proceed. The first Yuanwang-class space tracking ship was commissioned in 1979. The first full-range test of the DF-5 ICBM was conducted on May 18, 1980. The payload reached its target located 9300 km away in the South Pacific () and was retrieved five minutes later by helicopter. In 1982, Long March 2C (CZ-2C, ), an upgraded version of Long March 2 based on DF-5 with a 2500 kg low Earth orbit (LEO) payload capacity, completed its maiden flight. Long March 2C, along with many of its derived models, eventually became the backbone of the Chinese space program in the following decades.
As China shifted its focus from political campaigns to economic development in the late 1970s, the demand for communications satellites surged. As a result, the Chinese communications satellite program, code name Project 331, was started on March 31, 1975. The first generation of China's own communications satellites was named Dong Fang Hong 2 (DFH-2, ), whose development was led by the famous satellite expert Sun Jiadong. Since communications satellites work in geostationary orbit, much higher than what the existing carrier rockets could reach, the launching of communications satellites became the next big challenge for the Chinese space program.
The task was assigned to Long March 3 (CZ-3, ), the most advanced Chinese launch vehicle in the 1980s. Long March 3 was a derivative of Long March 2C with an additional third stage, designed to send payloads to geosynchronous transfer orbit (GTO). When the development of Long March 3 began in the early 1970s, the engineers had to make a choice between the two options for the third stage engine: either the traditional engine fueled by the same hypergolic fuels used by the first two stages, or the advanced cryogenic engine fueled by liquid hydrogen and liquid oxygen. Although the cryogenic engine plan was much more challenging than the other one, it was eventually chosen by Chief Designer Ren Xinmin (), who had foreseen the great potential of its use for the Chinese space program in the coming future. The development of cryogenic engine with in-flight re-ignition capability began in 1976 and wasn't completed until 1983. At the same time, Xichang Satellite Launch Center () was chosen as the launch site of Long March 3 due to its low latitude, which provides better GTO launch capability.
On January 29, 1984, Long March 3 performed its maiden flight from Xichang, carrying the first experimental DFH-2 satellite. Unfortunately, because the cryogenic third-stage engine failed to re-ignite during flight, the satellite was placed into a 400 km LEO instead of its intended GTO. Despite the rocket failure, the engineers managed to send the satellite into an elliptic orbit with an apoapsis of 6480 km using the satellite's own propulsion system. A series of tests were then conducted to verify the performance of the satellite. Thanks to the hard work of the engineers, the cause of the cryogenic engine failure was located quickly, and improvements were applied to the second rocket awaiting launch.
On April 8, 1984, less than 70 days after the first failure, Long March 3 launched again from Xichang. It successfully inserted the second experimental DFH-2 satellite into target GTO on its second attempt. The satellite reached the final orbit location on April 16 and was handed over to the user on May 14, becoming China's first geostationary communications satellite. The success made China the fifth country in the world with independent geostationary satellite development and launch capability. Less than two years later, on February 1, 1986, the first practical DFH-2 communications satellite was launched into orbit atop a Long March 3 rocket, ending China's reliance on foreign communications satellite.
During the 1980s, human spaceflight worldwide became significantly more active than before as the American Space Shuttle and Soviet space stations were put in service respectively. It was in the same period that the previously canceled Chinese human spaceflight program was quietly revived. In March 1986, Project 863 () was proposed by four scientists, Wang Daheng, Wang Ganchang, Yang Jiachi, and Chen Fangyun. The goal of the project was to stimulate the development of advanced technologies, including human spaceflight. Following the approval of Project 863, early studies for the Chinese human spaceflight program of the new era began.
The rise and fall of commercial launches (1990s)
After the initial success of Long March 3, further development of the Long March rocket series allowed China to announce a commercial launch program for international customers in 1985, which opened up a decade of commercial launches by Chinese launch vehicles in the 1990s. The launch service was provided by China Great Wall Industry Corporation (CGWIC) with support from CALT, SAST and China Satellite Launch and Tracking Control General (CLTC). The first contract was signed with AsiaSat in January 1989 to launch AsiaSat 1, a communications satellite manufactured by Hughes. It was previously a satellite owned by Westar but placed into a wrong orbit due to kick motor malfunction before being recovered in the STS-51-A mission in 1984.
On April 7, 1990, a Long March 3 rocket successfully launched AsiaSat 1 into target geosynchronous transfer orbit with high precision, fulfilling the contract. As its very first commercial launch ended in full success, the Chinese commercial launch program was introduced to the world with a good opening.
Although Long March 3 completed its first commercial mission as expected, its 1,500 kg payload capacity could not place the new generation of communications satellites, which usually weighed over 2,500 kg, into geostationary transfer orbit. To deal with the problem, China introduced Long March 2E (CZ-2E, ), the first Chinese rocket with strap-on boosters, capable of placing up to 3,000 kg of payload into GTO. The development of Long March 2E began in November 1988 when CGWIC was awarded the contract to launch two Optus satellites built by Hughes, mostly thanks to its low price. At that time, neither the rocket nor the launch facility was anything more than concepts on paper. Yet the engineers of CALT eventually built all the hardware from scratch in a record-breaking period of 18 months, which impressed the American experts. On September 16, 1990, Long March 2E, carrying an Optus mass simulator, conducted its test flight and reached the intended orbit as designed. The success of the test flight was a huge inspiration for all parties involved and brought optimism about the coming launch of the actual Optus satellites.
However, an accident occurred during this highly anticipated launch on March 22, 1992, at Xichang Satellite Launch Center. After initial ignition, all engines shut down unexpectedly. The rocket was unable to lift off, resulting in a launch abort while being live-streamed to the world. The post-launch investigation revealed that some minor aluminum scraps had caused a short circuit in the control circuitry, triggering an emergency shutdown of all engines. Although the huge vibration brought by the short-lived ignition had led to a rotation of the whole rocket by 1.5 degrees clockwise and partial displacement of the supporting blocks, the rocket, filled with propellant, was still standing on the launch pad when the dust settled. After a rescue operation that lasted 39 hours, the payload, rocket, and launch facilities were all preserved intact, avoiding huge losses. Less than five months later, on August 14, a new Long March 2E rocket successfully lifted off from Xichang, sending the Optus satellite into orbit.
In June 1993, the China Aerospace Corporation was founded in Beijing. It was also granted the title of China National Space Administration (CNSA). An improved version of Long March 3, namely Long March 3A (CZ-3A, ) with a 2,600 kg payload capacity to GTO, was put into service in 1994. However, on February 15, 1996, during the first flight of the further improved Long March 3B (CZ-3B, ) rocket carrying Intelsat 708, the rocket veered off course immediately after clearing the launch platform, crashing 22 seconds later. The crash killed 6 people and injured 57, making it the most disastrous event in the history of the Chinese space program. Although a Long March 3 rocket successfully launched the APStar 1A communications satellite on July 3, it encountered a third-stage re-ignition malfunction during the launch of ChinaSat 7 on August 18, resulting in another launch failure.
The two launch failures within a few months dealt a severe blow to the reputation of the Long March rockets. As a consequence, the Chinese commercial launch service faced canceled orders, refusals of insurance, and greatly increased insurance premiums. Under such harsh circumstances, the Chinese space industry initiated full-scale quality improvement activities. A closed-loop quality management system was established to fix quality issues in both the technical and administrative aspects. The strict quality management system markedly increased the success rate thereafter. Within the next 15 years, from October 20, 1996, up until August 16, 2011, China achieved 102 consecutive successful space launches. On August 20, 1997, Long March 3B accomplished its first successful flight on its second attempt, placing the 3,770 kg Agila-2 communications satellite into orbit. It offered a GTO payload capacity as high as 5,000 kg, enough to put the various kinds of heavy satellites available on the international market into orbit. From then on, Long March 3B became the backbone of China's medium and high Earth orbit launches and remained China's most powerful rocket for nearly 20 years. In 1998, the administrative branch of China Aerospace Corporation was split off and merged into the newly founded Commission for Science, Technology and Industry for National Defense while retaining the title of CNSA. The remaining part was split again into China Aerospace Science and Technology Corporation (CASC) and China Aerospace Science and Industry Corporation (CASIC) in 1999.
While the Long March rockets were trying to take back the commercial launch market they had lost, political pressure from the United States mounted. In 1998, the United States accused Hughes and Loral of exporting technologies that inadvertently helped China's ballistic missile program while resolving issues that caused the Long March rocket launch failures. The accusation ultimately led to the release of the Cox Report, which further accused China of stealing sensitive technologies. In the next year, the U.S. Congress passed legislation that put commercial satellites on the list of items restricted by the International Traffic in Arms Regulations (ITAR) and prohibited launches of satellites containing U.S.-made components on Chinese rockets. The regulation abruptly ended commercial cooperation between China and the United States. The two Iridium satellites launched by Long March 2C on June 12, 1999, became the last batch of American satellites launched by a Chinese rocket. Furthermore, due to the strict regulation and the U.S. dominance of the space industry, the Long March rockets were de facto excluded from the international commercial launch market, causing a stagnation of the Chinese commercial launch program over the next few years.
Despite the turmoil of commercial launches, the Chinese space program still made a huge breakthrough near the end of the decade. At 6:30 (China Standard Time) on November 20, 1999, Shenzhou-1 (), the first uncrewed Shenzhou spacecraft () designed for human spaceflight, was successfully launched atop a Long March 2F (CZ-2F, ) rocket from Jiuquan Satellite Launch Center. The spacecraft was inserted into low earth orbit 10 minutes after lift off. After orbiting the Earth for 14 rounds, the spacecraft initiated the return procedure as planned and landed safely in Inner Mongolia at 03:41 on November 21, marking the full success of China's first Shenzhou test flight. Following the announcement of the success of the mission, the previously secretive Chinese human spaceflight program, namely the China Manned Space Program (CMS, ), was formally made public. CMS, which was formally approved on September 21, 1992, by the CCP Politburo Standing Committee as Project 921, has been the most ambitious space program of China since its birth. Its goals can be described as "Three Steps": Crewed spacecraft launch and return; Space laboratory for short-term missions; Long-term modular space station. Due to its complex nature, a series of advanced projects were introduced by the program, including Shenzhou spacecraft, Long March 2F rocket, human spaceflight launch site in Jiuquan, Beijing Aerospace Flight Control Center, and Astronaut Center of China in Beijing. In terms of astronauts, fourteen candidates were selected to form the People's Liberation Army Astronaut Corps and started accepting spaceflight training.
Breakthroughs by Shenzhou and Chang'e (2000s)
Since the beginning of 21st century, China has been experiencing rapid economic growth, which led to higher investment into space programs and multiple major achievements in the following decades. In November 2000, the Chinese government released its first white paper entitled China's Space Activities, which described its goals in the next decade as:
To build up an earth observation system for long-term stable operation.
To set up an independently operated satellite broadcasting and telecommunications system.
To establish an independent satellite navigation and positioning system.
To upgrade the overall level and capacity of China's launch vehicles.
To realize manned spaceflight and establish an initially complete R&D and testing system for manned space projects.
To establish a coordinated and complete national satellite remote-sensing application system.
To develop space science and explore outer space.
The independent satellite navigation and positioning system mentioned by the white paper was Beidou (). The development of Beidou dates back to 1983 when academician of the Chinese Academy of Sciences Chen Fangyun designed a primitive satellite navigation systems consisting of two satellites in the geostationary orbit. Sun Jiadong, the famous satellite expert of China, later proposed a "three-step" strategy to develop China's own satellite navigation system, whose service coverage expands from China to Asia then the globe. The two satellites of the "first step", namely BeiDou-1, were launched in October and December 2000. As an experimental system, Beidou-1 offered basic positioning, navigation and timing services to limited areas in and around China. After a few years of experiment, China started the construction of BeiDou-2, a more advanced system to serve the Asia-Pacific region by launching the first two satellites in 2007 and 2009 respectively.
Another major goal specified by the white paper was to realize crewed spaceflight. The China Manned Space Program continued its steady evolvement in the 21st century after its initial success. From January 2001 to January 2003, China conducted three uncrewed Shenzhou spacecraft test flights, validating all systems required by human spaceflight. Among these missions, the Shenzhou-4 launched on December 30, 2002, was the last uncrewed rehearsal of Shenzhou. It flew for 6 days and 18 hours and orbited around the Earth for 108 circles before returning on January 5, 2003.
On October 15, 2003, the first Chinese astronaut Yang Liwei () was launched aboard Shenzhou-5 () spacecraft atop a Long March 2F rocket from Jiuquan Satellite Launch Center. The spacecraft was inserted into orbit ten minutes after launch, making Yang the first Chinese in space. After a flight of more than 21 hours and 14 orbits around the Earth, the spacecraft returned and landed safely in Inner Mongolia in the next morning, followed by Yang's walking out of the return capsule by himself. The complete success of Shenzhou 5 mission was widely celebrated in China and received worldwide endorsements from different people and parties, including UN Secretary General Kofi Annan. The mission, officially recognized by China as the second milestone of its space program after the launch of Dongfanghong-1, marked China's standing as the third country capable of completing independent human spaceflight, ending the over 40-year long duopoly by the Soviet Union/Russia and the United States.
The China Manned Space Program did not stop its footsteps after its historic first crewed spaceflight. In 2005, two Chinese astronauts, Fei Junlong () and Nie Haisheng (), safely completed China's first "multi-person and multi-day" spaceflight mission aboard Shenzhou-6 () between October 12 and 17. On 25 September 2008, Shenzhou-7 () was launched into space with three astronauts, Zhai Zhigang (), Liu Boming () and Jing Haipeng (). During the flight, Zhai and Liu conducted China's first spacewalk in orbit.
Around the same time, China began preparation for extraterrestrial exploration, starting with the Moon. The early research of Moon exploration of China dates back to 1994 when its necessity and feasibility were studied and discussed among Chinese scientists. As a result, the white paper of 2000 enlisted the Moon as the primary target of China's deep space exploration within the decade. In January 2004, the year after China's first human spaceflight mission, the Chinese Moon orbiting program was formally approved and was later transformed into Chinese Lunar Exploration Program (CLEP, ). Just like several other space programs of China, CLEP was divided into three phases, which were simplified as "Orbiting, Landing, Returning" (), all to be executed by robotic probes at the time of planning.
On October 24, 2007, the first lunar orbiter Chang'e-1 () was successfully launched by a Long March 3A rocket, and was inserted into Moon orbit on November 7, becoming China's first artificial satellite of the Moon. It then performed a series of surveys and produced China's first lunar map. On March 1, 2009, Chang'e-1, which had been operating longer than its designed life span, performed a controlled hard landing on lunar surface, concluding the Chang'e-1 mission. Being China's first deep space exploration mission, Chang'e-1 was recognized by China as the third milestone of the Chinese space program and the admission ticket to the world club of deep space explorations.
In other areas, despite the harsh sanctions imposed by the United States since 1999, China still made some progress in terms of commercial launches within the first decade of the 21st century. In April 2005, China successfully conducted its first commercial launch since 1999 by launching the APStar 6 communications satellite manufactured by the French company Alcatel atop a Long March 3B rocket. In May 2007, China launched the NigComSat-1 satellite developed by the China Academy of Space Technology. This was the first time China provided the full service from satellite manufacture to launch for international customers.
Expansion and revolution (2010s)
From 2000 to 2010, China quadrupled its GDP and became the second largest economy in the world. Due to the rapid development of economic activities across the nation, the demand for high-resolution Earth observation systems increased markedly. To end the reliance on foreign high-resolution remote sensing data, China initiated the China High-resolution Earth Observation System program (), most commonly known as Gaofen (), in May 2010. Its purpose is to establish an all-day, all-weather Earth observation system to satisfy the requirements of social development as part of China's space infrastructure. The first Gaofen satellite, Gaofen 1, was launched into orbit on April 26, 2013, followed by more satellites launched into different orbits in the following years to cover different spectra. As of today, more than 30 Gaofen satellites are being operated by China, and the completion of the space-based section of Gaofen was announced in late 2022.
The Beidou Navigation Satellite System proceeded at extraordinary speed after the launch of the first Beidou-2 satellite in 2007. As many as five Beidou-2 navigation satellites were launched in 2010 alone. In late 2012, the Beidou-2 navigation system, consisting of 14 satellites, was completed and began providing service to the Asia-Pacific region. The construction of the more advanced Beidou-3 started in November 2017. Its buildup was even faster than before, as China launched 24 satellites into medium Earth orbit, 3 into inclined geosynchronous orbit, and 3 into geostationary orbit within just three years. The final Beidou-3 satellite was successfully launched by a Long March 3B rocket on June 23, 2020. On July 31, 2020, CCP general secretary Xi Jinping announced at the completion ceremony that the Beidou-3 system had been commissioned for global service. The completed Beidou-3 navigation system integrates navigation and communication functions, and possesses multiple service capabilities, including positioning, navigation and timing, short message communication, international search and rescue, satellite-based augmentation, ground augmentation and precise point positioning. It is now one of the four core system providers designated by the International Committee on Global Navigation Satellite Systems of the United Nations.
The China Manned Space Program continued to make breakthroughs in human spaceflight technologies in the 2010s. In the early 2000s, the Chinese crewed space program had engaged with Russia in technological exchanges regarding the development of a docking mechanism for space stations. Deputy Chief Designer Huang Weifen stated that, near the end of 2009, the China Manned Space Agency began to train astronauts on how to dock spacecraft. In order to practice space rendezvous and docking, China launched a target vehicle, Tiangong-1, in 2011, followed by the uncrewed Shenzhou 8. The two spacecraft performed China's first automatic rendezvous and docking on 3 November 2011, which verified the performance of the docking procedures and mechanisms. About nine months later, in June 2012, Tiangong 1 completed the first manual rendezvous and docking with Shenzhou 9, a crewed spacecraft carrying Jing Haipeng, Liu Wang and China's first female astronaut, Liu Yang. The successes of the Shenzhou 8 and 9 missions, especially the automatic and manual docking experiments, marked China's advancement in space rendezvous and docking. Tiangong 1 was later docked with the crewed spacecraft Shenzhou 10, carrying astronauts Nie Haisheng, Zhang Xiaoguang and Wang Yaping, who conducted multiple scientific experiments, gave lectures to over 60 million students in China, and performed more docking tests before returning to Earth safely after 15 days in space. The completion of the Shenzhou 7 to 10 missions demonstrated China's mastery of all basic human spaceflight technologies, ending phase 1 of the "Second Step".
Although Tiangong 1 was considered a space station prototype, its functionality still fell well short of a true space laboratory. Tiangong-2, China's first real space laboratory, was launched into orbit on September 15, 2016. It was visited by the Shenzhou 11 crew a month later. Two astronauts, Jing Haipeng and Chen Dong, entered Tiangong 2 and stayed for about 30 days, breaking China's record for the longest human spaceflight mission while carrying out various human-tended experiments. In April 2017, China's first cargo spacecraft, Tianzhou-1, docked with Tiangong 2 and completed multiple in-orbit propellant refueling tests.
In terms of deep space exploration, after completing the objective of "Orbiting" in 2007, the Chinese Lunar Exploration Program started preparing for the "Landing" phase. China's second lunar probe, Chang'e-2, was launched on October 1, 2010. It was the first Chinese probe to reach the Moon via a direct trans-lunar injection orbit, and it imaged the Sinus Iridum region where future landing missions were expected to occur. On December 2, 2013, a Long March 3B rocket launched Chang'e-3, China's first lunar lander, to the Moon. On December 14, Chang'e 3 successfully landed in the Sinus Iridum region, making China the third country to soft-land on an extraterrestrial body. A day later, the Yutu rover was deployed to the lunar surface and started its survey, achieving the goal of "landing and roving" for the second phase of CLEP.
In addition to lunar exploration, it is worth noting that China made its first attempt at interplanetary exploration during the same period. Yinghuo-1, China's first Mars orbiter, was launched on board the Russian Fobos-Grunt spacecraft as an additional payload in November 2011. Yinghuo-1 was a cooperative mission with the Russian space agency, and a relatively small project initiated by the National Space Science Center of the Chinese Academy of Sciences rather than a major space program managed by the state space agency. The Yinghuo-1 orbiter weighed about 100 kg and was carried by the Fobos-Grunt probe, from which it was expected to detach and be injected into Mars orbit after reaching Mars. However, due to an error in the onboard computer, the Fobos-Grunt probe failed to start its main engine and was stranded in low Earth orbit after launch. Two months later, Fobos-Grunt, along with the Yinghuo-1 orbiter, re-entered and burned up in the Earth's atmosphere, resulting in a mission failure. Although the Yinghuo-1 mission did not achieve its original goal, for reasons beyond China's control, it marked the dawn of Chinese interplanetary exploration by assembling, for the first time, a group of specialists dedicated to interplanetary research. On December 13, 2012, the Chinese lunar probe Chang'e 2, in an extended mission after concluding its primary tasks in lunar orbit, made a flyby of the asteroid Toutatis with a closest approach of 3.2 kilometers, making it China's first interplanetary probe. In 2016, the first independent Chinese Mars mission was formally approved and listed as one of the major tasks in the "White Paper on China's Space Activities in 2016". The mission, planned with unprecedented ambition, aimed to achieve Mars orbiting, landing and roving in a single attempt in 2020.
While China was making remarkable progress in all the areas above, the Long March rockets, the absolute foundation of the Chinese space program, were also undergoing a crucial transformation. Ever since the 1970s, the Long March rocket family had been using dinitrogen tetroxide and UDMH as propellants for its liquid engines. Although this hypergolic propellant combination is simple, cheap and reliable, its disadvantages, including toxicity, environmental damage, and low specific impulse, had kept Chinese carrier rockets from being competitive with those of other space powers since the mid-1980s. To remedy this situation, China began studying new propellants with the introduction of Project 863 in 1986. After early studies that lasted over a decade, the development of a 120-ton-thrust rocket engine burning LOX and kerosene in a staged combustion cycle was formally approved in 2000. Despite setbacks such as engine explosions during initial firing tests, the development team made breakthroughs in key technologies such as superalloy production and engine ignition, and completed the first long-duration firing test in 2006. The engine, named YF-100, was eventually certified in 2012, and the first engine for actual flight was ready in 2014. On September 20, 2015, the Long March 6, a small rocket using one YF-100 engine on its first stage, successfully conducted its maiden flight. On June 25, 2016, the medium-lift Long March 7, equipped with six YF-100 engines, completed its maiden flight in full success, increasing the maximum LEO payload capacity of Chinese rockets to 13.5 tons. The successes of Long March 6 and 7 signified the introduction of the "new generation of Long March rockets" powered by cleaner and more efficient engines.
The maiden launch of the Long March 7 was also the very first launch from the Wenchang Space Launch Site, located in Wenchang, Hainan Province, and marked Wenchang's inauguration on the world stage of space activities. Compared with the older Jiuquan, Taiyuan, and Xichang sites, the Wenchang Space Launch Site, whose construction began in September 2009, is China's newest and most advanced spaceport. Thanks to its low latitude, rockets launched from Wenchang can carry ten to fifteen percent more payload mass to orbit. Additionally, because of its geographic location, the drop zones for debris produced by rocket launches lie in the ocean, eliminating threats to people and facilities on the ground. Wenchang's coastal location also allows larger rockets to be delivered to the launch site by sea, which is difficult, if not impossible, for inland launch sites due to the size limits of the tunnels that rockets must pass through during transport.
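Part of the payload advantage of a low-latitude site comes from the Earth's rotation: the eastward surface speed scales with the cosine of latitude, so a site closer to the equator gives an eastbound rocket a larger free boost. The following is a rough back-of-the-envelope sketch, not an official figure; the latitudes (about 19.6°N for Wenchang and 28.2°N for Xichang) and the sidereal-day rotation rate are assumed reference values rather than numbers from this article.

```python
import math

# Assumed reference values (not taken from this article)
EQUATORIAL_RADIUS_M = 6_378_137   # Earth's equatorial radius
SIDEREAL_DAY_S = 86_164           # duration of one Earth rotation

def surface_rotation_speed(latitude_deg: float) -> float:
    """Eastward speed of the ground due to Earth's rotation, in m/s."""
    omega = 2 * math.pi / SIDEREAL_DAY_S
    return omega * EQUATORIAL_RADIUS_M * math.cos(math.radians(latitude_deg))

v_wenchang = surface_rotation_speed(19.6)   # ~438 m/s
v_xichang = surface_rotation_speed(28.2)    # ~410 m/s
print(f"Wenchang boost: {v_wenchang:.0f} m/s")
print(f"Xichang boost:  {v_xichang:.0f} m/s")
print(f"Difference:     {v_wenchang - v_xichang:.0f} m/s")
```

The few tens of m/s of extra free velocity is only part of the story; for geostationary transfer missions the smaller inclination change required from a low-latitude site also contributes substantially to the payload gain quoted above.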
The biggest breakthrough of the decade, if not of several decades, was brought by the Long March 5, the flagship of the new generation of Long March rockets and China's first heavy-lift launch vehicle. Early studies of the Long March 5 can be traced back to 1986, and the project was formally approved in the mid-2000s. It applied 247 new technologies during its development, and over 90% of its components were newly developed and used for the first time. Instead of using the classic 3.35-meter-diameter core stage and 2.25-meter-diameter side boosters, the 57-meter-tall Long March 5 consists of one 5-meter-diameter core stage burning LH2/LOX and four 3.35-meter-diameter side boosters burning kerosene/LOX. With a launch mass as high as 869 metric tons and a lift-off thrust of 10,573 kN, the Long March 5 is China's most powerful rocket, capable of lifting up to 25 tons of payload to LEO and 14 tons to GTO, more than 2.5 times the capacity of the previous record holder (Long March 3B) and nearly equal to the most powerful rocket in the world at that time (Delta IV Heavy). Because of its unprecedented capability, the Long March 5 was expected to be the keystone of the Chinese space program in the early 21st century. However, after a successful maiden flight in late 2016, the second launch of the Long March 5 on July 2, 2017 ended in failure, which was considered the biggest setback for the Chinese space program in nearly two decades. Because of the failure, the Long March 5 was grounded indefinitely until the problem could be located and resolved, and multiple planned major space missions were either postponed or faced the risk of postponement in the following years.
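As a quick sanity check on the figures quoted above, the lift-off thrust-to-weight ratio and payload fractions can be computed directly from the stated launch mass, thrust, and payload capacities. This is only illustrative arithmetic on the numbers given in this article, not official performance data.

```python
G0 = 9.81  # standard gravity, m/s^2

launch_mass_kg = 869_000       # 869 metric tons (figure quoted above)
liftoff_thrust_n = 10_573_000  # 10,573 kN (figure quoted above)
leo_payload_kg = 25_000
gto_payload_kg = 14_000

twr = liftoff_thrust_n / (launch_mass_kg * G0)
print(f"Lift-off thrust-to-weight ratio: {twr:.2f}")                    # ~1.24
print(f"LEO payload fraction: {leo_payload_kg / launch_mass_kg:.1%}")   # ~2.9%
print(f"GTO payload fraction: {gto_payload_kg / launch_mass_kg:.1%}")   # ~1.6%
```

A thrust-to-weight ratio modestly above 1 and a LEO payload fraction of a few percent are typical of large liquid-fueled launchers, so the quoted figures are internally consistent.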
Despite the uncertain future of the Long March 5, China managed to make history in space exploration with existing hardware over the next two years. Because of tidal locking, the Moon always presents the same side to the Earth, and humans had never seen its far side until the Space Age. Although by the early 21st century a fair amount was known about the overall condition of the far side thanks to the numerous lunar orbiters flown since the 1960s, no country had ever explored the area at close range, because direct communication with Earth is impossible from the far side. This missing piece was eventually filled by China's Chang'e-4 mission in 2019. To solve the communications problem, China launched Queqiao, a relay satellite orbiting the Earth–Moon L2 Lagrangian point, in May 2018 to enable communications between the far side of the Moon and the Earth. On December 8, 2018, Chang'e 4, originally built as the backup of Chang'e 3, was launched by a Long March 3B rocket from Xichang and entered lunar orbit on December 12. On January 3, 2019, Chang'e 4 successfully soft-landed in the Von Kármán crater on the far side of the Moon and returned the first close-up image of the far side's surface. A rover named Yutu-2 was deployed onto the lunar surface a few hours later, leaving the first rover tracks on the far side. By accomplishing this series of tasks, Chang'e-4 made China the first country to achieve a soft landing and roving on the far side of the Moon. For its success, the project team received the IAF World Space Award in 2020.
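The Earth–Moon L2 point used by Queqiao lies beyond the Moon, roughly where the combined pull of the Earth and Moon keeps a spacecraft co-rotating with the Moon, so it can see both the lunar far side and, from a halo orbit, the Earth. A first-order estimate of its distance from the Moon follows from the Hill-sphere approximation; the mass ratio and mean Earth–Moon distance below are standard textbook values rather than figures from this article, and the result ignores eccentricity and higher-order terms.

```python
# Hill-sphere style estimate of the Earth-Moon L1/L2 distance from the Moon:
#   r  ~  d * (m_moon / (3 * m_earth)) ** (1/3)
EARTH_MOON_DISTANCE_KM = 384_400    # mean distance (assumed reference value)
MOON_TO_EARTH_MASS_RATIO = 0.0123   # roughly 1/81 (assumed reference value)

r_l2_km = EARTH_MOON_DISTANCE_KM * (MOON_TO_EARTH_MASS_RATIO / 3) ** (1 / 3)
print(f"Approximate Moon-to-L2 distance: {r_l2_km:,.0f} km")  # ~61,500 km
```

The more exact figure is around 65,000 km; Queqiao does not sit at the point itself but flies a halo orbit around it, keeping a continuous line of sight to both the far-side lander and ground stations on Earth.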
Aside from Chang'e 4, there were several other events worth noting during this period. In August 2016, China launched the world's first quantum communications satellite, Mozi. In June 2017, the first Chinese X-ray astronomy satellite, named Huiyan, was launched into space. In August of the same year, the Astronaut Center of China organized a joint training exercise in which sixteen Chinese and two ESA astronauts participated, the first time foreign astronauts took part in astronaut training organized by China. In 2018, China performed more orbital launches than any other country for the first time in history. On June 5, 2019, China conducted its first sea launch, with a Long March 11 in the Yellow Sea. On July 25, the Chinese company i-Space became the first Chinese private company to successfully conduct an orbital launch, with its Hyperbola-1 small solid rocket.
As the 2010s came to an end, the Chinese space program concluded the decade with an inspiring event. On December 27, 2019, after a grounding and corrective effort that lasted 908 days, the Long March 5 rocket flew a highly anticipated return-to-flight mission from Wenchang. The mission was a complete success, placing Shijian-20, the heaviest satellite China had ever built, into the intended supersynchronous orbit. The flawless return of the Long March 5 dispelled the gloom that had hung over the program since the 2017 failure. With its great power, the Long March 5 cleared the path to multiple world-class space projects, allowing China to make great strides toward its ambitions in the coming 2020s.
2020-present
As the product of the latest technology and engineering of the Chinese space industry in the early 21st century, the flight-proven Long March 5 unleashed the potential of the Chinese space program to a great extent. Various projects previously constrained by payload mass and size limits were now offered a chance of realization. Since 2020, with the help of the Long March 5, the Chinese space program has made tremendous progress in multiple areas by completing some of the most challenging missions ever conducted in the history of space exploration, impressing the world like never before.
The "Third Step" of China Manned Space Program kicked off in 2020. Long March 5B, a variant of Long March 5, conducted its maiden flight successfully on May 5, 2020. Its high payload capacity and large payload fairing space enabled the delivery of Chinese space station modules to low Earth orbit. On April 29, 2021, Tianhe core module (), the 22-tonne core module of the space station, was successfully launched into Low Earth orbit by a Long March 5B rocket, marking the beginning of the construction of the China Space Station, also known as Tiangong (), followed by unprecedented high frequency of human spaceflight missions. A month later, China launched Tianzhou-2, the first cargo mission to the space station. On June 17, Shenzhou-12, the first crewed mission to the Chinese Space Station consisting of Nie Haisheng, Liu Boming and Tang Hongbo, was launched from Jiuquan. The crew docked with Tianhe and entered the core module about 9 hours after launch, becoming the first residents of the station. The crew lived and worked on the space station for three months, conducted two spacewalks, and returned to Earth safely on September 17, 2021. breaking the record of longest Chinese human spaceflight mission (33 days) previously made by Shenzhou-11. Roughly a month later, the Shenzhou-13 crewed was launched to the station. Astronauts Zhai Zhigang, Wang Yaping and Ye Guangfu completed the first long-duration spaceflight mission of China that lasted for over 180 days before returning to Earth safely on April 16, 2022. Astronaut Wang Yaping became the first Chinese female to perform a spacewalk during the mission.
Starting in May 2022, the China Manned Space Program entered the space station assembly and construction phase. On June 5, 2022, Shenzhou-14 was launched and docked to the Tianhe core module. The crew, consisting of Chen Dong, Liu Yang and Cai Xuzhe, were to welcome the arrival of two space station modules during their six-month mission. On July 24, the third Long March 5B rocket lifted off from Wenchang, carrying the 23.2 t Wentian laboratory module, the largest and heaviest spacecraft ever built and launched by China, into orbit. The module docked with the space station less than 20 hours later, adding the second module and the first laboratory module to it. On September 30, the new Wentian module was rotated from the forward docking port to the starboard port. On October 31, the Mengtian laboratory module, the third and final module of the China Space Station, was launched by another Long March 5B rocket into orbit and docked with the space station less than 13 hours later. On November 3, the "T-shape" China Space Station was completed after the successful transposition of the Mengtian module. On November 29, Shenzhou-15 was launched and later docked with the China Space Station. Astronauts Fei Junlong, Deng Qingming, and Zhang Lu were welcomed by the Shenzhou-14 crew on board the station, completing the first crew gathering and handover in space by Chinese astronauts and starting the era of a continuous Chinese astronaut presence in space.
The third phase of the Chinese Lunar Exploration Program was also able to proceed in 2020. In preparation, China had conducted the Chang'e 5-T1 mission in 2014. By completing its main task on November 1, 2014, China demonstrated the capability of returning a spacecraft safely to Earth from lunar orbit, paving the way for a lunar sample return mission planned for 2017. However, the failure of the second Long March 5 flight disrupted the original plan: despite the readiness of the spacecraft, the mission had to be postponed because of the unavailability of its launch vehicle, until the successful return-to-flight of the Long March 5 in late 2019. On November 24, 2020, the sample return mission, named Chang'e-5, kicked off as a Long March 5 rocket launched the 8.2 t spacecraft stack into space. The spacecraft entered lunar orbit on November 28, after which the stack separated into two parts. The lander touched down near Mons Rümker in Oceanus Procellarum on December 1 and started the sample collection process the next day. Two days after the landing, on December 3, the ascent vehicle attached to the lander took off from the lunar surface and entered lunar orbit, carrying the container with the collected samples. This was the first time that China launched a spacecraft from an extraterrestrial body. On December 6, the ascent vehicle successfully docked with the orbiter in lunar orbit and transferred the sample container to the return capsule, accomplishing the first robotic rendezvous and docking in lunar orbit in history. On December 13, the orbiter, along with the return module, performed the main engine burns that placed it on a trajectory back to Earth. The return capsule eventually landed intact in Inner Mongolia on December 17, completing the mission flawlessly.
On December 19, 2020, CNSA hosted the Chang'e-5 lunar sample handover ceremony in Beijing. After weighing the sample container taken out of the return capsule, CNSA announced that Chang'e-5 had retrieved 1,731 grams of samples from the Moon. As the most complex mission China had completed to date, Chang'e-5 achieved multiple remarkable milestones, including China's first lunar sampling, its first liftoff from an extraterrestrial body, the first automated rendezvous and docking in lunar orbit by any nation, and China's first high-speed atmospheric re-entry of a spacecraft carrying samples. Its success also marked the completion of the "Orbiting, Landing, Returning" goal planned by CLEP since 2004.
Prior to the launch of Chang'e-5, which targeted the Moon 380,000 km from the Earth, China's first Mars probe had already departed, heading for Mars some 400 million kilometers away. Since the approval of the Mars mission in 2016, China had developed the various technologies required, including a deep space network and the capabilities needed for atmospheric entry, hovering, and obstacle avoidance during landing. The Long March 5, the only launch vehicle capable of delivering the spacecraft, was back in service after its critical return-to-flight in December 2019. As a result, everything was ready when the launch window of July 2020 arrived. On April 24, 2020, CNSA officially announced the Planetary Exploration of China program and named China's first independent Mars mission Tianwen-1. On July 23, 2020, Tianwen-1 was successfully launched atop a Long March 5 rocket onto a trans-Mars injection trajectory. The spacecraft, consisting of an orbiter, a lander, and a rover, aimed to achieve the goals of orbiting, landing, and roving on Mars in a single mission on the nation's first attempt. Due to its highly complex and risky nature, the mission was widely described as "ambitious" by international observers.
After a seven-month journey, on February 10, 2021, Tianwen-1 entered Mars orbit and became China's first operational Mars probe. The payloads on the orbiter were subsequently activated and started surveying Mars in preparation for the landing. In the following few months, CNSA released a series of images captured by the orbiter. On April 24, CNSA announced that the first Chinese Mars rover carried by Tianwen-1 probe had been named Zhurong, the god of fire in ancient Chinese mythology.
On May 15, 2021, around 1 am Beijing time, Tianwen-1 initiated its landing sequence by igniting its main engines and lowering its orbit, followed by the separation of the landing module at 4 am. The orbiter then returned to its parking orbit while the lander headed toward the Martian atmosphere. Three hours later, the lander endured the most dangerous phase, an atmospheric entry lasting nine minutes. At 7:18 am, it successfully landed on the preselected site in southern Utopia Planitia. On May 25, the Zhurong rover drove onto the Martian surface from the lander. On June 11, CNSA released the first batch of high-resolution images of the landing site captured by the Zhurong rover, marking the success of the Mars landing mission. As China's first independent Mars mission, Tianwen-1 completed the daunting combination of orbiting, landing, and roving in a highly sophisticated manner on a single attempt, making China the second nation, after the United States, to land and operate a rover on the Martian surface. It drew the attention of the world as another example of China's rapidly expanding presence in outer space. For its difficulty and inspiring success, the Tianwen-1 development team received the IAF World Space Award in 2022, the second time a Chinese team had been awarded this honor, after the Chang'e-4 team.
On 13 March 2024, China attempted to launch two spacecraft, DRO-A and DRO-B, into a distant retrograde orbit around the Moon. As an independent project, the mission was managed by the Chinese Academy of Sciences rather than the Chinese Lunar Exploration Program. An upper stage malfunction left the pair stranded in low Earth orbit, far short of the intended orbit. Rescue maneuvers were subsequently carried out, and the spacecraft were observed being raised into a highly elliptical orbit; although their later status was not officially disclosed, they appear to have eventually reached their intended orbit.
On 20 March 2024, China launched its relay satellite Queqiao-2 toward lunar orbit, along with two mini satellites, Tiandu 1 and 2. Queqiao-2 will relay communications for the Chang'e 6 (far side of the Moon), Chang'e 7 and Chang'e 8 (lunar south pole region) spacecraft. Tiandu 1 and 2 will test technologies for a future lunar navigation and positioning constellation. All three probes entered lunar orbit successfully on 24 March 2024 (Tiandu-1 and 2 were attached to each other and separated in lunar orbit on 3 April 2024).
China launched Chang'e 6 on 3 May 2024 to conduct the first lunar sample return from the Apollo Basin on the far side of the Moon. It was China's second lunar sample return mission; the first had been achieved by Chang'e 5 from the lunar near side four years earlier. The mission also carried the Chinese Jinchan rover to conduct infrared spectroscopy of the lunar surface and to image the Chang'e 6 lander on the surface. The lander–ascender–rover combination separated from the orbiter and returner and landed on the lunar surface on 1 June 2024 at 22:23 UTC. The ascender was launched back into lunar orbit on 3 June 2024 at 23:38 UTC, carrying the samples collected by the lander, and later completed another robotic rendezvous and docking in lunar orbit. The sample container was then transferred to the returner, which landed in Inner Mongolia on 25 June 2024, completing China's sample return from the far side of the Moon. After dropping off the samples for return to Earth, the Chang'e 6 orbiter entered orbit around the Sun–Earth L2 Lagrange point on 9 September 2024.
Near future development
According to a 2022 government white paper, China will conduct more human spaceflight, lunar and planetary exploration missions, including:
Xuntian Space Telescope launch.
Chang'e-7 mission to perform a precise landing in the Moon's polar region that includes a "hopping detector" to explore permanently-shadowed areas.
Chang'e-8 lunar polar mission to test in-situ resource utilization and lay the groundwork for the International Lunar Research Station.
Tianwen-2 mission to sample near-Earth asteroids and probe main-belt comets.
Tianwen-3 mission using two launches to return samples from Mars.
Tianwen-4 mission to explore the Jupiter system and Callisto; a probe to fly by Uranus will be attached to the Jupiter probe.
In addition to these, China has also initiated the crewed lunar landing phase of its lunar exploration program, which aims to land Chinese astronauts on the Moon by 2030. A new crewed carrier rocket (Long March 10), new generation crew spacecraft, crewed lunar lander, lunar EVA spacesuit, lunar rover and other equipment are under development.
Chinese space program and the international community
Belt and Road Initiative
One of China's priorities in its Belt and Road Initiative is to improve satellite information pathways.
Bilateral space cooperation
China is an attractive partner for space cooperation for other developing countries because it launches their satellites at a reduced cost and often provides financing in the form of policy loans.
With respect to the African countries, the 2022-2024 action plan for the Forum on China-Africa Cooperation commits China to using space technology to enhance cooperation with African countries and to create centers for Africa-China cooperation on satellite remote sensing application. African countries are increasingly cooperating with China on satellite launches and specialized training. As of 2022, China has launched two satellites for Ethiopia, two for Nigeria, one for Algeria, one for Sudan, and one for Egypt.
China and Namibia jointly operate the China Telemetry, Tracking, and Command Station which was established in 2001 in Swakopmund, Namibia. This station tracks Chinese satellites and space missions.
China and Brazil have successfully cooperated in the field of space. Among the most successful cooperation projects were the development and launch of Earth-monitoring satellites. As of 2023, the two countries have jointly developed six China-Brazil Earth Resource Satellites. These projects have helped both Brazil and China develop their access to satellite imagery and promoted remote sensing research. Brazil and China's cooperation is a unique example of South–South cooperation between two developing countries in the field of space.
Dual-use technologies and outer space
The PRC is a member of the United Nations Committee on the Peaceful Uses of Outer Space and a signatory to all United Nations treaties and conventions on space, with the exception of the 1979 Moon Treaty. The United States government has long been resistant to the use of PRC launch services by American industry due to concerns over alleged civilian technology transfer that could have dual-use military applications to countries such as North Korea, Iran or Syria. Thus, financial retaliatory measures have been taken on many occasions against several Chinese space companies.
NASA's policy excluding Chinese state affiliates
Due to supposed national security concerns, all researchers from the U.S. National Aeronautics and Space Administration (NASA) are prohibited from working with Chinese citizens affiliated with a Chinese state enterprise or entity. In April 2011, the 112th United States Congress banned NASA from using its funds to host Chinese visitors at NASA facilities. In March 2013, the U.S. Congress passed legislation barring Chinese nationals from entering NASA facilities without a waiver from NASA.
The history of the U.S. exclusion policy can be traced back to allegations by a 1998 U.S. Congressional Commission that technical information American companies provided to China for its commercial satellites ended up improving Chinese intercontinental ballistic missile technology. Tensions were further aggravated in 2007 when China destroyed a defunct meteorological satellite in low Earth orbit to test a ground-based anti-satellite (ASAT) missile. The debris created by the test added to the space junk that litters Earth's orbit, exposing other nations' space assets to the risk of accidental collision. The United States also fears the Chinese application of dual-use space technology for nefarious purposes.
The Chinese response to the exclusion policy involved its own space policy of opening up its space station to the outside world, welcoming scientists coming from all countries. American scientists have also boycotted NASA conferences due to its rejection of Chinese nationals in these events.
Organization
Initially, the space program of the PRC was organized under the People's Liberation Army, particularly the Second Artillery Corps (now the PLA Rocket Force, PLARF). In the 1990s, the PRC reorganized the space program as part of a general reorganization of the defense industry to make it resemble Western defense procurement.
The China National Space Administration, an agency within the Commission of Science, Technology and Industry for National Defense currently headed by Zhang Kejian, is now responsible for launches. The Long March rocket is produced by the China Academy of Launch Vehicle Technology, and satellites are produced by the China Aerospace Science and Technology Corporation. The latter organizations are state-owned enterprises; however, it is the intent of the PRC government that they should not be actively state-managed and that they should behave as independent design bureaus.
Universities and institutes
The space program also has close links with:
College of Aerospace Science and Engineering, National University of Defense Technology
School of Astronautics, Beihang University
School of Aerospace, Tsinghua University
School of Astronautics, Northwestern Polytechnical University
School of Aeronautics and Astronautics, Zhejiang University
Institute of Aerospace Science and Technology, Shanghai Jiaotong University
College of Aeronautics, Harbin Institute of Technology
School of Automation Science and Electrical Engineering, Beihang University
Space cities
Dongfeng Space City, also known as Base 20 or Dongfeng base
Beijing Space City
Wenchang Space City
Shanghai Space City
Yantai Space City
Guizhou Aerospace Industrial Park, also known as Base 061, founded in 2002 after approval of Project 863 for the industrialization of aerospace research centers.
Suborbital launch sites
Nanhui – site of the first successful launch of a T-7M sounding rocket, on February 19, 1960.
Base 603 – also known as Guangde Launch Site. The first successful flight of a biological experimental sounding rocket, carrying eight white mice, was launched and recovered there on July 19, 1964.
Satellite launch centers
The PRC has 6 satellite launch centers/sites:
Jiuquan Satellite Launch Center (JSLC)
Taiyuan Satellite Launch Center (TSLC)
Xichang Satellite Launch Center (XSLC)
Wenchang Spacecraft Launch Site (administered by Xichang SLC)
Wenchang Commercial Space Launch Site (administered by HICAL)
Haiyang Oriental Aerospace Port (administered by Taiyuan SLC)
Monitoring and control centers
Beijing Aerospace Command and Control Center (BACCC)
Xi'an Satellite Control Center (XSCC), also known as Base 26
Fleet of six Yuanwang-class space tracking ships.
Tianlian I data relay satellite, specially developed to improve communications between the Shenzhou 7 spacecraft and the ground, increasing both the amount of data that can be transferred and the orbital communications coverage, from about 12 percent to about 60 percent.
Deep Space Tracking Network, composed of radio antennas in Beijing, Shanghai, Kunming and Ürümqi, forming a VLBI array with baselines of about 3,000 km (a rough resolution sketch follows this list).
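As a rough illustration of why such long baselines matter, the angular resolution of an interferometer scales as the observing wavelength divided by the baseline length. The numbers below are assumptions made only for this sketch: an X-band frequency of 8.4 GHz (a common deep-space band, not stated in this article) and the roughly 3,000 km baseline mentioned above.

```python
import math

C = 299_792_458           # speed of light, m/s
FREQ_HZ = 8.4e9           # assumed X-band observing frequency (not from the article)
BASELINE_M = 3_000_000    # ~3,000 km VLBI baseline (from the list above)

wavelength_m = C / FREQ_HZ
resolution_rad = wavelength_m / BASELINE_M
resolution_mas = math.degrees(resolution_rad) * 3600 * 1000  # milliarcseconds
print(f"Wavelength: {wavelength_m * 100:.1f} cm")
print(f"Angular resolution: ~{resolution_mas:.1f} mas")
```

Angular measurements at the milliarcsecond level, combined with conventional ranging, are what allow a ground network to pin down a probe's position precisely in deep space.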
Domestic tracking stations
New integrated land-based space monitoring and control network stations, forming a large triangle with Kashi in the north-west of China, Jiamusi in the north-east and Sanya in the south.
Weinan Station
Changchun Station
Qingdao Station
Zhanyi Station
Nanhai Station
Tianshan Station
Xiamen Station
Lushan Station
Jiamusi Station
Dongfeng Station
Hetian Station
Overseas tracking stations
Tarawa Station, Kiribati
Malindi Station, Kenya
Swakopmund tracking station, Namibia
China Satellite Launch and Tracking Control General tracking hub at Espacio Lejano Station in Neuquén Province, Argentina.
Plus shared space tracking facilities with France, Brazil, Sweden, and Australia.
Crewed landing sites
Siziwang Banner
Notable spaceflight programs
Project 714
As the Space Race between the two superpowers reached its climax with humans landing on the Moon, Mao Zedong and Zhou Enlai decided on July 14, 1967, that the PRC should not be left behind, and therefore initiated China's own crewed space program. The top-secret Project 714 aimed to put two people into space by 1973 with the Shuguang spacecraft. Nineteen PLAAF pilots were selected for this goal in March 1971. The Shuguang-1 spacecraft to be launched with the CZ-2A rocket was designed to carry a crew of two. The program was officially cancelled on May 13, 1972, for economic reasons, though the internal politics of the Cultural Revolution likely motivated the closure.
The short-lived second crewed program was based on the successful implementation of landing technology (third in the world, after the USSR and the United States) by the FSW satellites. It was announced several times in 1978, with some details, including photos, openly published, but was abruptly canceled in 1980. It has been argued that the second crewed program was created solely for propaganda purposes and was never intended to produce results.
Project 863
A new crewed space program was proposed by the Chinese Academy of Sciences in March 1986, as Astronautics plan 863-2. This consisted of a crewed spacecraft (Project 863–204) used to ferry astronaut crews to a space station (Project 863–205). In September of that year, astronauts in training were presented by the Chinese media. The various proposed crewed spacecraft were mostly spaceplanes. Project 863 ultimately evolved into the 1992 Project 921.
China Manned Space Program (Project 921)
Spacecraft
In 1992, authorization and funding were given for the first phase of Project 921, which was a plan to launch a crewed spacecraft. The Shenzhou program had four uncrewed test flights and two crewed missions. The first one was Shenzhou 1 on November 20, 1999. On January 9, 2001 Shenzhou 2 launched carrying test animals. Shenzhou 3 and Shenzhou 4 were launched in 2002, carrying test dummies. Following these was the successful Shenzhou 5, China's first crewed mission in space on October 15, 2003, which carried Yang Liwei in orbit for 21 hours and made China the third nation to launch a human into orbit. Shenzhou 6 followed two years later ending the first phase of Project 921. Missions are launched on the Long March 2F rocket from the Jiuquan Satellite Launch Center. The China Manned Space Agency (CMSA) of the Equipment Development Department of the Central Military Commission provides engineering and administrative support for the crewed Shenzhou missions.
Space laboratory
The second phase of the Project 921 started with Shenzhou 7, China's first spacewalk mission. Then, two crewed missions were planned to the first Chinese space laboratory. The PRC initially designed the Shenzhou spacecraft with docking technologies imported from Russia, therefore compatible with the International Space Station (ISS). On September 29, 2011, China launched Tiangong 1. This target module is intended to be the first step to testing the technology required for a planned space station.
On October 31, 2011, a Long March 2F rocket lifted the Shenzhou 8 uncrewed spacecraft, which docked twice with the Tiangong 1 module. The Shenzhou 9 craft took off on 16 June 2012 with a crew of three. It successfully docked with the Tiangong-1 laboratory on 18 June 2012, at 06:07 UTC, marking China's first crewed spacecraft docking. Another crewed mission, Shenzhou 10, launched on 11 June 2013. The Tiangong 1 target module was afterwards expected to be deorbited.
A second space lab, Tiangong 2, launched on 15 September 2016, 22:04:09 (UTC+8). The launch mass was 8,600 kg, with a length of 10.4 m and a width of 3.35 m, much like Tiangong 1. Shenzhou 11 launched and rendezvoused with Tiangong 2 in October 2016; it was the laboratory's only crewed visit. Tiangong 2 carried the POLAR gamma-ray burst detector, a space-to-Earth quantum key distribution and laser communications experiment to be used in conjunction with the Mozi "Quantum Science Satellite", a liquid bridge thermocapillary convection experiment, and a space materials experiment. Also included were a stereoscopic microwave altimeter, a space plant growth experiment, and a multi-angle wide-spectral imager and multi-spectral limb imaging spectrometer. On board TG-2 was also the world's first in-space cold atomic fountain clock.
Space station
A larger basic permanent space station (基本型空间站) would be the third and last phase of Project 921: a modular design with an eventual weight of around 60 tons, to be completed sometime before 2022. The first section, designated Tiangong 3, was scheduled for launch after Tiangong 2, but was ultimately not ordered after its goals were merged into Tiangong 2.
This could also be the beginning of China's crewed international cooperation, the existence of which was officially disclosed for the first time after the launch of Shenzhou 7.
The first module of the Tiangong space station, the Tianhe core module, was launched on 29 April 2021 from the Wenchang Space Launch Site. It was first visited by the Shenzhou 12 crew on 17 June 2021. The station was completed with the addition of two laboratory modules in late 2022.
Lunar exploration
In January 2004, the PRC formally started the implementation phase of its uncrewed Moon exploration project. According to Sun Laiyan, administrator of the China National Space Administration, the project will involve three phases: orbiting the Moon; landing; and returning samples.
On December 14, 2005, it was reported "an effort to launch lunar orbiting satellites will be supplanted in 2007 by a program aimed at accomplishing an uncrewed lunar landing. A program to return uncrewed space vehicles from the Moon will begin in 2012 and last for five years, until the crewed program gets underway" in 2017, with a crewed Moon landing planned after that.
The decision to develop a new Moon rocket of the 1962 Soviet UR-700M class (Project Aelita), able to launch a 500-ton payload to LTO, as well as a more modest launch vehicle with a 50-ton LTO payload, was discussed at a 2006 conference by academician Zhang Guitian, a liquid-propellant rocket engine specialist who developed the engines for the CZ-2 and CZ-4A rockets.
On June 22, 2006, Long Lehao, deputy chief architect of the lunar probe project, laid out a schedule for China's lunar exploration. He set 2024 as the date of China's first moonwalk.
In September 2010, it was announced that the country is planning to carry out explorations in deep space by sending a man to the Moon by 2025. China also hoped to bring a Moon rock sample back to Earth in 2017, and subsequently build an observatory on the Moon's surface. Ye Peijian, Commander in Chief of the Chang'e program and an academic at the Chinese Academy of Sciences, added that China has the "full capacity to accomplish Mars exploration by 2013."
On December 14, 2013 China's Chang'e 3 became the first object to soft-land on the Moon since Luna 24 in 1976.
On 20 May 2018, several months before the Chang'e 4 mission, the Queqiao relay satellite was launched from the Xichang Satellite Launch Center on a Long March 4C rocket. The spacecraft took 24 days to reach L2, using a gravity assist at the Moon to save propellant. On 14 June 2018, Queqiao finished its final adjustment burn and entered its mission orbit around the Earth–Moon L2 point, beyond the Moon. It was the first lunar relay satellite ever placed in this location.
On January 3, 2019, Chang'e 4, the China National Space Administration's lunar lander and rover mission, made the first-ever soft landing on the Moon's far side. The rover was able to transmit data back to Earth, despite the far side's lack of a direct radio link to Earth, via the dedicated relay satellite sent earlier into orbit beyond the Moon. The landing and data transmission are considered landmark achievements for human space exploration.
Yang Liwei declared at the 16th Human in Space Symposium of International Academy of Astronautics (IAA) in Beijing, on May 22, 2007, that building a lunar base was a crucial step to realize a flight to Mars and farther planets.
In keeping with usual practice, while the whole project was still at a very early preparatory research phase, no official crewed Moon program had been announced by the authorities, but its existence was nonetheless revealed through regular intentional leaks in the media. A typical example is the lunar roving vehicle that was shown on a Chinese TV channel during the 2008 May Day celebrations.
On 23 November 2020, China launched the new Moon mission Chang'e 5, which returned to Earth carrying lunar samples on 16 December 2020. Only two nations, the United States and the former Soviet Union, had previously returned material from the Moon, making China the third country ever to achieve the feat.
On 3 May 2024, China launched Chang'e 6, which conducted the first lunar sample return from the far side of the Moon. It was China's second lunar sample return mission; the first was achieved by Chang'e 5 from the lunar near side four years earlier.
Mission to Mars and beyond
In 2006, the Chief Designer of the Shenzhou spacecraft stated in an interview that:
Sun Laiyan, administrator of the China National Space Administration, said on July 20, 2006, that China would start deep space exploration focusing on Mars over the next five years, during the Eleventh Five-Year Plan (2006–2010) Program period. In April 2020, the Planetary Exploration of China program was announced. The program aims to explore planets of the Solar System, starting with Mars, then expanded to include asteroids and comets, Jupiter and more in the future.
The first mission of the program, the Tianwen-1 Mars exploration mission, began on July 23, 2020. The spacecraft, consisting of an orbiter, a lander, a rover, and a deployable remote camera, was launched by a Long March 5 rocket from Wenchang. Tianwen-1 was inserted into Mars orbit in February 2021 after a seven-month journey, followed by a successful soft landing of the lander and the Zhurong rover on May 14, 2021.
Space-based solar power
According to the China Academy of Space Technology (CAST) presentation at the 2015 International Space Development Congress in Toronto, Canada, Chinese interest in space-based solar power began in the period 1990–1995. By 2013, there was a national goal, that "the state has decided that power coming from outside of the earth, such as solar power and development of other space energy resources, is to be China's future direction" and the following roadmap was identified: "In 2010, CAST will finish the concept design; in 2020, we will finish the industrial level testing of in-orbit construction and wireless transmissions. In 2025, we will complete the first 100kW SPS demonstration at LEO; and in 2035, the 100MW SPS will have an electric generating capacity. Finally in 2050, the first commercial level SPS system will be in operation at GEO." The article went on to state that "Since SPS development will be a huge project, it will be considered the equivalent of an Apollo program for energy. In the last century, America's leading position in science and technology worldwide was inextricably linked with technological advances associated with the implementation of the Apollo program. Likewise, as China's current achievements in aerospace technology are built upon with its successive generations of satellite projects in space, China will use its capabilities in space science to assure sustainable development of energy from space."
In 2015, the CAST team won the International SunSat Design Competition with their video of a Multi-Rotary Joint concept. The design was presented in detail in a paper for the Online Journal of Space Communication.
In 2016, Lt Gen. Zhang Yulin, deputy chief of the PLA armament development department of the Central Military Commission, suggested that China would next begin to exploit Earth-Moon space for industrial development. The goal would be the construction of space-based solar power satellites that would beam energy back to Earth.
In June 2021, Chinese officials confirmed the continuation of plans for a geostationary solar power station by 2050. The updated schedule anticipates a small-scale electricity generation test in 2022, followed by a megawatt-level orbital power station by 2030. The gigawatt-level geostationary station will require over 10,000 tonnes of infrastructure, delivered using over 100 Long March 9 launches.
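A quick division puts the quoted launch campaign in perspective. This is illustrative arithmetic using only the figures in this section (over 10,000 tonnes of infrastructure and over 100 launches) together with the roughly 150-tonne LEO capability planned for the Long March 9 that is listed later in this article; it is not an official mass budget.

```python
station_mass_t = 10_000    # ">10,000 tonnes of infrastructure" (quoted above)
launch_count = 100         # ">100 Long March 9 launches" (quoted above)
lm9_leo_capacity_t = 150   # planned LEO capability cited later in this article

mass_per_launch_t = station_mass_t / launch_count
print(f"Average mass per launch: {mass_per_launch_t:.0f} t")  # 100 t
print(f"Fraction of planned LM-9 LEO capacity: "
      f"{mass_per_launch_t / lm9_leo_capacity_t:.0%}")        # ~67%
```

The average of about 100 t per flight sits below the planned LEO capability, so the figures are consistent as an order-of-magnitude check; in practice, moving hardware onward to geostationary orbit would reduce the usable mass per launch, making the real campaign depend heavily on in-space transportation.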
List of launchers and projects
Launch vehicles
Active/under research
Air-launched SLV able to place a payload of 50 kilograms or more into a 500 km SSO
Ceres-1 small-lift solid-fueled launch vehicle from private firm (relatively high launch cadence)
Gravity-1 medium-lift sea-launched solid fuel launch vehicle under development
Hyperbola-1 small-lift solid-fueled launch vehicle from private firm
Hyperbola-3 medium-lift liquid-fueled (methalox) launch vehicle with reusable first stage (VTVL) from private firm currently under development
Jielong 3 small to medium-lift solid fueled launch vehicle currently in service
Kaituozhe-1A
Kuaizhou quick-reaction small-lift solid fuel launch vehicle
Lijian-1 small to medium-lift solid fuel launch vehicle currently in service (by the commercial spin-off of the Chinese Academy of Sciences)
Lijian-2 medium-lift launch vehicle utilizing liquid fuel (kerolox) with reusable first stage under development
CZ-2E(A) intended for launching Chinese space station modules; payload capacity up to 14 tons to LEO, with 9,000 kN of liftoff thrust from 12 rocket engines and an enlarged fairing of 5.20 m in diameter and 12.39 m in length to accommodate large spacecraft
CZ-2F/G modified CZ-2F without an escape tower, used specifically for launching uncrewed missions such as Shenzhou cargo flights and space laboratory modules, with payload capacity up to 11.2 tons to LEO
CZ-3B(A) more powerful Long March rocket using larger liquid-propellant strap-on boosters, with payload capacity up to 13 tons to LEO
CZ-3C Launch vehicle combining CZ-3B core with two boosters from CZ-2E
Long March 4C
CZ-5 heavy-lift hydrolox launch vehicle (with kerolox boosters)
CZ-5B variant of the CZ-5 for low Earth orbit payloads (up to 25 tonnes to LEO)
CZ-6 or Small Launch Vehicle; small-lift kerolox LV with a short launch preparation period, low cost and high reliability, to meet the launch needs of small satellites of up to 500 kg to a 700 km SSO; first flight, originally planned for 2010, took place in 2015; with Fan Ruixiang as chief designer of the project
CZ-7 medium-lift kerolox launch vehicle for launching resupply missions to the Tiangong space station
CZ-8 medium-lift launch vehicle mainly for launching payloads to SSO orbits
CZ-9 super heavy-lift launch vehicle with a LEO lift capability of 150 tonnes currently under development (planned to be fully reusable in time)
CZ-10 crew-rated super-heavy launch vehicle for crewed lunar missions under development
CZ-10A crew-rated medium-lift launch vehicle for launching the next-generation crewed spacecraft to LEOs with reusable first stage currently under development
CZ-11 small-lift solid fuel quick-response launch vehicle
Pallas-1 reusable (1st stage) medium-lift liquid fuel (kerolox) launch vehicle by private firm currently under development
Project 921-3 current project for a reusable shuttle system
Tengyun another current project, for a two-stage winged reusable shuttle system
Reusable spaceplane vertically launched winged spaceplane that lands on a runway, currently in service (speculated to be similar to the US X-37B in form and function)
Tianlong 2 medium-lift kerolox launch vehicle from private firm (in service)
Tianlong 3 medium to heavy-lift kerolox launch vehicle with reusable first stage from private firm currently under development
Zhuque-2 medium-lift liquid fuel (methalox) launch vehicle by private firm currently in service (first methane fueled rocket in the world to reach space and to reach orbit with payload)
Zhuque-3 medium to heavy-lift methalox launch vehicle by private firm with reusable first stage currently under development
Cancelled/retired
CZ-1D based on a CZ-1 but with a new N2O4/UDMH second stage.
Project 869 reusable shuttle system with Tianjiao-1 or Chang Cheng-1 (Great Wall-1) orbiters. Project of 1980s-1990s.
Satellites and science mission
Space-Based ASAT System small and nano-satellites developed by the Small Satellite Research Institute of the Chinese Academy of Space Technology.
The Double Star Mission comprised two satellites launched in 2003 and 2004, jointly with ESA, to study the Earth's magnetosphere.
Earth observation, remote sensing or reconnaissance satellites series: CBERS, Dongfanghong program, Fanhui Shi Weixing, Yaogan and Ziyuan 3.
Tianlian I data relay satellite
Tianlian II next-generation data relay satellite (DRS) system, based on the DFH-4 satellite bus, with two satellites providing up to 85% coverage.
Beidou navigation system or Compass Navigation Satellite System, composed of 60 to 70 satellites, planned during the "Eleventh Five-Year Plan" period (2006–2010).
Astrophysics research, with the launch of the world's largest Solar Space Telescope in 2008, and the Project 973 Space Hard X-Ray Modulation Telescope by 2010.
Chinese Deep Space Network, with the completion of FAST, the world's largest single-dish radio telescope at 500 m in Guizhou, and a 3,000 km VLBI network.
A Deep Impact-style mission to test the process of redirecting an asteroid or comet.
Space exploration
Crewed LEO Program
Project 921-1 – Shenzhou spacecraft.
Tiangong - first three crewed Chinese Space Laboratories.
Project 921-2 – permanent crewed modular Chinese Space Station
Tianzhou – robotic cargo vessel to resupply the Chinese Space Station, based on the design of Tiangong-1; not designed to survive reentry, but usable for waste disposal.
Next-generation crewed spacecraft – upgraded version of the Shenzhou spacecraft to resupply the Chinese Space Station and return cargo to Earth.
Project 921-11 – X-11 reusable spacecraft for Project 921-2 Space Station.
Tianjiao-1 or Chang Cheng-1 (Great Wall-1) - winged spaceplane orbiters of Project 869 reusable shuttle system. Project of 1980s-1990s.
Shenlong - winged spaceplane orbiter of current Project 921-3 reusable shuttle system.
Tengyun - winged spaceplane orbiter in another current project of two wing-staged reusable shuttle system.
HTS Maglev Launch Assist Space Shuttle - winged spaceplane orbiter in another current shuttle project.
Chinese Lunar Exploration Program
First phase, Chang'e 1 and Chang'e 2 – launched in 2007 and 2010
Second phase, Chang'e 3 and Chang'e 4 – launched in 2013 and 2018
Third phase, Chang'e 5-T1 (completed in 2014) and Chang'e 5 – (completed in 2020)
Fourth phase, Chang'e 6 (sample-return from lunar far-side in May-June 2024), Chang'e 7 and Chang'e 8 – will explore the south pole for natural resources; may 3D-print a structure using regolith.
Crewed missions, by 2030 – crewed lunar missions employing the next-generation crewed spacecraft and the crewed lunar lander
Deep Space Exploration Program
China's first deep space probe, the Yinghuo-1 orbiter, was launched in November 2011 along with the joint Fobos-Grunt mission with Russia, but the rocket failed to leave Earth orbit and both probes underwent destructive re-entry on 15 January 2012.
In 2018, Chinese researchers proposed a deep space exploration roadmap to explore Mars, an asteroid, Jupiter, and further targets, within the 2020–2030 timeframe. Current and upcoming robotic missions include:
Chinese Deep Space Network relay satellites, for deep-space communication and exploration support network.
Tianwen-1, launched on 23 July 2020 with arrival at Mars on 10 February 2021. Mission includes an orbiter, a deployable and remote camera, a lander, and the Zhurong rover.
Tianwen-2, formerly ZhengHe, targeted for launch in 2025. Mission goals include asteroid flyby observations, global remote sensing, robotic landing, and sample return. Tianwen-2 is now in active development.
Interstellar Express, targeted for launch around 2024–2025 for Interstellar Heliosphere Probe-1 (IHP-1) and around 2025–2026 for Interstellar Heliosphere Probe-2 (IHP-2). Mission objectives include exploration of the heliosphere and interstellar space; the probes would also become the first non-NASA probes to leave the Solar System.
Mars Sample Return Mission, planned for launch around 2028–2030. Mission goals include in-situ topography and soil composition analysis, deep interior investigations to probe the planet's origins and geologic evolution, and sample return. As of December 2019, the plan is for two launches to be conducted during the November 2028 Earth-to-Mars launch window: a sample collection lander with a Mars ascent vehicle on a Long March 3B, and an Earth Return Orbiter on a Long March 5, with samples returning to Earth in September 2031. Earlier plans envisaged carrying out the mission with a single launch of the Long March 9.
Jupiter System orbiter, mission goals include orbital exploration of Jupiter and its four largest moons (with a focus on Callisto, by orbiting this Jovian moon), study of the magnetohydrodynamics of the Jupiter system, and investigation of the internal composition of Jupiter's atmosphere and moons.
A fly-by of Uranus is planned as part of the 2029-2030 Tianwen-4 mission. The Uranus fly-by probe will detach from the Jupiter orbiter while in interplanetary space and proceed to a separate encounter with Uranus during the 2040s.
See also
Beihang University
China and weapons of mass destruction
Two Bombs, One Satellite
Chinese women in space
Harbin Institute of Technology
French space program
List of human spaceflights to the Tiangong space station
References
China, PR | Chinese space program | [
"Engineering"
] | 18,152 | [
"Space programs",
"Space programs by country"
] |
343,678 | https://en.wikipedia.org/wiki/Aryl%20group | In organic chemistry, an aryl is any functional group or substituent derived from an aromatic ring, usually an aromatic hydrocarbon, such as phenyl and naphthyl. "Aryl" is used for the sake of abbreviation or generalization, and "Ar" is used as a placeholder for the aryl group in chemical structure diagrams, analogous to “R” used for any organic substituent. “Ar” is not to be confused with the elemental symbol for argon.
A simple aryl group is phenyl (C6H5), a group derived from benzene. Examples of other aryl groups include:
The tolyl group () which is derived from toluene (methylbenzene)
The xylyl group (), which is derived from xylene (dimethylbenzene)
The naphthyl group (), which is derived from naphthalene
Arylation is the process in which an aryl group is attached to a substituent. It is typically achieved by cross-coupling reactions.
Nomenclature
The simplest aryl group is phenyl, which is made up of a benzene ring with one of its hydrogen atoms replaced by some substituent, and has the molecular formula C6H5. Note that a phenyl group is not the same as a benzyl group, the latter consisting of a phenyl group attached to a CH2 group, with the molecular formula C6H5CH2.
To name compounds containing phenyl groups, the phenyl group can be taken to be the parent hydrocarbon and represented by the suffix "-benzene". Alternatively, the phenyl group can be treated as the substituent, being described within the name as "phenyl". This is usually done when the group attached to the phenyl group consists of six or more carbon atoms.
As an example, consider a hydroxyl group connected to a phenyl group. In this case, if the phenyl group were taken to be the parent hydrocarbon, the compound would be named hydroxybenzene. Alternatively, and more commonly, the hydroxyl group could be taken as the parent group and the phenyl group treated as the substituent, resulting in the more familiar name phenol.
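As an illustration of the point above, the equivalence of the two names can be checked with a cheminformatics toolkit. The following minimal Python sketch uses the open-source RDKit library (not mentioned in the article; any toolkit with canonical SMILES would do) to confirm that the "phenol" and "hydroxybenzene" structures are the same compound and to report its molecular formula.

```python
# Minimal sketch (not from the article): showing that "hydroxybenzene" and
# "phenol" describe one and the same structure, with formula C6H6O.
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

# SMILES for a hydroxyl group attached to a phenyl ring.
phenol = Chem.MolFromSmiles("Oc1ccccc1")          # named with -OH as parent: phenol
hydroxybenzene = Chem.MolFromSmiles("c1ccccc1O")  # named with benzene as parent

# The canonical SMILES are identical, so the two names denote one compound.
assert Chem.MolToSmiles(phenol) == Chem.MolToSmiles(hydroxybenzene)

print(Chem.MolToSmiles(phenol))   # canonical form of the shared structure
print(CalcMolFormula(phenol))     # 'C6H6O'
```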
Reactions
See also
Alkyl
Aryl hydrocarbon receptor, a bodily target for dioxins
Aryloxy group
Arene compound
References
External links
Substituents | Aryl group | [
"Chemistry"
] | 504 | [
"Substituents",
"Aryl groups",
"Functional groups"
] |
343,857 | https://en.wikipedia.org/wiki/Lawn%20mower | A lawn mower (also known as a grass cutter or simply mower, also often spelled lawnmower) is a device utilizing one or more revolving blades (or a reel) to cut a grass surface to an even height. The height of the cut grass may be fixed by the mower's design but generally is adjustable by the operator, typically by a single master lever or by a mechanism on each of the machine's wheels. The blades may be powered by manual force, with wheels mechanically connected to the cutting blades so that the blades spin when the mower is pushed forward, or the machine may have a battery-powered or plug-in electric motor. The most common self-contained power source for lawn mowers is a small 4-stroke (typically one-cylinder) internal combustion engine. Smaller mowers often lack any form of self-propulsion, requiring human power to move over a surface; "walk-behind" mowers are self-propelled, requiring a human only to walk behind and guide them. Larger lawn mowers are usually either self-propelled "walk-behind" types or, more often, are "ride-on" mowers that the operator can sit on and control. A robotic lawn mower ("lawn-mowing bot", "mowbot", etc.) is designed to operate either entirely on its own or less commonly by an operator on a remote control.
Two main styles of blades are used in lawn mowers. Lawn mowers employing a single blade that rotates about a single vertical axis are known as rotary mowers, while those employing a cutting bar and multiple blade assembly that rotates about a single horizontal axis are known as cylinder or reel mowers (although in some versions, the cutting bar is the only blade, and the rotating assembly consists of flat metal pieces which force the blades of grass against the sharp cutting bar).
There are several types of mowers, each suited to a particular scale and purpose. The smallest types, non-powered push mowers, are suitable for small residential lawns and gardens. Electrical or piston engine-powered push-mowers are used for larger residential lawns (although there is some overlap). Riding mowers, which sometimes resemble small tractors, are larger than push mowers and are suitable for large lawns. However, commercial riding lawn mowers (such as zero-turn mowers) can be "stand-on" types and often bear little resemblance to residential lawn tractors, being designed to mow large areas at high speed in the shortest time possible. The largest multi-gang (multi-blade) mowers are mounted on tractors and are designed for large expanses of grass such as golf courses and municipal parks, although they are ill-suited for complex terrain.
History
Invention
The lawn mower was invented in 1830 by Edwin Beard Budding of Stroud, Gloucestershire, England. Budding's mower was designed primarily to cut the grass on sports grounds and extensive gardens, as a superior alternative to the scythe, and was granted a British patent on August 31, 1830.
Budding's first machine was wide with a frame made of wrought iron. The mower was pushed from behind. Cast-iron gear wheels transmitted power from the rear roller to the cutting cylinder, allowing the rear roller to drive the knives on the cutting cylinder; the ratio was 16:1. Another roller placed between the cutting cylinder and the main or land roller could be raised or lowered to alter the height of cut. The grass clippings were hurled forward into a tray-like box. It was soon realized, however, that an extra handle was needed in front to help pull the machine along. Overall, these machines were remarkably similar to modern mowers.
Two of the earliest Budding machines sold went to Regent's Park Zoological Gardens in London and the Oxford colleges. In an agreement between John Ferrabee and Edwin Budding dated May 18, 1830, Ferrabee paid the costs of enlarging the small blades, obtained letters of patent and acquired rights to manufacture, sell and license other manufacturers in the production of lawn mowers. Without patent, Budding and Ferrabee were shrewd enough to allow other companies to build copies of their mower under licence, the most successful of these being Ransomes of Ipswich, which began making mowers as early as 1832.
His machine was the catalyst for the preparation of modern-style sporting ovals, playing fields (pitches), grass courts, etc. This led to the codification of modern rules for many sports, including for football, lawn bowls, lawn tennis and others.
Further improvements
It took ten more years and further innovations to create a machine that could be drawn by animals, and sixty years before a steam-powered lawn mower was built. In the 1850s, Thomas Green & Son of Leeds introduced a mower called the Silens Messor (meaning silent cutter), which used a chain drive to transmit power from the rear roller to the cutting cylinder. These machines were lighter and quieter than the gear-driven machines that preceded them, although they were slightly more expensive. The rise in popularity of lawn sports helped prompt the spread of the invention. Lawn mowers became a more efficient alternative to the scythe and domesticated grazing animals.
Manufacture of lawn mowers took off in the 1860s. By 1862, Ferrabee's company was making eight models in various roller sizes. He manufactured over 5000 machines until production ceased in 1863. The first grass boxes were flat trays but took their present shape in the 1860s. James Sumner of Lancashire patented the first steam-powered lawn mower in 1893. His machine burned petrol and/or paraffin (kerosene) as fuel. These were heavy machines that took several hours to warm up to operating pressure. After numerous advances, these machines were sold by the Stott Fertilizer and Insecticide Company of Manchester and Sumner. The company they both controlled was called the Leyland Steam Motor Company.
Around 1900, one of the best known English machines was the Ransomes' Automaton, available in chain- or gear-driven models. Numerous manufacturers entered the field with petrol (gasoline) engine-powered mowers after the start of the 20th century; the first was produced by Ransomes in 1902. JP Engineering of Leicester, founded after World War I, produced a range of very popular chain-driven mowers. About this time, an operator could ride behind animals that pulled the large machines; these were the first riding mowers.
The first United States patent for a reel lawn mower was granted to Amariah Hills on January 12, 1868. In 1870, Elwood McGuire of Richmond, Indiana designed a human-pushed lawn mower, which was very lightweight and a commercial success. John Burr patented an improved rotary-blade lawn mower in 1899, with the wheel placement altered for better performance. Amariah Hills went on to found the Archimedean Lawn Mower Co. in 1871.
In the United States, gasoline-powered lawn mowers were first manufactured in 1914 by Ideal Power Mower Co. of Lansing, Michigan, based on a patent by Ransom E. Olds. Ideal Power Mower also introduced the world's first self-propelled, riding lawn tractor in 1922, known as the "Triplex". The roller-drive lawn mower has changed very little since around 1930. Gang mowers, those with multiple sets of blades to cut a wider swath, were built in the United States in 1919 by the Worthington Mower Company.
Atco Ltd and the first motor mower
One of the most successful companies to emerge in the 1920s was Atco, at that time a brand name of Charles H Pugh Ltd. The Atco 'Standard' motor mower, launched in 1921, was an immediate success. Just 900 of the 22-inch-cut machines were made in 1921, each costing £75. Within five years, annual production had accelerated to tens of thousands. Prices were reduced and a range of sizes became available, making the Standard the first truly mass-produced engine-powered mower.
Rotary mowers
Rotary mowers were not developed until engines were small enough and powerful enough to run the blades at sufficient speed. Many people experimented with rotary blade mowers in the late 1920s and early 1930s, and Power Specialties Ltd. introduced a gasoline-powered rotary mower. Kut Kwick replaced the saw blade of the "Pulp Saw" with a double-edged blade and a cutter deck, converting the "Pulp Saw" into the first ever out-front rotary mower.
One company that produced rotary mowers commercially was the Australian Victa company, starting in 1952. Its mowers were lighter and easier to use than similar ones that had come before. The first Victa mowers were made at Mortlake, an inner suburb of Sydney, by local resident Mervyn Victor Richardson. He made his first model out of scrap in his garage. The first Victa mowers were then manufactured, going on sale on 20 September 1952. The new company, Victa Mowers Pty Ltd, was incorporated on 13 February 1953.
The venture was so successful that by 1958 the company moved to much larger premises in Parramatta Road, Concord, and then to Milperra, by which time the mower incorporated an engine, designed and manufactured by Victa, which was specially designed for mowing, rather than employing a general-purpose engine bought from outside suppliers. Two Victa mowers, from 1958 and 1968 respectively, are held in the collection of the National Museum of Australia. The Victa mower is regarded as something of an Australian icon, appearing en masse, in simulated form, at the opening of the Sydney Olympic Games in 2000.
The hover mower, first introduced by Flymo in 1964, is a form of rotary mower using an air cushion on the hovercraft principle.
Types
By rotation
Cylinder or reel mowers
A cylinder mower or reel mower carries a fixed, horizontal cutting blade at the desired height of cut. Over this is a fast-spinning reel of blades which force the grass past the cutting bar. Each blade in the blade cylinder forms a helix around the reel axis, and the set of spinning blades describes a cylinder.
Of all the mowers, a properly adjusted cylinder mower makes the cleanest cut of the grass, and this allows the grass to heal more quickly. The cut of a well-adjusted cylinder mower is straight and definite, as if cut with a pair of scissors. This clean cut promotes healthier, thicker and more resilient lawn growth that is more resistant to disease, weeds and parasites. Lawn cut with a cylinder mower is less likely to result in yellow, white or brown discolouration as a result of leaf shredding. While the cutting action is often likened to that of scissors, it is neither necessary nor desirable for the blades of the spinning cylinder to contact the horizontal cutting bar. When the reel touches the cutting bar the work required by the mower increases dramatically. When the gap between the blades is less than the thickness of the grass blades, a clean cut can still be made without additional friction. When the gap is greater than the thickness of the grass blades, grass will slip through the gap uncut. Reel mowers also have more difficulty mowing over uneven terrain.
There are many variants of the cylinder mower. Push mowers have no engine and are usually used on smaller lawn areas where access is a problem, where noise pollution is undesirable and where air pollution is unwanted. As the mower is pushed along, the wheels drive gears which rapidly spin the reel. Typical cutting widths are . Advances in materials and engineering have resulted in these mowers being very light and easy to operate and manoeuvre compared with their predecessors while still giving all the cutting advantages of professional cylinder mowers. Their distinct environmental benefits, both in noise and air pollution, are also strong selling points, something not lost on many international zoos, animal sanctuaries and exclusive hotel groups.
The basic push mower mechanism is also used in gangs towed behind a tractor. The individual mowers are arranged in a "v" behind the tractor with each mower's track slightly overlapping that of the mower in front of it. Gang mowers are used over large areas of turf such as sports fields or parks.
A gasoline engine or electric motor can be added to a cylinder mower to power the cylinder, the wheels, the roller, or any combination of these. A typical arrangement on electric powered machines for residential lawns is for the motor to power the cylinder while the operator pushes the mower along. The electric models can be corded or cordless. On petrol machines the engine drives both the cylinder and the rear roller. Some variants have only three blades in a reel spinning at great speed, and these models are able to cut grass which has grown too long for ordinary push mowers. One type of reel mower, now largely obsolete, was a powered version of the traditional side-wheel push mower, which was used on residential lawns. An internal combustion engine sat atop the reel housing and drove the wheels, usually through a belt. The wheels in turn drove the reel, as in the push mower.
Greens mowers are used for the precision cutting of golf greens and have a cylinder made up of at least eight, but normally ten, blades. The machine has a roller before and after the cutting cylinder which smooths the freshly cut lawn and minimizes wheel marks. Due to the weight, the engine also propels the mower. Much smaller and lighter variants of the roller mower are sometimes used for small patches of ornamental lawns around flower beds, and these have no engine.
Riding reel mowers are also produced. Typically, the cutting reels are ahead of the vehicle's main wheels, so that the grass can be cut before the wheels push the grass over onto the ground. The reels are often hydraulically powered.
The main parts of a cylinder or reel mower are:
Blade reel/cylinder: Consists of numerous (3 to 7) helical blades that are attached to a rotating shaft. The blades rotate, creating a scissor-like cutting motion against the bed knife.
Bed knife: The stationary cutting mechanism of a cylinder/reel mower. This is a fixed horizontal blade that is mounted to the frame of the mower.
Body frame: The main structural frame of the mower onto which the other parts of the mower are mounted.
Wheels: Help propel the mower in action. Generally, reel mowers have two wheels.
Push handle: The "power source" of a manually operated reel mower. This is a sturdy T-shaped, rectangular, or trapezoidal handle that is connected to the frame, wheels and blade chamber.
Motor: The power source of a reel mower that is powered by gasoline or electricity.
Rotary mowers
A rotary mower rotates about a vertical axis with the blade spinning at high speed relying on impact to cut the grass. This tends to result in a rougher cut and bruises and shreds the grass leaf resulting in discolouration of the leaf ends as the shredded portion dies. This is particularly prevalent if the blades become clogged or blunt. Most rotary mowers need to be set a little higher than cylinder equivalents to avoid scalping and gouging of slightly uneven lawns, although some modern rotaries are fitted with a rear roller to provide a more formal striped cut. These machines will also tend to cut lower () than a standard four-wheeled rotary.
The main parts of a rotary mower are:
Cutter deck housing: Houses the blade and the drive system of the mower. It is shaped to effectively eject the grass clippings from the mower.
Blade mounting and drive system: The blade of a rotary mower is usually mounted directly to the crankshaft of its engine, but it can be propelled by a hydraulic motor or a belt pulley system.
Mower blade: A blade that rotates in a horizontal plane (about a vertical axis). Some mowers have multiple blades. The blade features edges that are slightly curved upward to generate a continuous air flow as the blade rotates (as a fan), thus creating a sucking and tearing action.
Engine/motor: May be powered by gasoline or electricity.
Wheels: Generally four wheels, two front and two rear. Some mowers have a roller in place of the rear wheels.
By energy source
Gasoline (petrol)
Extensive grass trimming was not common before the widespread application of the vertical shaft single cylinder gasoline/petrol engine. In the United States this development paralleled the market penetration of companies such as the Briggs & Stratton company of Wisconsin.
Most rotary push mowers are powered by internal combustion engines. Such engines are usually four-stroke engines, used for their greater torque and cleaner combustion (although a number of older models used two-stroke engines), running on gasoline (petrol) or other liquid fuels. Internal combustion engines used with lawn mowers normally have only one cylinder. Power generally ranges from four to seven horsepower. The engines usually have a carburetor and require a manual pull crank to start them, although an electric starter is offered on some models, particularly large riding and commercial mowers. Some mowers have a throttle control on the handlebar with which the operator can adjust the engine speed. Other mowers have a fixed, pre-set engine speed. All are equipped with a governor (often centrifugal/mechanical or air-vane style) to open the throttle as needed to maintain the pre-selected speed when the extra force needed to cut thicker or taller grass is encountered. Gasoline mowers have the advantages over electric mowers of greater power and distance range. However, they create a significant amount of pollution due to combustion in the engine, and their engines require periodic maintenance such as cleaning or replacement of the spark plug and air filter, and changing the engine oil. California passed Assembly Bill 1356, an air pollution control law, on October 9, 2021. The bill barred sales of spark-ignited (gasoline-fueled) internal combustion engines of less than 25 HP used for farm or construction machines as of January 1, 2024. It does not ban turf care machines larger than 25 HP or those powered by compression-ignition (diesel) engines.
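The governor behaviour described above, opening the throttle to hold a pre-set speed as the cutting load changes, is in essence simple proportional feedback. The sketch below is a toy simulation only; every constant (target rpm, gain, torque figures) is an illustrative assumption rather than data for any real engine.

```python
# Toy sketch of a mower governor as proportional control: the throttle opens in
# proportion to how far engine speed has dropped below the pre-selected setting.
# All constants are illustrative assumptions, not manufacturer data.

TARGET_RPM = 3000.0   # pre-selected engine speed
GAIN = 0.002          # throttle fraction added per rpm of speed error
BASE_THROTTLE = 0.5   # throttle at zero speed error
DT = 0.01             # simulation time step in seconds

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def simulate(load_torque, seconds=20.0):
    """Return final engine rpm for a given load-torque function of time."""
    rpm = TARGET_RPM
    for step in range(int(seconds / DT)):
        t = step * DT
        throttle = clamp(BASE_THROTTLE + GAIN * (TARGET_RPM - rpm))
        engine_torque = 12.0 * throttle                        # assumed N*m at full throttle
        rpm += (engine_torque - load_torque(t)) * 25.0 * DT    # crude inertia model
    return rpm

# Thicker grass after t = 10 s roughly doubles the cutting load; the governor
# opens the throttle and holds speed close to (slightly below) the setting.
print(f"{simulate(lambda t: 4.0 if t < 10.0 else 8.0):.0f} rpm")
```

As with a real mechanical governor, the pure proportional action leaves a small steady-state "droop" below the target speed when the load rises.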
Electricity
Electric mowers are further subdivided into corded and cordless electric models. Both are relatively quiet, typically producing less than 75 decibels, while a gasoline lawn mower can be 95 decibels or more.
Corded electric mowers are limited in range by their trailing power cord, which may limit their use with lawns extending outward more than 100–150 feet (30–45 m) from the nearest available power outlet. There is the additional hazard with these machines of accidentally mowing over the power cable, which stops the mower and may put users at risk of receiving a dangerous electric shock. Installing a residual-current device (GFCI) on the outlet may reduce the shock risk.
Cordless electric mowers are powered by a variable number (typically 1–4) of 12-to-80-volt rechargeable batteries. Typically, more batteries mean more run time and/or power (and more weight). Batteries can be in the interior of the lawnmower or on the outside. If on the outside, the depleted batteries can be quickly swapped with recharged batteries. Cordless mowers have the maneuverability of a gasoline-powered mower and the environmental friendliness of a corded electric mower, but they are more expensive and come in fewer models (particularly the self-propelling type) than either. The eventual disposal of worn-out batteries is problematic (though some manufacturers offer to recycle them), and the motors in some cordless mowers tend to be less powerful than gasoline motors of the same total weight (including batteries).
Propane
Lawn mowers powered by propane were also manufactured by Lehr.
By hand
In hand-powered lawn mowers, the reel is attached to the mower's wheels by gears, so that when the mower is pushed forward, the reel spins several times faster than the plastic or rubber-tired wheels turn. Depending on the placement of the reel, these mowers often cannot cut grass very close to lawn obstacles.
Other notable types
Hover mowers
Hover mowers are powered rotary push lawn mowers that use an impeller above the spinning blades to drive air downward, thereby creating an air cushion that lifts the mower above the ground. The operator can then easily move the mower as it floats over the grass. Hover mowers are necessarily light in order to achieve the air cushion and typically have plastic bodies with an electric motor. The most significant disadvantage, however, is the cumbersome usability in rough terrain or on the edges of lawns, as the lifting air-cushion is destroyed by wide gaps between the chassis and the ground. Hover mowers are built to operate on steep slopes, waterfronts, and high-weeded areas, so they are often used by golf course greenskeepers and commercial landscapers. Grass collection is often available, but can be poor in some models. The quality of cut can be inferior if the grass is pushed away from the blade by the cushion of air.
Robotic mowers
Tractor pulled mowers
Tractor pulled mowers are usually in the form of an attachment to a tractor. The attachments can simply function by the movement of the tractor similar to manual push cylinder mowers, but also sometimes may have powered moving blades. They are commonly mounted on either the side or the back of the tractor.
Riding lawn mowers
Riding mowers (U.S. and Canada) or ride-on mowers (U.K. and Canada) are a popular alternative for large lawns. The operator is provided with a seat and controls on the mower and literally rides on the machine. Most use the horizontal rotating blade system, though usually with multiple blades. A common form of ride-on mower is the lawn tractor. These are usually designed to resemble a small agricultural tractor, with the cutting deck mounted amidships between the front and rear axles.
The drives for these mowers are in several categories. The most common transmission for tractors is a manual transmission. The second most common transmission type is a form of continuously variable transmission, called hydrostatic transmission. These transmissions take several forms, from pumps driving separate motors, which may incorporate a gear reduction, to fully integrated units containing a pump, motor and gear reduction. Hydrostatic transmissions are more expensive than mechanical transmissions, but they are easier to use and can transmit greater torque to the wheels compared to a typical mechanical transmission. The least common drive type, and the most expensive, is electric.
There have been a number of attempts to replace hydrostatic transmissions with lower cost alternatives, but these attempts, which include variable belt types, e.g. MTD's "Auto Drive", and toroidal, have various performance or perception problems that have caused their market life to be short or their market penetration to be limited.
Riding lawn mowers can often mount other devices, such as rototillers/rotavators, snow plows, snow blowers, yard vacuums, occasionally even front buckets or fork-lift tines (these are more properly known as "lawn tractors" in this case, being designed for a number of tasks). The ability to tow other devices is because they have multiple gears, often up to 5 or 6 and variable top speeds. Compact utility tractors equipped with a belly mower can look similar to riding lawn mowers, but they are typically larger, equipped with diesel engines, and feature a three-point hitch and rollover protection structure; these features are generally absent on riding lawn mowers.
The deck of a rotary mower is typically made of steel. Lighter steel is used on less expensive models, and heavier steel on more expensive models for durability. Other deck materials include aluminium, which does not rust and is a staple of higher priced mowers, and hard composite plastic, which does not rust and is lighter and less expensive than aluminium. Electric mowers typically have a plastic deck.
Riding mowers typically have an opening in the side or rear of the housing where the cut grass is expelled, as do most rotary lawn mowers. Some have a grass catcher attachment at the opening to bag the grass clippings.
Mulching mowers
Mulching mowers use special mulching blades which are available for rotary mowers. The blade is designed to keep the clippings circulating underneath the mower until the clippings are chopped quite small. Other designs have twin blades to mulch the clippings to small pieces. This function has the advantages of forgoing the additional work collecting and disposing of grass clippings while reducing lawn waste in such a way that also creates convenient compost for the lawn, forgoing the expense and adverse environmental effect of fertilizer.
Mower manufacturers market their mowers as side discharge, 2-in-1, meaning bagging and mulching or side discharging and mulching, and 3-in-1, meaning bagging, mulching, and side discharge. Most 2-in-1 bagging and mulching mowers require a separate attachment to discharge grass onto the lawn. Some side discharge mower manufacturers also sell separate "mulching plates" that will cover the opening on the side discharge mower and, in combination with the proper blades, will convert the mower to a mulching mower. These conversions are impractical when compared with 2- or 3-in-1 mowers which can be converted in the field in seconds. There are two types of bagging mowers. A rear bag mower features an opening on the back of the mower through which the grass is expelled into the bag. Hi-vac mowers have a tunnel that extends from the side discharge to the bag. Hi-vac is also the type of grass collection used on some riding lawn mowers and lawn tractors and is suitable for use in dry conditions but less suitable for long wet lush grass as they often clog up. Mulching and bagging mowers are not well suited to long grass or thick weeds. In some ride-on mowers, the cut grass is dropped onto the ground and then collected by a set of rotating bristles, allowing even long, wet grass to be collected.
Rotary mowers with internal combustion engines come in three price ranges. Low priced mowers use older technology, smaller motors, and lighter steel decks. These mowers are targeted at the residential market and typically price is the most important selling point.
Professional mowers
Professional grass-cutting equipment, used by large establishments such as universities, sports stadiums and local authorities, usually take the form of much larger, dedicated, ride-on platforms or attachments that can be mounted on, or behind, a standard tractor unit (a "gang-mower"). Either type may use rotating-blade or cylindrical-blade type cutters, although high-quality mowed surfaces demand the latter. Wide-area mowers (WAMs) are commercial grade mowers which have decks extended to either side, many to . These extensions can be lowered for large area mowing or raised to decrease the mower's width and allow for easy transport on city roads or trailers. Commercial lawn-mowing companies have also enthusiastically adopted types such as the zero-turn mower (in both ride-on and stand-on versions), which allow high speed over the grass surface, and rapid turnaround at the end of rows, as well as excellent maneuverability around obstacles.
Mowers mounted on a tractor's three-point hitch may be known as finish mowers used for maintaining lawn, flail mowers used for maintaining rough grass on rough surfaces, or brush mowers used for cutting brush and small trees.
Safety issues
Rotary mowers can throw out debris with extreme velocity and energy. Additionally, the blades of a self-powered push mower (gasoline or electric) can injure a careless or inattentive user; consequently, many come equipped with a dead man's switch to immediately disable the blade rotation when the user is no longer holding the handle. In the United States, over 12,000 people per year are hospitalized as a result of lawn mower accidents. In 2016, 86,000 adults and 4,500 children were admitted to the emergency room for lawnmower injuries. The vast majority of these injuries can be prevented by wearing protective footwear when mowing. The American Academy of Pediatrics recommends that children be at least 12 years old before they are allowed to use a walk-behind lawn mower and at least 16 years of age before using a riding mower and that they "should not operate lawn mowers until they have displayed the necessary levels of judgment, strength, coordination, and maturity". Persons using a mower should wear heavy footwear, eye protection, and hearing protection in the case of engine-powered mowers.
Environmental and occupational impact
A 2001 study showed that some mowers produce the same amount of pollution (emissions other than carbon dioxide) in one hour as driving a 1992 model vehicle for . Another estimate puts the amount of pollution from a lawn mower at four times the amount from a car, per hour, although this report is no longer available. Beginning in 2011, the United States Environmental Protection Agency set standards for lawn equipment emissions and expects a reduction of at least 35 percent.
Gas powered lawn mowers produce GHG emissions. A minimum-maintained lawn management practice with clipping recycling, and minimum irrigation and mowing, is recommended to mitigate global warming effects from urban turfgrass system.
Battery-powered lawn mowers offer cleaner alternatives to consumers by producing zero emissions, being more efficient, and eliminating risks of spilled gasoline. Gasoline-powered lawnmowers are not regulated to have emission-capturing technology.
Mowers can create significant noise pollution, and could cause hearing loss if used without hearing protection for prolonged periods of time. Lawn mowers also present an occupational hearing hazard to the nearly one million people who work in lawn service and ground-keeping. One study assessed the occupational noise exposure among groundskeepers at several North Carolina public universities and found noise levels from push lawn mowers measured between 86 and 95 decibels (A-weighted) and from riding lawn mowers between 88 and 96 dB(A); both types exceeded the National Institute for Occupational Safety and Health (NIOSH) Recommended Exposure Limit of 85 dB(A).
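The NIOSH limit cited above is an 85 dB(A) eight-hour time-weighted average with a 3 dB exchange rate, meaning each 3 dB increase in level halves the recommended daily exposure time. A short sketch of that arithmetic, applied to the levels reported in the study (the helper function name is ours, not NIOSH's):

```python
# Recommended exposure duration under the NIOSH criterion:
# 85 dB(A) for 8 hours, with a 3 dB exchange rate (each +3 dB halves the time).
def niosh_recommended_hours(level_dba: float) -> float:
    return 8.0 / (2.0 ** ((level_dba - 85.0) / 3.0))

# Levels measured in the groundskeeper study cited above.
for level in (86, 90, 95, 96):
    print(f"{level} dB(A): about {niosh_recommended_hours(level):.1f} h per day")
# 86 dB(A): ~6.3 h, 90 dB(A): ~2.5 h, 95 dB(A): ~0.8 h, 96 dB(A): ~0.6 h
```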
The risk of hearing loss and noise pollution can be reduced by using battery-operated mowers or appropriate hearing protection such as earplugs or earmuffs.
It is possible for a lawn mower to damage the underlying soil, the roots of the grass, and the mower itself if the blades cut through the grass and collide with the underlying ground. Therefore, it is important to adjust mower height properly and choose the right tires for a lawn mower to prevent any marking or digging.
See also
Alvin Straight
Ambient noise level
Groundskeeping (includes list of equipment)
Lawn mower racing
Noise control
Non-road engine
Organic lawn management
Roll over protection structure (for lawn tractors or ride-on mowers)
Small engine
References
Further reading
External links
English inventions
Gardening tools
Mower
1830 introductions
Home appliances
19th-century inventions | Lawn mower | [
"Physics",
"Technology"
] | 6,565 | [
"Physical systems",
"Machines",
"Home appliances"
] |
343,934 | https://en.wikipedia.org/wiki/Design%20life | The design life of a component or product is the period of time during which the item is expected by its designers to work within its specified parameters; in other words, the life expectancy of the item. It is not always the actual length of time between placement into service of a single item and that item's onset of wearout.
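The gap between a stated design life and the actual onset of wearout can be made concrete with a simple reliability model. The sketch below assumes a Weibull time-to-failure distribution with purely illustrative parameters (nothing here comes from the article) and asks what fraction of units would still be working at the end of, and well beyond, their design life.

```python
import math

# Illustrative sketch: a product with a 5-year design life whose actual
# time-to-wearout is modeled as Weibull-distributed (assumed parameters).
design_life_years = 5.0
shape = 2.5   # beta > 1: wear-out failures become more likely with age
scale = 9.0   # eta: characteristic life in years (63.2% have failed by then)

def weibull_survival(t: float) -> float:
    """Probability that a unit is still working at age t."""
    return math.exp(-((t / scale) ** shape))

print(f"Fraction surviving the design life: {weibull_survival(design_life_years):.2%}")
print(f"Fraction surviving twice the design life: {weibull_survival(2 * design_life_years):.2%}")
```

Under these assumed parameters most units outlive the design life, and a substantial minority outlive it by a factor of two, which is exactly the distinction the definition above draws between design life and actual service life.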
Another use of the term design life deals with consumer products. Many products employ design life as one factor of their differentiation from competing products and components. A disposable camera is designed to withstand a short life, whilst an expensive single-lens reflex camera may be expected to have a design life measured in years or decades.
Long design lives
Some products designed for heavy or demanding use are so well made that they are retained and used well beyond their design life. Some public transport vehicles come into this category, as do a number of artificial satellites and spacecraft. In general, entry-level products—those at the lowest end of the price range fulfilling a certain specification—tend to have shorter design lives than more expensive products fulfilling the same function: there are savings to be made by using designs that are cheaper to implement, while, conversely, the extra engineering that provides a safety margin and a longer working life is a cost passed on to the customer. This economic truism leads to the phenomenon of products designed (or appearing to be designed) to last only as long as their warranty period.
Obsolescence
Design life is related to but distinct from the concept of planned obsolescence. The latter is the somewhat more nebulous notion that products are designed so as to become obsolete—at least in the eyes of the user—before the end of their design life. Two classic examples here are digital cameras, which become genuinely obsolete as a result of the very rapid rate of technological advances, although still in perfect working order; and non-digital cameras, which are perceived as obsolete after a year or so as they are no longer "the latest design" although actually capable of years of useful service.
See also
Availability
Circular economy
Consumables
Disposable product
Durability
Interchangeable parts
Maintainability
Planned obsolescence
Repairability
Service life
Source reduction
Throwaway society
ISO 15686
References
Design for the Real World: Human Ecology and Social Change, Victor Papanek, Academy Chicago Publishers, 2nd rev. edition (December 1985).
Industrial design
Product design
Sustainable design | Design life | [
"Engineering"
] | 487 | [
"Industrial design",
"Design engineering",
"Design",
"Product design"
] |
344,025 | https://en.wikipedia.org/wiki/Hyperreality | Hyperreality is a concept in post-structuralism that refers to the process of the evolution of notions of reality, leading to a cultural state of confusion between signs and symbols invented to stand in for reality, and direct perceptions of consensus reality. Hyperreality is seen as a condition in which, because of the compression of perceptions of reality in culture and media, what is generally regarded as real and what is understood as fiction are seamlessly blended together in experiences so that there is no longer any clear distinction between where one ends and the other begins.
The term was proposed by French philosopher Jean Baudrillard, whose postmodern work contributed to a scholarly tradition in the field of communication studies that speaks directly to larger social concerns. Postmodernism was established through the social turmoil of the 1960s, spurred by social movements that questioned preexisting conventions and social institutions. Through the postmodern lens, reality is viewed as a fragmented, complementary and polysemic system with components that are produced by social and cultural activity. Social realities that constitute consensus reality are constantly produced and reproduced, changing through the extended use of signs and symbols, which thus contribute to the creation of a greater hyperreality.
Origins and usage
The postmodern semiotic concept of hyperreality was contentiously coined by Baudrillard in Simulacra and Simulation (1981) and his earlier book Symbolic Exchange and Death. Baudrillard defined "hyperreality" as "the generation by models of a real without origin or reality"; hyperreality is a representation, a sign, without an original referent. According to Baudrillard, the commodities in this theoretical state do not have use-value as defined by Karl Marx but can be understood as signs as defined by Ferdinand de Saussure. He believes hyperreality goes further than confusing or blending the 'real' with the symbol which represents it; it involves creating a symbol or set of signifiers which represent something that does not actually exist, like Santa Claus. Baudrillard borrows, from Jorge Luis Borges' "On Exactitude in Science" (itself borrowed from Lewis Carroll), the example of a society whose cartographers create a map so detailed that it covers the very things it was designed to represent. When the empire declines, the map fades into the landscape. He says that, in such a case, neither the representation nor the real remains, just the hyperreal.
Baudrillard's idea of hyperreality was heavily influenced by phenomenology, semiotics, and the philosophy of Marshall McLuhan. Baudrillard, however, challenges McLuhan's famous statement that "the medium is the message," by suggesting that information devours its own content. He also suggested that there is a difference between the media and reality and what they represent. Hyperreality is the inability of consciousness to distinguish reality from a simulation of reality, especially in technologically advanced societies. However, Baudrillard's hyperreality theory goes a step further than McLuhan's medium theory: "There is not only an implosion of the message in the medium, there is, in the same movement, the implosion of the medium itself in the real, the implosion of the medium and of the real in a sort of hyperreal nebula, in which even the definition and distinct action of the medium can no longer be determined".
Italian author Umberto Eco explores the notion of hyperreality further by suggesting that the action of hyperreality is to desire reality and in the attempt to achieve that desire, to fabricate a false reality that is to be consumed as real. Linked to contemporary western culture, Umberto Eco and post-structuralists would argue that in current cultures, fundamental ideals are built on desire and particular sign-systems. Temenuga Trifonova from University of California, San Diego notes,
Significance
Hyperreality is significant as a paradigm to explain current cultural conditions. Consumerism, because of its reliance on sign exchange value (e.g. brand X shows that one is fashionable, car Y indicates one's wealth), could be seen as a contributing factor in the creation of hyperreality or the hyperreal condition. Hyperreality tricks consciousness into detaching from any real emotional engagement, instead opting for artificial simulation, and endless reproductions of fundamentally empty appearance. Essentially (although Baudrillard himself may balk at the use of this word), fulfillment or happiness is found through simulation and imitation of a transient simulacrum of reality, rather than any interaction with any "real" reality.
While hyperreality is not a new concept, its effects are more relevant in modern society, incorporating technological advancements like artificial intelligence, virtual reality and neurotechnology (simulated reality). This is attributed to the way it effectively captured the postmodern condition, particularly how people in the postmodern world seek stimulation by creating unreal worlds of spectacle and seduction and nothing more. There are dangers to the use of hyperreality within our culture; individuals may observe and accept hyperreal images as role models when the images don't necessarily represent real physical people. This can result in a desire to strive for an unobtainable ideal, or it may lead to a lack of unimpaired role models. Daniel J. Boorstin cautions against confusing celebrity worship with hero worship, "we come dangerously close to depriving ourselves of all real models. We lose sight of the men and women who do not simply seem great because they are famous but who are famous because they are great". He bemoans the loss of old heroes like Moses, Odysseus, Aeneas, Jesus, Julius Caesar, Muhammed, Joan of Arc, William Shakespeare, George Washington, Napoleon, and Abraham Lincoln, who did not have public relations (PR) agencies to construct hyperreal images of themselves. The dangers of hyperreality are also facilitated by information technologies, which provide tools to dominant powers that seek to encourage it to drive consumption and materialism. The danger in the pursuit of stimulation and seduction emerge not in the lack of meaning but, as Baudrillard maintained, "we are gorged with meaning and it is killing us."
Hyperreality, some sources point out, may provide insights into the postmodern movement by analyzing how simulations disrupt the binary opposition between reality and illusion but it does not address or resolve the contradictions inherent in this tension.
Key relational themes
The concepts most fundamental to hyperreality are those of simulation and the simulacrum, first conceptualized by Jean Baudrillard in his book Simulacra and Simulation. The two terms are separate entities with relational origin connections to Baudrillard's theory of hyperreality.
Simulation
Simulation is characterized by a blending of 'reality' and representation, where there is no clear indication of where the former stops and the latter begins. Simulation is no longer that of a territory, a referential being, or a substance; "It is the generation by models of a real without origin or reality: a hyperreal." Baudrillard suggests that simulation no longer takes place in a physical realm; it takes place within a space not categorized by physical limits i.e., within ourselves, technological simulations, etc.
Simulacrum
The simulacrum is "an image without resemblance"; as Gilles Deleuze summarized, it is the forsaking of "moral existence in order to enter into aesthetic existence". However, Baudrillard argues that a simulacrum is not a copy of the real, but becomes—through sociocultural compression—truth in its own right.
There are four steps of hyperreal reproduction:
Basic reflection of reality, i.e. in immediate perception
Perversion of reality, i.e. in representation
Pretense of reality, where there is no model
Simulacrum, which "bears no relation to any reality whatsoever"
Hyperstition
The concept of "hyperstition" as expounded upon by the English collective Cybernetic Culture Research Unit generalizes the notion of hyperreality to encompass the concept of "fictional entities that make themselves real." In Nick Land's own words:
The concept of hyperstition is also related to the concept of "theory-fiction", in which philosophy, critical theory and postmodern literature speculate on actual reality and engage with concepts for potentialities and virtualities. An oft-cited example of such a concept is cyberspace—originating in William Gibson's 1984 novel Neuromancer—which is a concept for the convergence between virtualities and actualities. By the mid-1990s, the realization of this concept had begun to emerge on a mass scale in the form of the internet.
Consequence
The notion of truth was already being called into question by the rise of media and technology, and the embrace of hyperreality as these technologies spread has brought further consequences. It is one thing to hear a report on the news and choose not to believe it; it is quite another to see an image of an event and have to rely on one's own senses to judge whether it is true or false. The first consequence is that simulations can be used to influence an audience, producing an inability to differentiate fiction from reality and undermining the truth value of the subject at hand. A second is the possibility of being manipulated by what we see.
The audience can take away different messages depending on the ideology of the entity behind an image; as a result, power equates to control over the media and over the people. Celebrities, for example, have their photographs taken and altered before the public sees the final result, so the public perceives them on the basis of what it has seen rather than how they truly are, sometimes to the point where the celebrities appear completely different. Such body modification and image editing has been linked to an increase in cosmetic surgery and a decrease in self-esteem during adolescence. Because the truth itself is threatened in this way, a similar outcome is possible wherever hyperreality takes hold.
In culture
As society has transitioned toward a consumer culture, the combination of the free market economy and the advancements found within media and communication technologies have influenced this development towards a hyperreality. Through the emergence of new media technologies and the ever-growing role of media found within the modern day, a growing link is displayed between the incorporation and effects of hyperreality. The transition from Web 1.0 to Web 2.0 to Web3 has been studied as a process of transitioning towards hyperreality. On the basic level of hyperreality, Web 1.0 was designed for freely downloading and reading information, with readers being able to search for topics; Yahoo, Google, and MSN are examples of Web 1.0. Instagram, TikTok, and Messenger are examples of Web 2.0 platforms that transformed what was once a reading platform into an interaction platform. Web3 is a newer platform that allows users to fully integrate the virtual world into decentralized and autonomously controlled environments, such as Filecoin and the metaverse.
There is a strong link between media and the impact that the presence of hyperreality has on its viewers. This has shown to blur the lines between artificial realities and reality, influencing the day to day experiences of those exposed to it. As hyperreality captures the inability to distinguish reality from a simulation of reality, common media outlets such as news, social media platforms, radio and television contribute to this misconception of true reality. Descriptions of the impact of hyperreality can be found in popular media. They present themselves as becoming blended with reality, which influences the experience of life and truth for its viewers.
Baudrillard, like Roland Barthes before him, explained that these impacts have a direct effect on younger generations who idolize the heroes, characters or influencers found on these platforms. As media is a social institution that shapes and develops its members within society, the exposure to hyperreality found within these platforms presents an everlasting effect. Baudrillard concludes that exposure to hyperreality over time will lead, from the conservative perspective of the institutions themselves, to confusion and chaos, in turn leading to the destruction of identity, originality and character while ironically still being the mainstay of the institutions.
Social media and public image
With the introduction of the smartphone in the early 2000s, online presence and presence in the real world have become synonymous. An individual's digital footprint can often tell us more about them than their real lives do. This is because people's behaviors can change dramatically on the internet with virtually no repercussions or laws constraining them; the internet has become the anarchist's safe haven. The role of social media in society has dramatically increased in recent decades, and creating a public image or online presence has become standard. Twitter has become a main source for public figures to express themselves and for corporations to inform the public. The hyperreality environment on the internet shifted dramatically over the course of the COVID-19 pandemic, so much so that it influenced the Italian Stock Exchange in 2021. The hyperreality created on social media platforms has been regarded as strong and influential enough for its quality and emotion to be translated into social reality, where value is lost and careers are damaged. Emotions expressed on social media are having direct real-life effects on numerous sectors despite lacking any factual basis or tangible information. As social media becomes more ingrained into the daily lives of countless individuals, the distinction between stories on the internet and truth in real life is becoming more blurred as it descends into the core of hyperreality.
Squid Game created hyperreal conditions on the internet where millions were sharing their own feelings and opinions about the show, even going as far as to play the games and practice the activities portrayed in the show. The hyperreal conditions were created so effectively that individuals were picking up unique Korean cultural aspects but only giving credit to the show and not the country; individuals believed the show created these games. This is hugely significant because it illustrates Baudrillard's notion of models of reality without reality; a fictional TV show produced real events and practices and completely removed the real cultural significance. The Hollywood sign in Los Angeles, California, itself produces similar notions, but is more a symbol of a facet of hyperreality—the creation of a city with its main target being media production.
The increase in social media influencers has given rise to a popular "storytelling" trend, where creators recount past experiences, often exaggerating and dramatizing the experience for perceived importance and relevance. The trend mixes reality with the virtual world as viewers often feel part of the creators' lives and identify with the image the creator produces for their audience. Social media currently offers what news and other sources of media could not forty years ago: the chance not only to share news but also to create it. To exaggerate this even further, TikTok has seen the emergence of AI accounts that present themselves as human-like animated beings with unique personalities, artificial social circles and personal likes and interests. Once designed by humans, now completely independent of any influence, these AI creations have mass followings and present the conditions of Baudrillard's fourth stage of simulation, the pure simulacrum. With the incentive of viewership and notoriety, social media influencers and creators have little incentive to produce meaningful and actual news and instead lean toward these storytelling methods, which produce large reactions and blur the line between reality and false online narratives.
Disneyland
Both Umberto Eco and Jean Baudrillard refer to Disneyland as an example of hyperreality. Eco believes that Disneyland with its settings such as Main Street and full sized houses has been created to look "absolutely realistic", taking visitors' imagination to a "fantastic past". This false reality creates an illusion and makes it more desirable for people to buy this reality. Disneyland works in a system that enables visitors to feel that technology and the created atmosphere "can give us more reality than nature can". The "fake nature" of Disneyland satisfies our imagination and daydream fantasies in real life. The idea is that nothing in this world is real. Nothing is original, but all are endless copies of reality. Since we do not imagine the reality of simulations, both imagined and real are equally hyperreal, for example, the numerous simulated rides, including the submarine ride and the Mississippi boat tour. When entering Disneyland, consumers form into lines to gain access to each attraction. Then they are ordered by people with special uniforms to follow the rules, such as where to stand or where to sit. If the consumers follow each rule correctly, they can enjoy "the real thing" and see things that are not available to them outside of Disneyland's doors.
In his work Simulacra and Simulation, Baudrillard argues the "imaginary world" of Disneyland magnetizes people inside and has been presented as "imaginary" to make people believe that all its surroundings are "real". But he believes that the Los Angeles area is not real; thus it is hyperreal. Disneyland is a set of apparatuses which tries to bring imagination and fiction to what is called "real". This concerns the American values and way of life in a sense and "concealing the fact that the real is no longer real, and thus of saving the reality principle."
"The Disneyland imaginary is neither true or false: it is a deterrence machine set up in order to rejuvenate in reverse the fiction of the real. Whence the debility, the infantile degeneration of this imaginary. It's meant to be an infantile world, in order to make us believe that the adults are elsewhere, in the "real" world, and to conceal the fact that real childishness is everywhere, particularly among those adults who go there to act the child in order to foster illusions of their real childishness."
Examples
The hyperrealist painter Denis Peterson intentionally emphasized familiar signs and images which did not in fact faithfully reveal true reality. Instead, he coalesced these alternate perceptions of realities into subliminal depictions of contemporary cultures and boldly launched them into his body of work as hyperreality. No longer satisfied with an art-for-art's sake approach to realist cityscapes and the like, Peterson saw hyperreality as a vehicle for social change, oftentimes conjuring themes of corruption, decadence, and genocide in his subject matter.
The 1999 film Existenz follows Allegra Geller, a game designer who finds herself targeted by assassins while playing a virtual reality game of her own creation.
The 1999 film The Thirteenth Floor: a murder mystery set within a cutting edge computer simulator, exploring ownership and abuse within the Utopian ideals of AI.
The 2008 film Synecdoche, New York in which the life of the main character Caden Cotard is lived in the confines of a warehouse made to be the set of a play which is about his life, blurring all distinction between what is real and the simulation.
The 2014 film Birdman portrays a theater director haunted by making his show as authentic as possible, leading to people getting hurt.
Films in which characters and settings are either digitally enhanced or created entirely from CGI (e.g., 300, where the entire film was shot in front of a blue/green screen, with all settings super-imposed).
In A Clockwork Orange, Alex says, "It's funny how the colors of the real world only seem really real when you viddy them on the screen" (sic) as he undergoes Ludovico's Technique.
A well-manicured garden (nature as hyperreal).
Any massively promoted versions of historical or present "facts" (e.g., "General Ignorance" from QI, where the questions have seemingly obvious answers, which are actually wrong).
Professional sports athletes as super, invincible versions of human beings.
Many world cities and places which did not evolve as functional places with some basis in reality, as if they were creatio ex nihilo (literally 'creation out of nothing'): Black Rock City; Disney World; Dubai; Celebration, Florida; Cancun and Las Vegas.
TV and film in general (especially "reality" TV), due to its creation of a world of fantasy and its dependence that the viewer will engage with these fantasy worlds. The current trend is to glamorize the mundane using histrionics.
A retail store that looks completely stocked and perfect due to facing, creating an illusion of more merchandise than there actually is.
A high end sex doll used as a simulacrum of an unattainable partner.
A newly made building or item designed to look old, or to recreate or reproduce an older artifact, by simulating the feel of age or aging, such as reborn dolls.
Constructed languages (such as E-Prime) or "reconstructed" extinct dialects.
Second Life, where the distinction becomes blurred when it becomes the platform for RL (real life) courses and conferences or leads to real world interactions behind the scenes.
Weak virtual reality.
The superfictional airline company Ingold Airlines.
Works within the spectrum of the Vaporwave musical genre often encompass themes of hyperreality through parody of the information revolution.
Plastic surgery, which can be described as the construction of faces that efface the distinction between "natural" and "artificial" in the syntax of beauty.
Airbrushed images of men and women; for example, Dove's Campaign for Real Beauty.
Heidiland is a region in eastern Switzerland named after the "Heidi" novels by Johanna Spyri, encompassing alpine landscapes, villages, and recreational areas inspired by the story's setting. The labels throughout the village attraction treat Heidi as a historical figure, with few hints of make-believe.
The restaurant Chain, which features nostalgic callbacks to real fast-food chains (in particular Pizza Hut) while being a pastiche of fast-food restaurants from a previous era.
The superfictional video game Petscop.
ChatGPT is highly proficient at confidently generating answers that can be either right or wrong, even blending truth and falsehood, and it can do so in many different formats, from essays to casual conversations. For example, it has argued in all seriousness that the word cat starts with the letter S. See Hallucination (artificial intelligence).
See also
Allegory of the cave
Authenticity (philosophy)
Database consumption
Escapism
Extended reality
Hypersociability
Immersion (virtual reality)
Life imitating art
Marx's theory of alienation
Metamodernism
Metaverse
Post-irony
Post-truth politics
The Real
Real life
Sandbox game
Simulacrum
Simulated reality
Social simulation
Solipsism
Suspension of disbelief
Superficiality
The Symbolic
References
Sources
Further reading
Jean Baudrillard (2001), "The Precession of Simulacra", in Media and Cultural Studies: Keyworks, Durham & Kellner, eds.
D.M. Boje (1995), "Stories of the storytelling organization: a postmodern analysis of Disney as 'Tamara-land'", Academy of Management Journal, 38(4), pp. 997–1035.
Albert Borgmann, Crossing the Postmodern Divide (1992).
George Ritzer, The McDonaldization of Society (2004).
Charles Arthur Willard, Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy. University of Chicago Press, 1996.
Lazaroiu, George. "Cybernews and hyperreality." Economics, Management and Financial Markets 3, no. 2 (2008): 69.
Pasco, M. O. D. (2008). Contemporary Media Society in the Age of Hyperreality. Prajñā Vihāra: Journal of Philosophy and Religion, 9(1).
External links
International Journal of Baudrillard Studies
Baudrillard and Hyperreality; Simulacro y régimen de mortandad en el Sistema de los objetos (Disney World and Hyperreality) (PDF) by Adolfo Vasquez Rocca (in Spanish)
Reality/Hyperreality, The Chicago School of Media Theory
Consensus reality
Information Age
Philosophy of technology
Postmodern theory
Reality
Social networks | Hyperreality | [
"Technology"
] | 5,023 | [
"Information Age",
"Philosophy of technology",
"Science and technology studies",
"Computing and society",
"Hyperreality"
] |
344,116 | https://en.wikipedia.org/wiki/Korteweg%E2%80%93De%20Vries%20equation | In mathematics, the Korteweg–De Vries (KdV) equation is a partial differential equation (PDE) which serves as a mathematical model of waves on shallow water surfaces. It is particularly notable as the prototypical example of an integrable PDE, exhibiting typical behaviors such as a large number of explicit solutions, in particular soliton solutions, and an infinite number of conserved quantities, despite the nonlinearity which typically renders PDEs intractable. The KdV can be solved by the inverse scattering method (ISM). In fact, Clifford Gardner, John M. Greene, Martin Kruskal and Robert Miura developed the classical inverse scattering method to solve the KdV equation.
The KdV equation was first introduced by Joseph Boussinesq and rediscovered by Diederik Korteweg and Gustav de Vries in 1895, who found the simplest solution, the one-soliton solution. Understanding of the equation and the behavior of its solutions was greatly advanced by the computer simulations of Norman Zabusky and Kruskal in 1965 and then by the development of the inverse scattering transform in 1967.
Definition
The KdV equation is a partial differential equation that models (spatially) one-dimensional nonlinear, dispersive, nondissipative waves, described by a function of one space variable and time; a third-derivative term accounts for dispersion, and the nonlinear element is an advection term.
For modelling shallow water waves, the dependent variable is the height displacement of the water surface from its equilibrium height.
The constant in front of the nonlinear term is conventional but of no great significance: rescaling the time, space, and dependent variables by constants can be used to make the coefficients of any of the three terms equal to any given non-zero constants.
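The equation itself was not carried over into this text; for reference, one commonly used normalization (an editorial sketch rather than necessarily the article's own convention) is
\[
\partial_t \varphi \;-\; 6\,\varphi\,\partial_x \varphi \;+\; \partial_x^3 \varphi \;=\; 0 ,
\]
in which the term \(\partial_x^3 \varphi\) supplies the dispersion and \(\varphi\,\partial_x\varphi\) is the nonlinear advection term.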
Soliton solutions
One-soliton solution
Consider solutions in which a fixed waveform, given by a function f, maintains its shape as it travels to the right at phase speed c. Such a travelling-wave solution has the form φ(x, t) = f(x − ct). Substituting it into the KdV equation gives an ordinary differential equation for f, or, integrating once with respect to the travelling coordinate, an equation containing a constant of integration. Interpreting the independent variable as a virtual time variable, this means that f satisfies Newton's equation of motion for a particle of unit mass in a cubic potential.
If the constant of integration is taken to be zero, the potential function has a local maximum at f = 0; there is a solution in which f starts at this point at 'virtual time' minus infinity, eventually slides down to the local minimum, then back up the other side, reaching an equal height, and then reverses direction, ending up at the local maximum again at infinite time. In other words, f approaches 0 as its argument tends to plus or minus infinity. This is the characteristic shape of the solitary wave solution.
More precisely, the solution is a sech²-shaped pulse, where sech stands for the hyperbolic secant and an arbitrary constant fixes the position of the wave; an explicit expression in one common normalization is given below. This describes a right-moving soliton with velocity c.
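In the illustrative normalization used above (an assumption; the article's original sign conventions may differ), substituting φ(x, t) = f(x − ct) gives
\[
-c f' - 6 f f' + f''' = 0
\quad\Longrightarrow\quad
f'' = c f + 3 f^2
\]
after one integration with the constant set to zero, and the solitary-wave solution is
\[
\varphi(x,t) = -\frac{c}{2}\,\operatorname{sech}^2\!\left[\frac{\sqrt{c}}{2}\,\bigl(x - c t - a\bigr)\right], \qquad c > 0,
\]
with a an arbitrary constant fixing the initial position of the wave.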
N-soliton solution
There is a known expression for a solution which is an N-soliton solution, which at late times resolves into N separate single solitons. The solution depends on an ordered, decreasing set of N positive parameters and a set of N non-zero parameters. It is given in terms of an N × N matrix whose components are built from exponentials in x and t involving these parameters.
This is derived using the inverse scattering method.
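The matrix form referred to above is not reproduced here. An equivalent, widely quoted representation (a sketch in the same assumed normalization as above, written out for two solitons) is Hirota's tau-function form,
\[
\varphi(x,t) = -2\,\partial_x^2 \log \tau(x,t), \qquad
\tau = 1 + e^{\eta_1} + e^{\eta_2} + \left(\frac{k_1 - k_2}{k_1 + k_2}\right)^{2} e^{\eta_1 + \eta_2},
\qquad \eta_i = k_i x - k_i^3 t + \eta_i^{(0)},
\]
which at late times separates into two sech²-shaped solitons travelling with speeds k₁² and k₂².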
Integrals of motion
The KdV equation has infinitely many integrals of motion: functionals of a solution which do not change with time. They can be given explicitly as spatial integrals of polynomials in the solution and its derivatives, where the polynomials are defined recursively.
The first few integrals of motion are (explicit expressions in one common normalization are given below):
the mass,
the momentum,
the energy.
Only the odd-numbered terms result in non-trivial (meaning non-zero) integrals of motion.
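The explicit expressions and the recursion were not carried over; in the same assumed normalization as the sketches above (and up to conventional constant factors), the first three conserved quantities can be written as
\[
\int_{-\infty}^{\infty} \varphi \, dx, \qquad
\int_{-\infty}^{\infty} \varphi^2 \, dx, \qquad
\int_{-\infty}^{\infty} \Bigl( 2\varphi^3 + (\partial_x \varphi)^2 \Bigr)\, dx ,
\]
corresponding respectively to the mass, momentum, and energy listed above.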
Lax pairs
The KdV equation can be reformulated as the Lax equation ∂L/∂t = [A, L] for a pair of operators L and A, where L is a Sturm–Liouville operator and [A, L] = AL − LA is the commutator. The Lax pair accounts for the infinite number of first integrals of the KdV equation.
In fact, L is the time-independent Schrödinger operator (disregarding constants) with potential given by the KdV field. It can be shown from this Lax formulation that the eigenvalues of L do not depend on time.
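One explicit choice of the operators (a sketch in the assumed normalization φ_t − 6φφ_x + φ_xxx = 0; other sign conventions give equivalent pairs) is
\[
L = -\partial_x^2 + \varphi, \qquad
A = -4\,\partial_x^3 + 6\,\varphi\,\partial_x + 3\,\varphi_x, \qquad
\partial_t L = [A, L] = AL - LA ,
\]
where the last term of A acts by multiplication; reading off the operator identity ∂_t L = [A, L] as a multiplication operator reproduces the KdV equation.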
Zero-curvature representation
By choosing the components of the Lax connection appropriately (one component built from the KdV field and a spectral parameter, the other from the field and its derivatives), the KdV equation is equivalent to the zero-curvature equation for the Lax connection.
Least action principle
The Korteweg–De Vries equation is the Euler–Lagrange equation of motion derived from a Lagrangian density in an auxiliary field ψ, with the KdV field φ defined as the spatial derivative of ψ.
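The Lagrangian density and the field redefinition are not reproduced in the text above; a standard choice (a sketch in the same assumed normalization, with ψ an auxiliary potential field) is
\[
\mathcal{L} = \tfrac{1}{2}\,(\partial_x\psi)(\partial_t\psi) - (\partial_x\psi)^3 - \tfrac{1}{2}\,(\partial_x^2\psi)^2 ,
\qquad \varphi = \partial_x \psi ,
\]
whose Euler–Lagrange equation reproduces the KdV equation for φ; this plays the role of eq (1) in the derivation below.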
Since the Lagrangian (eq (1)) contains second derivatives, the Euler–Lagrange equation of motion for this field (eq (2)) involves terms in both the first and the second derivatives of ψ, where ∂_μ denotes the partial derivative with respect to the μ-th coordinate. A sum over the repeated index μ is implied, so eq (2) really reads out, term by term, as eq (3).
Evaluate the five terms of eq (3) by plugging in eq (1),
Remember the definition of φ as the spatial derivative of ψ, and use it to simplify the above terms:
Finally, plug these three non-zero terms back into eq (3) to see
which is exactly the KdV equation
Long-time asymptotics
It can be shown that any sufficiently fast decaying smooth solution will eventually split into a finite superposition of solitons travelling to the right plus a decaying dispersive part travelling to the left. This was first observed numerically and can be rigorously proven using the nonlinear steepest descent analysis for oscillatory Riemann–Hilbert problems.
History
The history of the KdV equation started with experiments by John Scott Russell in 1834, followed by theoretical investigations by Lord Rayleigh and Joseph Boussinesq around 1870 and, finally, Korteweg and De Vries in 1895.
The KdV equation was not studied much after this until Zabusky and Kruskal (1965) discovered numerically that its solutions seemed to decompose at large times into a collection of "solitons": well-separated solitary waves. Moreover, the solitons seem to be almost unaffected in shape by passing through each other (though this can cause a change in their position). They also made the connection to earlier numerical experiments by Fermi, Pasta, Ulam, and Tsingou by showing that the KdV equation was the continuum limit of the FPUT system. Development of the analytic solution by means of the inverse scattering transform was done in 1967 by Gardner, Greene, Kruskal and Miura.
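As an illustration of the kind of numerical experiment described above, the following sketch (editorial, not from the article; the grid size, domain length, time step, and soliton speed are assumed values) integrates the KdV equation, here written as u_t + 6uu_x + u_xxx = 0, with a Fourier pseudospectral discretization and an integrating-factor Runge–Kutta scheme, starting from a single sech² soliton:
# Sketch only: pseudospectral integration of u_t + 6 u u_x + u_xxx = 0 on a
# periodic domain, following the standard integrating-factor RK4 approach.
# All numerical parameters below are illustrative assumptions.
import numpy as np

N, L = 256, 50.0                                  # grid points, domain length
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)      # spectral wavenumbers

c = 1.0                                           # soliton speed
u0 = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 0.25 * L)) ** 2   # sech^2 profile

dt, nsteps = 1.0e-3, 5000
E = np.exp(1j * k**3 * dt / 2.0)                  # half-step propagator for the linear u_xxx term
E2 = E**2
g = -3.0j * dt * k                                # from -6 u u_x = -3 (u^2)_x in Fourier space

v = np.fft.fft(u0)
for _ in range(nsteps):                           # integrating-factor RK4
    a = g * np.fft.fft(np.real(np.fft.ifft(v)) ** 2)
    b = g * np.fft.fft(np.real(np.fft.ifft(E * (v + a / 2.0))) ** 2)
    c_ = g * np.fft.fft(np.real(np.fft.ifft(E * v + b / 2.0)) ** 2)
    d = g * np.fft.fft(np.real(np.fft.ifft(E2 * v + E * c_)) ** 2)
    v = E2 * v + (E2 * a + 2.0 * E * (b + c_) + d) / 6.0

u_final = np.real(np.fft.ifft(v))                 # pulse translated by roughly c * dt * nsteps
On a periodic domain the pulse should simply translate at speed c while keeping its shape; adding a second, faster soliton to the initial data lets one reproduce the pass-through behaviour described above.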
The KdV equation is now seen to be closely connected to Huygens' principle.
Applications and connections
The KdV equation has several connections to physical problems. In addition to being the governing equation of the string in the Fermi–Pasta–Ulam–Tsingou problem in the continuum limit, it approximately describes the evolution of long, one-dimensional waves in many physical settings, including:
shallow-water waves with weakly non-linear restoring forces,
long internal waves in a density-stratified ocean,
ion acoustic waves in a plasma,
acoustic waves on a crystal lattice.
The KdV equation can also be solved using the inverse scattering transform, the same technique that is applied to the non-linear Schrödinger equation.
KdV equation and the Gross–Pitaevskii equation
Considering simplified solutions of travelling-wave form, the KdV equation can be rewritten as an ordinary differential equation. Integrating once and taking the special case in which the integration constant is zero gives an equation which is a special case of the generalized stationary Gross–Pitaevskii equation (GPE).
Therefore, for a certain class of solutions of the generalized GPE (for the true one-dimensional condensate, and while using the three-dimensional equation in one dimension), the two equations are one. Furthermore, taking the case with the minus sign and a real field, one obtains an attractive self-interaction that should yield a bright soliton.
Variations
Many different variations of the KdV equation have been studied, including the modified, cylindrical, and fifth-order KdV equations mentioned in the links below.
See also
Advection-diffusion equation
Benjamin–Bona–Mahony equation
Boussinesq approximation (water waves)
Cnoidal wave
Dispersion (water waves)
Dispersionless equation
Fifth-order Korteweg–De Vries equation
Kadomtsev–Petviashvili equation
KdV hierarchy
Modified KdV–Burgers equation
Novikov–Veselov equation
Schamel equation
Ursell number
Vector soliton
Notes
References
External links
Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Korteweg–De Vries equation at NEQwiki, the nonlinear equations encyclopedia.
Cylindrical Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Modified Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Modified Korteweg–De Vries equation at NEQwiki, the nonlinear equations encyclopedia.
Derivation of the Korteweg–De Vries equation for a narrow canal.
Three Solitons Solution of KdV Equation
Three Solitons (unstable) Solution of KdV Equation
Mathematical aspects of equations of Korteweg–De Vries type are discussed on the Dispersive PDE Wiki.
Solitons from the Korteweg–De Vries Equation by S. M. Blinder, The Wolfram Demonstrations Project.
Solitons & Nonlinear Wave Equations
Eponymous equations of physics
Partial differential equations
Exactly solvable models
Integrable systems
Solitons
Equations of fluid dynamics | Korteweg–De Vries equation | [
"Physics",
"Chemistry"
] | 1,904 | [
"Equations of fluid dynamics",
"Equations of physics",
"Integrable systems",
"Theoretical physics",
"Eponymous equations of physics",
"Fluid dynamics"
] |
344,123 | https://en.wikipedia.org/wiki/Heat%20sink | A heat sink (also commonly spelled heatsink) is a passive heat exchanger that transfers the heat generated by an electronic or a mechanical device to a fluid medium, often air or a liquid coolant, where it is dissipated away from the device, thereby allowing regulation of the device's temperature. In computers, heat sinks are used to cool CPUs, GPUs, and some chipsets and RAM modules. Heat sinks are used with other high-power semiconductor devices such as power transistors and optoelectronics such as lasers and light-emitting diodes (LEDs), where the heat dissipation ability of the component itself is insufficient to moderate its temperature.
A heat sink is designed to maximize its surface area in contact with the cooling medium surrounding it, such as the air. Air velocity, choice of material, protrusion design and surface treatment are factors that affect the performance of a heat sink. Heat sink attachment methods and thermal interface materials also affect the die temperature of the integrated circuit. Thermal adhesive or thermal paste improve the heat sink's performance by filling air gaps between the heat sink and the heat spreader on the device. A heat sink is usually made out of a material with a high thermal conductivity, such as aluminium or copper.
Heat transfer principle
A heat sink transfers thermal energy from a higher-temperature device to a lower-temperature fluid medium. The fluid medium is frequently air, but can also be water, refrigerants, or even oil. If the fluid medium is water, the heat sink is frequently called a cold plate. In thermodynamics a heat sink is a heat reservoir that can absorb an arbitrary amount of heat without significantly changing temperature. Practical heat sinks for electronic devices must have a temperature higher than the surroundings to transfer heat by convection, radiation, and conduction. The power supplies of electronics are not absolutely efficient, so extra heat is produced that may be detrimental to the function of the device. As such, a heat sink is included in the design to disperse heat.
Fourier's law of heat conduction shows that when there is a temperature gradient in a body, heat will be transferred from the higher-temperature region to the lower-temperature region. The rate at which heat is transferred by conduction is proportional to the product of the temperature gradient and the cross-sectional area through which heat is transferred. When simplified to a one-dimensional form in the x direction, it can be expressed as shown below.
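The one-dimensional expression itself was not carried over; in the symbols conventionally used for this law (chosen here for illustration), it reads
\[
q_x = -k\,A\,\frac{dT}{dx},
\]
where q_x is the conductive heat transfer rate in the x direction (W), k is the thermal conductivity (W/(m·K)), A is the cross-sectional area normal to the heat flow, and dT/dx is the temperature gradient.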
For a heat sink in a duct, where air flows through the duct, the heat-sink base will usually be hotter than the air flowing through the duct. Applying the conservation of energy, for steady-state conditions, and Newton's law of cooling to the temperature nodes shown in the diagram gives the following set of equations:
where
ṁ is the air mass flow rate in kg/s,
c_p,in is the specific heat capacity of the incoming air, in J/(kg °C),
R_hs is the thermal resistance of the heat sink.
Using the mean air temperature is an assumption that is valid for relatively short heat sinks. When compact heat exchangers are calculated, the logarithmic mean air temperature is used.
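The set of equations itself was not carried over; a standard energy-balance form for a heat sink in a duct (a sketch using the symbols listed above, plus assumed names for the temperatures and the heat load) is
\[
\dot{Q} = \dot{m}\, c_{p,\mathrm{in}} \left( T_{\mathrm{air,out}} - T_{\mathrm{air,in}} \right), \qquad
T_{\mathrm{air,av}} = \frac{T_{\mathrm{air,in}} + T_{\mathrm{air,out}}}{2}, \qquad
T_{\mathrm{base}} = T_{\mathrm{air,av}} + \dot{Q}\, R_{\mathrm{hs}} ,
\]
where Q̇ is the heat dissipated into the sink and T_base is the heat-sink base temperature.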
The above equations show that:
When the air flow through the heat sink decreases, this results in an increase in the average air temperature. This in turn increases the heat-sink base temperature. In addition, the thermal resistance of the heat sink will also increase. The net result is a higher heat-sink base temperature.
The increase in heat-sink thermal resistance with decrease in flow rate will be shown later in this article.
The inlet air temperature relates strongly with the heat-sink base temperature. For example, if there is recirculation of air in a product, the inlet air temperature is not the ambient air temperature. The inlet air temperature of the heat sink is therefore higher, which also results in a higher heat-sink base temperature.
If there is no air flow around the heat sink, energy cannot be transferred.
A heat sink is not a device with the "magical ability to absorb heat like a sponge and send it off to a parallel universe".
Natural convection requires free flow of air over the heat sink. If fins are not aligned vertically, or if fins are too close together to allow sufficient air flow between them, the efficiency of the heat sink will decline.
Design factors
Thermal resistance
For semiconductor devices used in a variety of consumer and industrial electronics, the idea of thermal resistance simplifies the selection of heat sinks. The heat flow between the semiconductor die and ambient air is modeled as a series of resistances to heat flow; there is a resistance from the die to the device case, from the case to the heat sink, and from the heat sink to the ambient air. The sum of these resistances is the total thermal resistance from the die to the ambient air. Thermal resistance is defined as temperature rise per unit of power, analogous to electrical resistance, and is expressed in units of degrees Celsius per watt (°C/W). If the device dissipation in watts is known, and the total thermal resistance is calculated, the temperature rise of the die over the ambient air can be calculated.
The idea of thermal resistance of a semiconductor heat sink is an approximation. It does not take into account non-uniform distribution of heat over a device or heat sink. It only models a system in thermal equilibrium and does not take into account the change in temperatures with time. Nor does it reflect the non-linearity of radiation and convection with respect to temperature rise. However, manufacturers tabulate typical values of thermal resistance for heat sinks and semiconductor devices, which allows selection of commercially manufactured heat sinks to be simplified.
Commercial extruded aluminium heat sinks have heat-sink-to-ambient thermal resistances ranging from very low values for a large sink meant for TO-3 devices up to much higher values for a clip-on heat sink for a TO-92 small plastic case. The popular 2N3055 power transistor in a TO-3 case has a specified internal thermal resistance from junction to case, and the contact between the device case and the heat sink adds a further thermal resistance that depends on the case size and on the use of grease or an insulating mica washer.
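As a worked illustration of the series thermal-resistance model described above (all numbers below are assumed for illustration and are not the manufacturer values referred to in the text):
# hypothetical example: junction temperature from a series thermal-resistance chain
P = 20.0            # power dissipated by the device, W (assumed)
R_jc = 1.5          # junction-to-case resistance, degC/W (assumed)
R_cs = 0.5          # case-to-sink (interface) resistance, degC/W (assumed)
R_sa = 2.0          # sink-to-ambient resistance, degC/W (assumed)
T_ambient = 25.0    # ambient air temperature, degC (assumed)

T_junction = T_ambient + P * (R_jc + R_cs + R_sa)   # 25 + 20 * 4.0 = 105 degC
print(T_junction)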
Material
The materials for heat sink applications should have high heat capacity and thermal conductivity in order to absorb more heat energy without shifting towards a very high temperature and transmit it to the environment for efficient cooling. The most common heat sink materials are aluminium alloys. Aluminium alloy 1050 has one of the higher thermal conductivity values at 229 W/(m·K) and heat capacity of 922 J/(kg·K), but is mechanically soft. Aluminium alloys 6060 (low-stress), 6061, and 6063 are commonly used, with thermal conductivity values of 166 and 201 W/(m·K) respectively. The values depend on the temper of the alloy. One-piece aluminium heat sinks can be made by extrusion, casting, skiving or milling.
Copper has excellent heat-sink properties in terms of its thermal conductivity, corrosion resistance, biofouling resistance, and antimicrobial resistance (see also Copper in heat exchangers). Copper has around twice the thermal conductivity of aluminium, around 400 W/(m·K) for pure copper. Its main applications are in industrial facilities, power plants, solar thermal water systems, HVAC systems, gas water heaters, forced air heating and cooling systems, geothermal heating and cooling, and electronic systems.
Copper is three times as dense as aluminium and more expensive, and it is less ductile than aluminium. One-piece copper heat sinks can be made by skiving or milling. Sheet-metal fins can be soldered onto a rectangular copper body.
Fin efficiency
Fin efficiency is one of the parameters that makes a higher-thermal-conductivity material important. A fin of a heat sink may be considered to be a flat plate with heat flowing in one end and being dissipated into the surrounding fluid as it travels to the other. As heat flows through the fin, the combination of the fin's thermal resistance impeding the flow and the heat lost to convection causes the temperature of the fin, and therefore the heat transfer to the fluid, to decrease from the base to the tip of the fin. Fin efficiency is defined as the actual heat transferred by the fin, divided by the heat transfer that would occur were the fin isothermal (hypothetically, the fin having infinite thermal conductivity). These equations are applicable for straight fins (an explicit common form is given after the list of symbols):
where
hf is the convection coefficient of the fin:
10 to 100 W/(m2·K) in air,
500 to 10,000 W/(m2·K) in water,
k is the thermal conductivity of the fin material:
120 to 240 W/(m·K) for aluminium,
Lf is the fin height (m),
tf is the fin thickness (m).
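The equations themselves were not carried over; for a straight rectangular fin convecting from both faces, a commonly used form consistent with the symbols above is
\[
\eta_f = \frac{\tanh\!\left(m L_f\right)}{m L_f}, \qquad
m = \sqrt{\frac{2 h_f}{k\, t_f}} ,
\]
where the factor of 2 accounts for convection from the two sides of a thin fin.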
Fin efficiency is increased by decreasing the fin aspect ratio (making them thicker or shorter), or by using a more conductive material (copper instead of aluminium, for example).
Spreading resistance
Another parameter that concerns the thermal conductivity of the heat-sink material is spreading resistance. Spreading resistance occurs when thermal energy is transferred from a small area to a larger area in a substance with finite thermal conductivity. In a heat sink, this means that heat does not distribute uniformly through the heat-sink base. The spreading resistance phenomenon is shown by how the heat travels from the heat source location and causes a large temperature gradient between the heat source and the edges of the heat sink. This means that some fins are at a lower temperature than if the heat source were uniform across the base of the heat sink. This nonuniformity increases the heat sink's effective thermal resistance.
To decrease the spreading resistance in the base of a heat sink:
increase the base thickness,
choose a different material with higher thermal conductivity,
use a vapor chamber or heat pipe in the heat sink base.
Fin arrangements
A pin fin heat sink is a heat sink that has pins that extend from its base. The pins can be cylindrical, elliptical, or square. A second type of heat sink fin arrangement is the straight fin. A variation on the straight fin heat sink is a cross-cut heat sink. A third type of heat sink is the flared fin heat sink, where the fins are not parallel to one another. Flaring the fins decreases flow resistance and makes more air go through the heat-sink fin channel; otherwise, more air would bypass the fins. Slanting them keeps the overall dimensions the same, but offers longer fins. Examples of the three types are shown in the image on the right.
Forghan, et al. have published data on tests conducted on pin fin, straight fin, and flared fin heat sinks. They found that for low air approach velocity, typically around 1 m/s, the thermal performance is at least 20% better than straight fin heat sinks. Lasance and Eggink also found that for the bypass configurations that they tested, the flared heat sink performed better than the other heat sinks tested.
Generally, the more surface area a heat sink has, the better its performance. Real-world performance depends on the design and application. The concept of a pin fin heat sink is to pack as much surface area into a given volume as possible, while working in any orientation of fluid flow. Kordyban has compared the performance of a pin-fin and a straight-fin heat sink of similar dimensions. Although the pin fin has 194 cm2 of surface area while the straight fin has 58 cm2, the temperature difference between the heat-sink base and the ambient air for the pin fin is 50 °C, whereas for the straight fin it was 44 °C, or 6 °C better than the pin fin. Pin fin heat sink performance is significantly better than straight fins when used in their optimal application, where the fluid flows axially along the pins rather than only tangentially across the pins.
Cavities (inverted fins)
Cavities (inverted fins) embedded in a heat source are the regions formed between adjacent fins that stand for the essential promoters of nucleate boiling or condensation. These cavities are usually utilized to extract heat from a variety of heat-generating bodies to a heat sink.
Conductive thick plate between the heat source and the heat sink
Placing a conductive thick plate as a heat-transfer interface between a heat source and a cold flowing fluid (or any other heat sink) may improve the cooling performance. In such an arrangement, the heat source is cooled under the thick plate instead of being cooled in direct contact with the cooling fluid. It is shown that the thick plate can significantly improve the heat transfer between the heat source and the cooling fluid by conducting the heat current in an optimal manner. The two most attractive advantages of this method are that it requires no additional pumping power and no extra heat-transfer surface area, which is quite different from fins (extended surfaces).
Surface color
The heat transfer from the heat sink occurs by convection of the surrounding air, conduction through the air, and radiation.
Heat transfer by radiation is a function of both the heat-sink temperature and the temperature of the surroundings that the heat sink is optically coupled with. When both of these temperatures are on the order of 0 °C to 100 °C, the contribution of radiation compared to convection is generally small, and this factor is often neglected. In this case, finned heat sinks operating in either natural-convection or forced-flow will not be affected significantly by surface emissivity.
In situations where convection is low, such as a flat non-finned panel with low airflow, radiative cooling can be a significant factor. Here the surface properties may be an important design factor. Matte-black surfaces radiate much more efficiently than shiny bare metal. A shiny metal surface has low emissivity. The emissivity of a material is tremendously frequency-dependent and is related to absorptivity (of which shiny metal surfaces have very little). For most materials, the emissivity in the visible spectrum is similar to the emissivity in the infrared spectrum; however, there are exceptions, notably certain metal oxides that are used as "selective surfaces".
In vacuum or outer space, there is no convective heat transfer, thus in these environments, radiation is the only factor governing heat flow between the heat sink and the environment. For a satellite in space, a surface facing the Sun will absorb a lot of radiant heat, because the Sun's surface temperature is nearly 6000 K, whereas the same surface facing deep space will radiate a lot of heat, since deep space has an effective temperature of only several Kelvin.
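A rough back-of-the-envelope comparison of the radiative and convective contributions for a small flat plate (a sketch with assumed values, using the Stefan–Boltzmann law and Newton's law of cooling) illustrates why emissivity matters mainly when convection is weak:
# hypothetical flat plate: compare radiative and convective heat loss
sigma = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
A = 0.01                  # plate area, m^2 (assumed)
eps = 0.9                 # emissivity of a matte-black surface (assumed)
h = 10.0                  # natural-convection coefficient, W/(m^2 K) (assumed)
T_s, T_amb = 350.0, 300.0 # surface and surroundings temperatures, K (assumed)

P_rad = eps * sigma * A * (T_s**4 - T_amb**4)   # ~ 3.5 W
P_conv = h * A * (T_s - T_amb)                  # 5.0 W
print(P_rad, P_conv)
With a matte surface (emissivity around 0.9) the two mechanisms are comparable here; for shiny bare metal (emissivity of a few percent) the radiative term becomes negligible, consistent with the discussion above.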
Engineering applications
Microprocessor cooling
Heat dissipation is an unavoidable by-product of electronic devices and circuits. In general, the temperature of the device or component will depend on the thermal resistance from the component to the environment, and the heat dissipated by the component. To ensure that the component does not overheat, a thermal engineer seeks to find an efficient heat transfer path from the device to the environment. The heat transfer path may be from the component to a printed circuit board (PCB), to a heat sink, to air flow provided by a fan, but in all instances, eventually to the environment.
Two additional design factors also influence the thermal/mechanical performance of the thermal design:
The method by which the heat sink is mounted on a component or processor. This will be discussed under the section attachment methods.
For each interface between two objects in contact with each other, there will be a temperature drop across the interface. For such composite systems, the temperature drop across the interface may be appreciable. This temperature change may be attributed to what is known as the thermal contact resistance. Thermal interface materials (TIM) decrease the thermal contact resistance.
Attachment methods
As power dissipation of components increases and component package size decreases, thermal engineers must innovate to ensure components won't overheat. Devices that run cooler last longer. A heat sink design must fulfill both its thermal as well as its mechanical requirements. Concerning the latter, the component must remain in thermal contact with its heat sink with reasonable shock and vibration. The heat sink could be the copper foil of a circuit board, or a separate heat sink mounted onto the component or circuit board. Attachment methods include thermally conductive tape or epoxy, wire-form z clips, flat spring clips, standoff spacers, and push pins with ends that expand after installing.
Thermally conductive tape
Thermally conductive tape is one of the most cost-effective heat sink attachment materials. It is suitable for low-mass heat sinks and for components with low power dissipation. It consists of a thermally conductive carrier material with a pressure-sensitive adhesive on each side.
This tape is applied to the base of the heat sink, which is then attached to the component. Following are factors that influence the performance of thermal tape:
Surfaces of both the component and heat sink must be clean, with no residue such as a film of silicone grease.
Preload pressure is essential to ensure good contact. Insufficient pressure results in areas of non-contact with trapped air, and results in higher-than-expected interface thermal resistance.
Thicker tapes tend to provide better "wettability" with uneven component surfaces. "Wettability" is the percentage area of contact of a tape on a component. Thicker tapes, however, have a higher thermal resistance than thinner tapes. From a design standpoint, it is best to strike a balance by selecting a tape thickness that provides maximum "wettability" with minimum thermal resistance.
Epoxy
Epoxy is more expensive than tape, but provides a greater mechanical bond between the heat sink and component, as well as improved thermal conductivity. The epoxy chosen must be formulated for this purpose. Most epoxies are two-part liquid formulations that must be thoroughly mixed before being applied to the heat sink, and before the heat sink is placed on the component. The epoxy is then cured for a specified time, which can vary from 2 hours to 48 hours. Faster cure time can be achieved at higher temperatures. The surfaces to which the epoxy is applied must be clean and free of any residue.
The epoxy bond between the heat sink and component is semi-permanent/permanent. This makes re-work very difficult and at times impossible. The most typical damage caused by rework is the separation of the component die heat spreader from its package.
Wire form Z-clips
More expensive than tape and epoxy, wire form z-clips attach heat sinks mechanically. To use the z-clips, the printed circuit board must have anchors. Anchors can be either soldered onto the board, or pushed through. Either type requires holes to be designed into the board. The use of RoHS solder must be allowed for because such solder is mechanically weaker than traditional Pb/Sn solder.
To assemble with a z-clip, attach one side of it to one of the anchors. Deflect the spring until the other side of the clip can be placed in the other anchor. The deflection develops a spring load on the component, which maintains very good contact. In addition to the mechanical attachment that the z-clip provides, it also permits using higher-performance thermal interface materials, such as phase change types.
Clips
Available for processors and ball grid array (BGA) components, clips allow the attachment of a BGA heat sink directly to the component. The clips make use of the gap created by the ball grid array (BGA) between the component underside and PCB top surface. The clips therefore require no holes in the PCB. They also allow for easy rework of components.
Push pins with compression springs
For larger heat sinks and higher preloads, push pins with compression springs are very effective. The push pins, typically made of brass or plastic, have a flexible barb at the end that engages with a hole in the PCB; once installed, the barb retains the pin. The compression spring holds the assembly together and maintains contact between the heat sink and component. Care is needed in selection of push pin size. Too great an insertion force can result in the die cracking and consequent component failure.
Threaded standoffs with compression springs
For very large heat sinks, there is no substitute for the threaded standoff and compression spring attachment method. A threaded standoff is essentially a hollow metal tube with internal threads. One end is secured with a screw through a hole in the PCB. The other end accepts a screw which compresses the spring, completing the assembly. A typical heat sink assembly uses two to four standoffs, which tends to make this the most costly heat sink attachment design. Another disadvantage is the need for holes in the PCB.
Thermal interface materials
Thermal contact resistance occurs due to the voids created by surface roughness effects, defects and misalignment of the interface. The voids present in the interface are filled with air. Heat transfer is therefore due to conduction across the actual contact area and to conduction (or natural convection) and radiation across the gaps. If the contact area is small, as it is for rough surfaces, the major contribution to the resistance is made by the gaps. To decrease the thermal contact resistance, the surface roughness can be decreased while the interface pressure is increased. However, these improving methods are not always practical or possible for electronic equipment. Thermal interface materials (TIM) are a common way to overcome these limitations.
Properly applied thermal interface materials displace the air that is present in the gaps between the two objects with a material that has a much-higher thermal conductivity. Air has a thermal conductivity of 0.022 W/(m·K) while TIMs have conductivities of 0.3 W/(m·K) and higher.
When selecting a TIM, care must be taken with the values supplied by the manufacturer. Most manufacturers give a value for the thermal conductivity of a material. However, the thermal conductivity does not take into account the interface resistances. Therefore, if a TIM has a high thermal conductivity, it does not necessarily mean that the interface resistance will be low.
Selection of a TIM is based on three parameters: the interface gap which the TIM must fill, the contact pressure, and the electrical resistivity of the TIM. The contact pressure is the pressure applied to the interface between the two materials. The selection does not include the cost of the material. Electrical resistivity may be important depending upon electrical design details.
Light-emitting diode lamps
Light-emitting diode (LED) performance and lifetime are strong functions of their temperature. Effective cooling is therefore essential. A case study of an LED-based downlighter shows an example of the calculations done to determine the heat sink required for effective cooling of the lighting system. The article also shows that, in order to have confidence in the results, multiple independent solutions are required that give similar results. Specifically, results of the experimental, numerical, and theoretical methods should all be within 10% of each other to give high confidence in the results.
In soldering
Temporary heat sinks are sometimes used while soldering circuit boards, preventing excessive heat from damaging sensitive nearby electronics. In the simplest case, this means partially gripping a component using a heavy metal crocodile clip, hemostat, or similar clamp. Modern semiconductor devices, which are designed to be assembled by reflow soldering, can usually tolerate soldering temperatures without damage. On the other hand, electrical components such as magnetic reed switches can malfunction if exposed to hotter soldering irons, so this practice is still very much in use.
Methods to determine performance
In general, a heat sink performance is a function of material thermal conductivity, dimensions, fin type, heat transfer coefficient, air flow rate, and duct size. To determine the thermal performance of a heat sink, a theoretical model can be made. Alternatively, the thermal performance can be measured experimentally. Due to the complex nature of the highly 3D flow in present applications, numerical methods or computational fluid dynamics (CFD) can also be used. This section will discuss the aforementioned methods for the determination of the heat sink thermal performance.
A heat transfer theoretical model
One of the methods to determine the performance of a heat sink is to use heat transfer and fluid dynamics theory. One such method has been published by Jeggels, et al., though this work is limited to ducted flow. Ducted flow is where the air is forced to flow through a channel which fits tightly over the heat sink. This makes sure that all the air goes through the channels formed by the fins of the heat sink. When the air flow is not ducted, a certain percentage of air flow will bypass the heat sink. Flow bypass was found to increase with increasing fin density and clearance, while remaining relatively insensitive to inlet duct velocity.
The heat sink thermal resistance model consists of two resistances, namely the resistance in the heat sink base, R_b, and the resistance in the fins, R_f. The heat sink base thermal resistance, R_b, can be written as follows if the heat source is applied uniformly over the heat sink base (if it is not, the base resistance is primarily spreading resistance):
R_b = t_b / (k A_b)   (4)
where t_b is the heat sink base thickness, k is the heat sink material thermal conductivity and A_b is the area of the heat sink base.
The thermal resistance from the base of the fins to the air, , can be calculated by the following formulas:
(5)
(6)
(7)
(8)
(9)
(10)
(11)
(12)
(13)
The flow rate can be determined by the intersection of the heat sink system curve and the fan curve. The heat sink system curve can be calculated by the flow resistance of the channels and inlet and outlet losses as done in standard fluid mechanics text books, such as Potter, et al. and White.
Once the heat sink base and fin resistances are known, the overall heat sink thermal resistance, R_hs, can be calculated as the series sum:
R_hs = R_b + R_f   (14).
Using equations 5 to 13 and the dimensional data from the study, the thermal resistance for the fins was calculated for various air flow rates, showing that for an increasing air flow rate the thermal resistance of the heat sink decreases.
Experimental methods
Experimental tests are one of the more popular ways to determine the heat sink thermal performance. In order to determine the heat sink thermal resistance, the flow rate, input power, inlet air temperature and heat sink base temperature need to be known. Vendor-supplied data is commonly provided for ducted test results. However, the results are optimistic and can give misleading data when heat sinks are used in an unducted application. More details on heat sink testing methods and common oversights can be found in Azar, et al.
Numerical methods
In industry, thermal analyses are often ignored in the design process or performed too late, when design changes are limited and become too costly. Of the three methods mentioned in this article, theoretical and numerical methods can be used to estimate the heat sink or component temperatures of products before a physical model has been made. A theoretical model is normally used as a first-order estimate. Online heat sink calculators can provide a reasonable estimate of forced and natural convection heat sink performance based on a combination of theoretical and empirically derived correlations. Numerical methods or computational fluid dynamics (CFD) provide a qualitative (and sometimes even quantitative) prediction of fluid flows: they give a visual or post-processed result of a simulation, but the quantitative or absolute accuracy of the result is sensitive to the inclusion and accuracy of the appropriate parameters.
CFD can give an insight into flow patterns that are difficult, expensive or impossible to study using experimental methods. Experiments can give a quantitative description of flow phenomena using measurements for one quantity at a time, at a limited number of points and time instances. If a full-scale model is not available or not practical, scale models or dummy models can be used. The experiments can have a limited range of problems and operating conditions. Simulations can give a prediction of flow phenomena using CFD software for all desired quantities, with high resolution in space and time and virtually any problem and realistic operating conditions. However, if critical, the results may need to be validated.
See also
Computer cooling
Heat spreader
Heat pipe
Heat pump
Thermal conductivity of diamond
Radiator
Thermal interface material
Thermal management (electronics)
Thermal resistance
Thermoelectric cooling
References
External links
Computer hardware cooling
Heat exchangers
Heat transfer | Heat sink | [
"Physics",
"Chemistry",
"Engineering"
] | 5,896 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Chemical equipment",
"Heat exchangers",
"Thermodynamics"
] |
344,127 | https://en.wikipedia.org/wiki/Signal%20trace | In electronics, a signal trace or circuit trace on a printed circuit board (PCB) or integrated circuit (IC) is the equivalent of a wire for conducting signals. Each trace consists of a flat, narrow part of the copper foil that remains after etching. Signal traces are usually narrower than power or ground traces because the current carrying requirements are usually much less.
See also
Ground plane
Stripline
Microstrip
References
Electrical connectors
Printed circuit board manufacturing | Signal trace | [
"Engineering"
] | 92 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
344,136 | https://en.wikipedia.org/wiki/Electrical%20termination | In electronics, electrical termination is the practice of ending a transmission line with a device that matches the characteristic impedance of the line. Termination prevents signals from reflecting off the end of the transmission line. Reflections at the ends of unterminated transmission lines cause distortion, which can produce ambiguous digital signal levels and misoperation of digital systems. Reflections in analog signal systems cause such effects as video ghosting, or power loss in radio transmitter transmission lines.
Transmission lines
Signal termination often requires the installation of a terminator at the beginning and end of a wire or cable to prevent an RF signal from being reflected back from each end, causing interference, or power loss. The terminator is usually placed at the end of a transmission line or daisy chain bus (such as in SCSI), and is designed to match the AC impedance of the cable and hence minimize signal reflections, and power losses. Less commonly, a terminator is also placed at the driving end of the wire or cable, if not already part of the signal-generating equipment.
Radio frequency currents tend to reflect from discontinuities in the cable, such as connectors and joints, and travel back down the cable toward the source, causing interference known as primary reflections. Secondary reflections can also occur at the start of the cable, allowing interference to persist as repeated echoes of old data. These reflections also act as bottlenecks, preventing the signal power from reaching the destination.
Transmission line cables require impedance matching to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission line cables is that they have uniform cross-sectional dimensions along their length, giving them a uniform electrical characteristic impedance. Signal terminators are designed to specifically match the characteristic impedances at both cable ends. For many systems, the terminator is a resistor, with a value chosen to match the characteristic impedance of the transmission line and chosen to have acceptably low parasitic inductance and capacitance at the frequencies relevant to the system. Examples include 75-ohm resistors often used to terminate 75-ohm video transmission coaxial cables.
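The degree of mismatch can be quantified with the standard voltage reflection coefficient of transmission-line theory (given here as general background, not as text from the original article):
\[
\Gamma = \frac{Z_L - Z_0}{Z_L + Z_0},
\]
where Z_0 is the characteristic impedance of the line and Z_L is the terminating impedance; a matched termination (Z_L = Z_0) gives Γ = 0, so no power is reflected.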
Types of transmission line cables include balanced line such as ladder line, and twisted pairs (Cat-6 Ethernet, Parallel SCSI, ADSL, Landline Phone, XLR audio, USB, Firewire, Serial); and unbalanced lines such as coaxial cable (Radio antenna, CATV, 10BASE5 Ethernet).
Types of electrical and signal terminators
Passive
Passive terminators often consist of a single resistor; however, significantly reactive loads may require other passive components such as inductors, capacitors, or transformers.
Active
Active terminators consist of a voltage regulator that keeps the voltage used for the terminating resistor(s) at a constant level.
Forced perfect termination
Forced perfect termination (FPT) can be used on single ended buses where diodes remove over and undershoot conditions. The signal is locked between two actively regulated voltage levels, which results in superior performance over a standard active terminator.
Signal termination applications
SCSI
All parallel SCSI units use terminators. SCSI is primarily used for storage and backup. An active terminator is a type of single-ended SCSI terminator with a built-in voltage regulator to compensate for variations in terminator power.
Controller Area Network
Controller area network, commonly known as CAN Bus, uses terminators consisting of a 120 ohm resistor.
Dummy load
Dummy loads are commonly used in HF to EHF circuits.
Ethernet coaxial 50 ohm
10BASE2 networks absolutely must have proper termination with a 50 ohm BNC terminator. If the bus network is not properly terminated, too much power will be reflected, causing all of the computers on the bus to lose network connectivity.
Antenna network 75 ohm
A terminating resistor for a television coaxial cable is often in the form of a cap, threaded to screw onto an F connector. Antenna cables are sometimes used for internet connections; however, RG-6 should not be used for 10BASE2 (which should use RG-58) as the impedance mismatch can cause phasing problems with the baseband signal.
Unibus
The Digital Equipment Corporation minicomputer Unibus systems used terminator cards with 178 Ω pull-up resistors on the multi-drop address and data lines and 383 Ω on the single-drop signal lines.
MIL-STD-1553
Terminating resistors of 78.7 ohms (2 W, 1% tolerance) are used on the MIL-STD-1553 bus. At the two ends of the bus, the resistors connect between the positive (high) and negative (low) signal wires, either in internally terminated bus couplers or in external connectorized terminators.
The MIL-STD-1553B bus must be terminated at both ends to minimize the effects of signal reflections that can cause waveform distortion and disruption or intermittent communications failures.
Optionally, a high-impedance terminator (1000 to 3000 ohms) may be used in vehicle applications to simulate a future load from an unspecified device.
Connectorized terminators are available with or without safety chains.
See also
Electrical connector
Electrical network
MIL-STD-1553
Telecommunications pedestal
References
Electronic circuits
SCSI | Electrical termination | [
"Engineering"
] | 1,078 | [
"Electronic engineering",
"Electronic circuits"
] |