| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
4,725,430 | https://en.wikipedia.org/wiki/Axial%20multipole%20moments | Axial multipole moments are a series expansion of the electric potential of a charge distribution localized close to the origin along one Cartesian axis, denoted here as the z-axis. However, the axial multipole expansion can also be applied to any potential or field that varies inversely with the distance to the source, i.e., as $1/r$. For clarity, we first illustrate the expansion for a single point charge, then generalize to an arbitrary charge density localized to the z-axis.
Axial multipole moments of a point charge
The electric potential of a point charge q located on the z-axis at $z = a$ (Fig. 1) equals
$$\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0} \frac{1}{\left|\mathbf{r} - a\hat{\mathbf{z}}\right|} = \frac{q}{4\pi\varepsilon_0} \frac{1}{\sqrt{r^2 + a^2 - 2ar\cos\theta}}.$$
If the radius r of the observation point is greater than a, we may factor out $1/r$ and expand the square root in powers of $(a/r)$ using Legendre polynomials:
$$\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0 r} \sum_{k=0}^{\infty} \left(\frac{a}{r}\right)^k P_k(\cos\theta) \equiv \frac{1}{4\pi\varepsilon_0} \sum_{k=0}^{\infty} \frac{M_k}{r^{k+1}} P_k(\cos\theta)$$
where the axial multipole moments $M_k \equiv q a^k$ contain everything specific to a given charge distribution; the other parts of the electric potential depend only on the coordinates of the observation point P. Special cases include the axial monopole moment $M_0 = q$, the axial dipole moment $M_1 = qa$ and the axial quadrupole moment $M_2 = qa^2$. This illustrates the general theorem that the lowest non-zero multipole moment is independent of the origin of the coordinate system, but higher multipole moments are not (in general).
Conversely, if the radius r is less than a, we may factor out $1/a$ and expand in powers of $(r/a)$, once again using Legendre polynomials:
$$\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0 a} \sum_{k=0}^{\infty} \left(\frac{r}{a}\right)^k P_k(\cos\theta) \equiv \frac{1}{4\pi\varepsilon_0} \sum_{k=0}^{\infty} I_k\, r^k P_k(\cos\theta)$$
where the interior axial multipole moments $I_k \equiv q/a^{k+1}$ contain everything specific to a given charge distribution; the other parts depend only on the coordinates of the observation point P.
General axial multipole moments
To get the general axial multipole moments, we replace the point charge of the previous section with an infinitesimal charge element $\lambda(\zeta)\,d\zeta$, where $\lambda(\zeta)$ represents the charge density at position $z = \zeta$ on the z-axis. If the radius r of the observation point P is greater than the largest $|\zeta|$ for which $\lambda(\zeta)$ is significant (denoted $\zeta_\text{max}$), the electric potential may be written
$$\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{k=0}^{\infty} \frac{M_k}{r^{k+1}} P_k(\cos\theta)$$
where the axial multipole moments $M_k$ are defined
$$M_k \equiv \int d\zeta\, \lambda(\zeta)\, \zeta^k.$$
Special cases include the axial monopole moment (= total charge)
$$M_0 \equiv \int d\zeta\, \lambda(\zeta),$$
the axial dipole moment $M_1 \equiv \int d\zeta\, \lambda(\zeta)\, \zeta$, and the axial quadrupole moment $M_2 \equiv \int d\zeta\, \lambda(\zeta)\, \zeta^2$. Each successive term in the expansion varies inversely with a greater power of r, e.g., the monopole potential varies as $1/r$, the dipole potential varies as $1/r^2$, the quadrupole potential varies as $1/r^3$, etc. Thus, at large distances ($r \gg \zeta_\text{max}$), the potential is well-approximated by the leading nonzero multipole term.
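As a worked illustration (following directly from the definitions above, not text from the original article), consider a physical dipole of charges $+q$ at $z = +a$ and $-q$ at $z = -a$, i.e. $\lambda(\zeta) = q\,\delta(\zeta - a) - q\,\delta(\zeta + a)$:

```latex
% Axial moments of a physical dipole (+q at z = +a, -q at z = -a):
\begin{align*}
  M_0 &= q - q = 0, \\
  M_1 &= qa - q(-a) = 2qa, \\
  M_2 &= qa^2 - qa^2 = 0,
\end{align*}
% so at large r the potential is dominated by the k = 1 (dipole) term:
\[
  \Phi(\mathbf{r}) \approx \frac{1}{4\pi\varepsilon_0}\,\frac{2qa\cos\theta}{r^{2}} .
\]
```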
The lowest non-zero axial multipole moment is invariant under a shift b in origin, but higher moments generally depend on the choice of origin. The shifted multipole moments $M_k'$ would be
$$M_k' \equiv \int d\zeta\, \lambda(\zeta) \left(\zeta + b\right)^k.$$
Expanding the polynomial under the integral
$$\left(\zeta + b\right)^k = \sum_{l=0}^{k} \binom{k}{l} b^{k-l} \zeta^l$$
leads to the equation
$$M_k' = \sum_{l=0}^{k} \binom{k}{l} b^{k-l} M_l.$$
If the lower moments $M_0, M_1, \ldots, M_{k-1}$ are zero, then $M_k' = M_k$. The same equation shows that multipole moments higher than the first non-zero moment do depend on the choice of origin (in general).
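Applying the shift formula to the physical dipole of the earlier example (again a worked illustration, not source text): since $M_0 = 0$, the dipole moment is unchanged, while a spurious quadrupole moment appears:

```latex
% Origin shifted by b; dipole example with M_0 = 0, M_1 = 2qa, M_2 = 0:
\[
  M_1' = M_1 + b M_0 = 2qa,            % lowest non-zero moment: invariant
  \qquad
  M_2' = M_2 + 2b M_1 + b^2 M_0 = 4qab % higher moment: origin-dependent
\]
```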
Interior axial multipole moments
Conversely, if the radius r is smaller than the smallest $|\zeta|$ for which $\lambda(\zeta)$ is significant (denoted $\zeta_\text{min}$), the electric potential may be written
$$\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{k=0}^{\infty} I_k\, r^k P_k(\cos\theta)$$
where the interior axial multipole moments $I_k$ are defined
$$I_k \equiv \int d\zeta\, \frac{\lambda(\zeta)}{\zeta^{k+1}}.$$
Special cases include the interior axial monopole moment ($\neq$ the total charge)
$$I_0 \equiv \int d\zeta\, \frac{\lambda(\zeta)}{\zeta},$$
the interior axial dipole moment $I_1 \equiv \int d\zeta\, \frac{\lambda(\zeta)}{\zeta^2}$, etc. Each successive term in the expansion varies with a greater power of r, e.g., the interior monopole potential varies as $r^0$ (a constant), the dipole potential varies as $r$, etc. At short distances ($r \ll \zeta_\text{min}$), the potential is well-approximated by the leading nonzero interior multipole term.
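Both expansions can be checked numerically for the single point charge of the first section. A minimal sketch (units chosen so that $q/(4\pi\varepsilon_0) = 1$; the angle, radii, and truncation order are arbitrary choices):

```python
import numpy as np
from scipy.special import eval_legendre

a = 1.0                      # charge position on the z-axis
cos_t = np.cos(np.pi / 3)    # observation angle theta = 60 degrees

def exact(r):
    # Direct Coulomb potential of a unit-strength charge at z = a
    return 1.0 / np.sqrt(r**2 + a**2 - 2.0 * a * r * cos_t)

def exterior(r, kmax=40):
    # Sum of M_k P_k(cos theta) / r^(k+1), with M_k = a^k for a point charge
    return sum(a**k * eval_legendre(k, cos_t) / r**(k + 1) for k in range(kmax))

def interior(r, kmax=40):
    # Sum of I_k r^k P_k(cos theta), with I_k = 1 / a^(k+1) for a point charge
    return sum(r**k * eval_legendre(k, cos_t) / a**(k + 1) for k in range(kmax))

print(exact(3.0), exterior(3.0))   # r > a: exterior series matches
print(exact(0.3), interior(0.3))   # r < a: interior series matches
```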
See also
Potential theory
Multipole expansion
Spherical multipole moments
Cylindrical multipole moments
Solid harmonics
Laplace expansion
References
Electromagnetism
Potential theory
Moment (physics) | Axial multipole moments | Physics,Mathematics | 718 |
612,029 | https://en.wikipedia.org/wiki/Cyclic%20model | A cyclic model (or oscillating model) is any of several cosmological models in which the universe follows infinite, or indefinite, self-sustaining cycles. For example, the oscillating universe theory briefly considered by Albert Einstein in 1930 theorized a universe following an eternal series of oscillations, each beginning with a Big Bang and ending with a Big Crunch; in the interim, the universe would expand for a period of time before the gravitational attraction of matter causes it to collapse back in and undergo a bounce.
Overview
In the 1920s, theoretical physicists, most notably Albert Einstein, considered the possibility of a cyclic model for the universe as an (everlasting) alternative to the model of an expanding universe. In 1922, Alexander Friedmann introduced the Oscillating Universe Theory. However, work by Richard C. Tolman in 1934 showed that these early attempts failed because of the cyclic problem: according to the second law of thermodynamics, entropy can only increase. This implies that successive cycles grow longer and larger, while cycles before the present one, extrapolating back in time, become shorter and smaller, culminating again in a Big Bang rather than replacing it. This puzzling situation remained for many decades until the early 21st century, when the recently discovered dark energy component provided new hope for a consistent cyclic cosmology. In 2011, a five-year survey of 200,000 galaxies spanning 7 billion years of cosmic time confirmed that "dark energy is driving our universe apart at accelerating speeds."
One new cyclic model is the brane cosmology model of the creation of the universe, derived from the earlier ekpyrotic model. It was proposed in 2001 by Paul Steinhardt of Princeton University and Neil Turok of Cambridge University. The theory describes a universe exploding into existence not just once, but repeatedly over time. The theory could potentially explain why a repulsive form of energy known as the cosmological constant, which is accelerating the expansion of the universe, is several orders of magnitude smaller than predicted by the standard Big Bang model.
A different cyclic model relying on the notion of phantom energy was proposed in 2007 by Lauris Baum and Paul Frampton of the University of North Carolina at Chapel Hill.
Other cyclic models include conformal cyclic cosmology and loop quantum cosmology.
The Steinhardt–Turok model
In this cyclic model, two parallel orbifold planes or M-branes collide periodically in a higher-dimensional space. The visible four-dimensional universe lies on one of these branes. The collisions correspond to a reversal from contraction to expansion, or a Big Crunch followed immediately by a Big Bang. The matter and radiation we see today were generated during the most recent collision, in a pattern dictated by quantum fluctuations created before the branes collided. After billions of years the universe reached the state we observe today; after additional billions of years it will ultimately begin to contract again. Dark energy corresponds to a force between the branes, and serves the crucial role of solving the monopole, horizon, and flatness problems. Moreover, the cycles can continue indefinitely into the past and the future, and the solution is an attractor, so it can provide a complete history of the universe.
As Richard C. Tolman showed, the earlier cyclic model failed because the universe would undergo inevitable thermodynamic heat death. The newer cyclic model evades this by having a net expansion each cycle, preventing entropy from building up. However, there remain major open issues in the model. Foremost among them is that colliding branes are not understood by string theorists, and nobody knows whether the scale-invariant spectrum will be destroyed by the big crunch. Moreover, as with cosmic inflation, while the general character of the forces (in the ekpyrotic scenario, a force between branes) required to create the vacuum fluctuations is known, there is no candidate from particle physics.
The Baum–Frampton model
This more recent cyclic model of 2007 assumes an exotic form of dark energy called phantom energy, which possesses negative kinetic energy and would usually cause the universe to end in a Big Rip. This condition is achieved if the universe is dominated by dark energy with a cosmological equation of state parameter $w = p/\rho$ satisfying $w < -1$, for energy density $\rho$ and pressure $p$. By contrast, Steinhardt–Turok assume $w \geq -1$. In the Baum–Frampton model, a septillionth (or less) of a second (i.e. $10^{-24}$ seconds or less) before the would-be Big Rip, a turnaround occurs and only one causal patch is retained as our universe. The generic patch contains no quark, lepton or force carrier; only dark energy – and its entropy thereby vanishes. The adiabatic contraction of this much smaller universe takes place with constant vanishing entropy and with no matter, including no black holes, which disintegrated before turnaround.
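Why $w < -1$ leads to a Big Rip can be seen from the standard FRW scaling of a fluid with constant equation of state (a textbook result, added here for clarity, not a claim from the source):

```latex
% Energy density of a fluid with constant w as a function of scale factor a:
\[
  \rho(a) \propto a^{-3(1+w)} .
\]
% For w < -1 the exponent -3(1+w) is positive, so the phantom density grows
% as the universe expands, tearing apart bound structures in finite time
% (the Big Rip) unless, as in Baum-Frampton, a turnaround intervenes first.
```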
The idea that the universe "comes back empty" is a central new idea of this cyclic model, and avoids many difficulties confronting matter in a contracting phase, such as excessive structure formation, proliferation and expansion of black holes, and passing through phase transitions such as those of QCD and electroweak symmetry restoration. Any of these would tend strongly to produce an unwanted premature bounce, simply to avoid violation of the second law of thermodynamics. The condition $w < -1$ may be logically inevitable in a truly infinitely cyclic cosmology because of the entropy problem. Nevertheless, many technical back-up calculations are necessary to confirm the consistency of the approach. Although the model borrows ideas from string theory, it is not necessarily committed to strings or to higher dimensions, yet such speculative devices may provide the most expeditious methods to investigate its internal consistency. The value of $w$ in the Baum–Frampton model can be made arbitrarily close to, but must be less than, −1.
Other cyclic models
Conformal cyclic cosmology—a general relativity based theory by Roger Penrose in which the universe expands until all the matter decays and is turned to light—so there is nothing in the universe that has any time or distance scale associated with it. This permits it to become identical with the Big Bang, so starting the next cycle.
Loop quantum cosmology which predicts a "quantum bridge" between contracting and expanding cosmological branches.
See also
Physical cosmologies:
Big Bounce
Conformal cyclic cosmology
Religion:
Bhavacakra
Cycles of time in Hinduism
Eternal return
Historic recurrence
Kalachakra
Wheel of time
References
Further reading
S. W. Hawking and G. F. R. Ellis, The large-scale structure of space-time (Cambridge, 1973).
External links
Paul J. Steinhardt, Department of Physics, Princeton University
Paul H. Frampton, Department of Physics and Astronomy, The University of North Carolina at Chapel Hill
"The Cyclic Universe": A Talk with Neil Turok
Roger Penrose—Cyclical Universe Model
Physical cosmology
String theory
1920s in science | Cyclic model | Physics,Astronomy | 1,426 |
75,337,901 | https://en.wikipedia.org/wiki/Gravity-1 | Gravity-1 is a solid-propellant expendable medium-lift launch vehicle designed, manufactured and launched by Chinese aerospace company Orienspace. It can carry a payload of up to 6,500 kg to LEO or 4,200 kg to SSO, enabling the deployment of large-scale satellite constellations. The rocket has a height of 30 meters, a take-off weight of 400 tonnes, a take-off thrust of 600 tonnes, and a fairing diameter of 4.2 meters. Its maiden launch was conducted from a sea launch platform in the Yellow Sea on January 11, 2024, breaking records as both the world's most powerful solid-fuel carrier rocket and China's most powerful commercial launch vehicle to date. Large pieces of debris were seen during the launch, which carried 3 Yunyao-1 meteorological satellites built by the Shanghai Academy of Spaceflight Technology, as part of the planned 90-satellite Yunyao constellation.
Gravity-1 consists of seven solid rocket motors in total: the four side-mounted boosters are ignited on the ground, while the three core-stage motors are air-lit in sequence. The launch cost for Gravity-1 is no higher than US$39 million. Gravity-1 offers a quick response time of only five hours between manufacturing completion and launch. Orienspace has signed contracts for the launch of more than one hundred satellites.
List of launches
References
Vehicles introduced in 2024
2024 in spaceflight | Gravity-1 | Astronomy | 297 |
68,783,415 | https://en.wikipedia.org/wiki/Finials%20of%20Cologne%20Cathedral | The finials of Cologne Cathedral from the tops of the two towers (north and south towers) at a height of 149 to 157 metres. A copy of this finial in original size, but made of concrete, has stood below the steps in front of the west façade of the cathedral since 1991.
Shape and construction
The finials consist of a central shaft surrounded by two leaf wreaths of different sizes. They date from the last construction phase of Cologne Cathedral around 1880, although the plans still go back to master builder Ernst Friedrich Zwirner († 1861), who based his plans on the original, medieval façade plan F. In this design, the finials were to have a diameter of 5.20 metres.
Zwirner's successor as cathedral architect was Richard Voigtel, who is considered to have completed the cathedral. He was already planning a smaller diameter of initially 5.02 metres, later 4.75 metres, for the lower leaf wreath. The natural limitations of the blocks that could be extracted from the Obernkirchen sandstone finally tipped the scales: the final diameter of the lower leaf wreath is 4.58 metres, the height around eight metres.
In addition to the size of the stone blocks, transporting them to heights of over 150 metres posed a challenge in the 19th century: Not only were scaffolding and rope hoists too weak, but the steam-powered freight lift could carry a maximum weight of four tonnes. A one-piece lower leaf wreath alone would have weighed over 17 tonnes. This is one of the reasons why the finials, with their approximately 37 cubic metres of stone each, are made up of a total of 24 individual stones.
To stabilise the construction of the spire, a system of brackets and reinforcements was developed, mostly made of copper to counter the danger of corrosion. The leaves of the lower ring, joined together in the middle on a comparatively small surface, project outwards up to 2.30 metres. They are therefore supported by stone brackets from below, and held in place at the top by an octagonal copper band on the shaft and by metal rods.
A wrought-iron rod, 10 centimetres in diameter and 21 metres long and protected by a copper sheath, was passed through the centre of the shaft to stabilise it. This rod hangs down into the tower's spire and is weighted at the bottom in the manner of a pendulum.
Copper ladders lead from an exit about 17 metres below to the tops of the finials, where there is a lightning rod.
Structure and modification
The finials were made in the winter of 1879/80 in the stonemasons' workshop of the cathedral building lodge; the raising and setting began on 16 July 1880, after the raising scaffolding had been reinforced as a precaution. For example, the hemp rope was replaced by steel cables.
The finial of the north tower was completed and put in place on 23 July 1880, that of the south tower on 14 August 1880 – but without the keystone, which was put in place to celebrate the completion of the cathedral on 15 October 1880.
Shortly after completion, however, protests from the population increased, as the finials appeared too compact and bulky despite their great distance from the viewer. For this reason, it was decided shortly afterward to rework the leaf wreaths by hand.
In the winter of 1880/81, wooden housings were mounted around the finials to create a heated workspace for the workers in the cold. 40 stonemasons worked until 12 February 1881 to make the leaf wreaths more filigree afterward.
Model on the Domplatte
Cathedral master builder Richard Voigtel had originally striven to produce a third finial as a "monument to the completion of the cathedral". In a sketch and design from 1879, he envisaged a 10.5-metre-high replica of the finials erected at the south-eastern corner of the cathedral terrace. However, Voigtel could not prevail with this idea.
In 1980, the year of the cathedral's jubilee, the sculptor Uspelkat made a plastic model based on construction drawings, which was erected in front of the cathedral on 18 March 1980. Although not entirely true to scale and to the original, it enjoyed great popularity until it was severely damaged by storm Wiebke in 1990.
On 11 October 1991, the Cologne Tourist Office had a newly created model of the finial erected in front of the cathedral. The concrete model of the southern finial, on a scale of 1:1, was placed 50 metres in front of the west façade of the cathedral, near the street Unter Fettenhennen. The faithful sculpture demonstrates the dimensions and details of its prototype.
In an effort to replace the model with a durable structure, the choice fell on a concrete casting because of its considerably lower cost compared to natural stone. First, the finial of the south tower was re-measured and photographed from the air. Using a plaster model on a scale of 1:10, the segmentation, reinforcement, formwork and concreting sequences were developed. On a 1:1 raw model made of polystyrene foam blocks, the mould, made of silicone rubber, was applied; for casting it received a supporting body made of epoxy resin. The finished structure comprised 13 prefabricated parts made of dark grey, through-coloured reinforced concrete. Except for the massive leaf crowns and the keystone, all parts were designed as hollow bodies with wall thicknesses between 15 and 20 centimetres to save weight.
The finial, which was assembled using a crane, is almost 10 metres high, 5 metres wide and weighs 35 tonnes, less than half the weight of the natural stone model. It is set in a circular flowerbed and bears explanatory panels in 15 languages on its base.
The model of the finial has become a popular meeting point in front of the cathedral and is the starting point for numerous city tours around Cologne Cathedral.
Discussion on the location of the finial replica
In 2012, the "Urban Congress" project commissioned by the City of Cologne, which focused on the conscious handling of art in Cologne's public urban space, presented a number of recommendations for action, including the removal of the finial replica in front of the cathedral, with the aim of calming or "clearing out" the area in front of the cathedral and giving the actual art monument at this location, the Taubenbrunnen (Dove Fountain) by Ewald Mataré, a more prominent place and a new visibility.
In December 2014, the city centre district council decided to relocate the replica and commissioned the city administration to look for an alternative location, which, however, had still not been found months later. Alternative locations discussed included the Burgmauer in the western viewing axis of the cathedral and the site of the former concrete mushrooms on the Domplatte; the district council finally decided on the Deutz side of the Rhine near the Hohenzollern Bridge, i.e. the cathedral's viewing axis on the right bank. The recommendation of the Urban Congress considered the location of the Dombauhütte in front of the cathedral choir typologically sensible, but also suggested the terrace of the Café Reichard opposite.
Barbara Schock-Werner had already criticised the object at this location during her tenure as cathedral architect, since, standing in the middle of the cathedral's visual axis, it was charged with a significance it did not have. The city dean and the then cathedral provost supported the decision, as did the architects Allmann Sattler Wappner, commissioned with the redesign of the Domplatte, and the Romano-Germanic Museum. Overall, the "majority of the interlocutors" in public and private discussions of the Urban Congress considered the current location "unsuitable both for the Dove Fountain and for the perspective on the main portal of the cathedral". After the plans were published, opposing voices appeared in letters to the editor, in comments in the daily press, and in an online petition that gathered almost 2,900 supporters. In local politics, at the district level the Greens, Die Linke, Deine Freunde and the Pirates were in favour of removal; in the city council, the SPD was against it and the CDU favoured a move to the Burgmauer, although both groups questioned in principle the district council's decision-making power on this point. A dialogue commissioned by the city council at the end of 2015 between the new Lord Mayor Henriette Reker and the district mayor led to the compromise that the finial would remain in place for the time being, until the planned renewal of the western cathedral surroundings.
References
Further reading
70. Dombaubericht by Richard Voigtel in the amtliche Mittheilungen des Central-Dombau-Vereins, Nr. 325, 14 April 1882
External links
Cologne Cathedral
Ornaments (architecture)
Stone buildings
Roofs | Finials of Cologne Cathedral | Technology,Engineering | 1,831 |
67,642,894 | https://en.wikipedia.org/wiki/CendR | CendR (C-end Rule) is a position-dependent protein motif that regulates cellular uptake and vascular permeability through interaction with neuropilin-1. The CendR motif has a consensus (R/K)XX(R/K) and it is able to interact with its receptor only when the second basic residue is exposed at the C-terminus.
Mechanism of action
The C-terminal CendR motif engages widely expressed neuropilin-1 receptors to trigger increased permeability of the vasculature and penetration of tissue parenchyma by an endocytic/exocytic transport mechanism. The CendR pathway starts with an endocytosis step that is distinct from known endocytosis pathways. It most closely resembles macropinocytosis, but unlike macropinocytosis, the CendR pathway is receptor (neuropilin)-initiated and its activity is controlled by the nutrient status of the cell or tissue. CendR is an active transport process that requires energy. It is not limited to extravasation, but also includes penetration of tissue parenchyma, potentially via cell-to-cell transport.
CendR elements that are not C-terminally exposed are unable to bind to neuropilin-1. However, such cryptic CendR elements can be activated by proteolytic cleavage (e.g. by furin, urokinase type plasminogen activator, and other proteases of suitable substrate specificity).
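The position dependence can be made concrete with a small sketch (the regex encoding of the consensus, and the use of iRGD's internal sequence CRGDKGPDC as the cryptic example, are illustrative assumptions, not source material):

```python
import re

# C-end Rule consensus: a basic residue (R/K), two arbitrary residues, then
# a basic residue that must sit at the exposed C-terminus -- hence the '$'.
CENDR = re.compile(r"[RK]..[RK]$")

def has_active_cendr(peptide: str) -> bool:
    """True if the peptide's C-terminus satisfies the CendR consensus."""
    return CENDR.search(peptide) is not None

print(has_active_cendr("CRGDK"))      # True: RGDK is C-terminally exposed
print(has_active_cendr("CRGDKGPDC"))  # False: same motif, but cryptic (internal)
```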
Clinical significance
The CendR pathway is used to enhance the transport of coupled and co-administered anti-cancer drugs into tumors. Tumor-penetrating peptides (TPP, a class of tumor-homing peptides containing a cryptic CendR motif) activate tumor-specific transport through a three-step process: binding to a primary tumor-specific receptor, proteolytic activation of the CendR element, and binding to NRP-1 to activate the trans-tissue transport pathway. The clinical-stage prototypic CendR peptide iRGD, developed by Lisata Therapeutics as LSTA1, is used to make solid tumors temporarily more accessible to circulating anti-cancer drugs and so increase their therapeutic index. Several viruses, including SARS-CoV-2, also use the CendR system for cellular entry and tissue penetration, and viruses that possess the motif tend to be more virulent and deadly.
References
Peptides
Infectious diseases | CendR | Chemistry | 519 |
23,830,729 | https://en.wikipedia.org/wiki/Money%20burning | Money burning or burning money is the purposeful act of destroying money. In the prototypical example, banknotes are destroyed by setting them on fire. Burning money decreases the wealth of the owner without directly enriching any particular party. It also reduces the money supply and (very slightly) slows down the inflation rate.
Money is usually burned to communicate a message, either for artistic effect, as a form of protest, or as a signal. In some games, a player can benefit from the ability to burn money (see the battle of the sexes example in the Game theory section below). The burning of money is illegal in some jurisdictions.
Macroeconomic effect
For the purposes of macroeconomics, burning money is equivalent to removing the money from circulation, and locking it away forever; the salient feature is that no one may ever use the money again. Burning money shrinks the money supply, and is therefore a special case of contractionary monetary policy that can be implemented by anyone. In the usual case, the central bank withdraws money from circulation by selling government bonds or foreign currency. The difference with money burning is that the central bank does not have to exchange any assets of value for the money burnt. Money burning is thus equivalent to gifting the money back to the central bank (or other money issuing authority). If the economy is at full employment equilibrium, shrinking the money supply causes deflation (or decreases the rate of inflation), increasing the real value of the money left in circulation.
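A toy calculation makes the deflationary effect concrete (a sketch under a simple quantity-theory assumption, with velocity $V$ and real output $Q$ held fixed; not a figure from the source):

```latex
% Quantity equation with V and Q fixed; burn Delta M of the money stock M:
\[
  MV = PQ
  \quad\Longrightarrow\quad
  P' = \frac{(M - \Delta M)\,V}{Q} = P\Bigl(1 - \frac{\Delta M}{M}\Bigr).
\]
% Burning 1% of the money stock lowers the price level by 1%, so every
% remaining unit of money gains purchasing power by the factor
\[
  \frac{P}{P'} = \frac{M}{M - \Delta M} = \frac{1}{0.99} \approx 1.0101 .
\]
```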
Assuming that the burned money is paper money with negligible intrinsic value, no real goods are destroyed, so the overall wealth of the world is unaffected. Instead, all surviving money slightly increases in value; everyone gains wealth in proportion to the amount of money they already hold. Economist Steven Landsburg proposes in The Armchair Economist that burning one's fortune (in paper money) is a form of philanthropy more egalitarian than deeding it to the United States Treasury. In 1920, Thomas Nixon Carver wrote that dumping money into the sea is better for society than spending it wastefully, as the latter wastes the labor that it hires.
Opposites
Central banks routinely collect and destroy worn-out coins and banknotes in exchange for new ones. This does not affect the money supply, and is done to maintain a healthy population of usable currency. The practice raises an interesting possibility. If an individual can steal the money before it is incinerated, the effect is the opposite of burning money; the thief is enriched at the expense of the rest of society. One such incident at the Bank of England inspired the 2001 TV movie Hot Money and the 2008 film Mad Money.
Another, more common near-opposite is the creation of counterfeit money. Undetected counterfeit decreases the value of existing money—one of the reasons why attempting to pass it is illegal in most jurisdictions and is aggressively investigated. Another way to analyze the cost of forgery is to consider the effects of a central bank's monetary policy. Taking the United States as an example, if the Federal Reserve decides that the monetary base should be a given amount, then every $100 bill forged is a bill the Fed cannot print and use to buy Treasury bonds. The interest earnings (after expenses) on those bonds is turned over to the US Treasury, so any lost interest must be made up by U.S. taxpayers, who therefore bear the cost of counterfeiting.
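The last point can be illustrated with back-of-the-envelope arithmetic (the 4% Treasury yield is an assumed figure chosen for the example, not a number from the source):

```latex
% Each forged $100 note displaces a genuine note whose issue would have
% funded $100 of interest-bearing Treasury bonds:
\[
  \text{forgone seigniorage} \approx \$100 \times 0.04\,/\text{yr}
  = \$4\ \text{per year},
\]
% borne by taxpayers for as long as the counterfeit stays in circulation.
```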
Rationales
Behaviorally speaking, burning money is usually seen as a purely negative act. The cognitive impact of burning money can even be a useful motivational tool: patients who suffer from nail biting may be trained to burn a dollar bill every time they engage in the habit. One study found this form of suppression training by self-punishment to be effective compared to control groups, although not as effective as substitution training.
On the other hand, there are some situations where burning money might not be so unreasonable. It is said that the ancient Greek philosopher Aristippus was once on a ship at sea when he was threatened by pirates; he took out his money, counted it, and dropped it into the sea, commenting, "Better for the money to perish because of Aristippus than vice versa." Cicero would later cite this episode as an example of a circumstance that must be considered in its full context: "...it is a useless act to throw money into the sea; but not with the design which Aristippus had when he did so."
Since around 2015 the UK has experienced a surge of interest in money burning. This has been noted in both the book The Mysterium and in a Kindred Spirit article that is reprinted in International Times. An annual Mass Burn Event - where people are invited to burn money - is held at the Cockpit Theatre in London every autumn.
Symbolism
Publicly burning money can be an act of protest or a kind of artistic statement. Often the point is to emphasize money's intrinsic worthlessness.
In 1984, Serge Gainsbourg burned a 500 French franc note on television to protest against heavy taxation.
On 23 August 1994, the K Foundation (an art duo consisting of Bill Drummond and Jimmy Cauty) burned one million pounds sterling in cash on the Scottish island of Jura. This money represented the bulk of the K Foundation's funds, earned by Drummond and Cauty as The KLF, one of the United Kingdom's most successful pop groups of the early 1990s. The duo have never fully explained their motivations for the burning.
In the 1995 film Dead Presidents, the title sequence directed by Kyle Cooper features close shots of burning U.S. bills; it took two days of shooting and experimenting with the paper to get the effect right.
In the early 18th century, New York City courts would publicly burn the counterfeit bills they gathered, to show that they were both dangerous and worthless.
In traditional Chinese and Vietnamese ancestor veneration, imitation money in the form of joss paper are ceremonially burned, with the aspiration that the dead may use the money to finance a more comfortable afterlife.
In 2010, the spokesperson for the Swedish Feminist Initiative, Gudrun Schyman, burned SEK 100,000 during a speech about the inequality in wages for men and women.
In 2018, a collective of artists called Distributed Gallery have created a machine named Chaos Machine which burns banknotes and turns them into cryptocurrencies while playing music.
Game theory
In game theory, a threat to burn money can affect the strategies of the players involved; the classic example is the "battle of the sexes", where giving one player the option to publicly burn money lets that player achieve their preferred equilibrium without actually having to burn anything, as the worked example below shows.
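A sketch of the forward-induction argument, using standard textbook payoffs that are an illustrative assumption rather than figures from the source (row player P1, column player P2; P1 prefers (T, L), P2 prefers (B, R)):

```latex
% Battle of the sexes; entries are (P1, P2) payoffs:
\[
  \begin{array}{c|cc}
      & L     & R     \\ \hline
   T  & (3,1) & (0,0) \\
   B  & (0,0) & (1,3)
  \end{array}
\]
% Before play, P1 may publicly burn 2 units of her own payoff.
% Forward induction (iterated deletion of dominated strategies):
%  1. "burn, then B" yields at most 1 - 2 = -1 < 0, while "no burn, T"
%     yields at least 0, so "burn, then B" is deleted.
%  2. Burning thus signals T, so P2 answers a burn with L; "burn, then T"
%     is then worth exactly 3 - 2 = 1.
%  3. "no burn, B" yields at most 1 and sometimes 0, so it is weakly
%     dominated by the guaranteed 1 of step 2 and is deleted.
%  4. P2 now expects T even without a burn and plays L, so P1 earns 3
%     without burning: the mere option selects P1's preferred equilibrium.
```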
For commodity value
Fiat money can sometimes be destroyed by converting it into commodity form, rather than completely forfeiting the value of the money. Sometimes, currency intended for use as fiat money becomes more valuable as a commodity, usually when inflation causes its face value to fall below its intrinsic value. For example, in India in 2007, Rupee coins disappeared from the market when their face value dropped below the value of the stainless steel from which they were made. Similarly, in 1965, the US government had to switch from silver to copper-nickel clad quarter coins because the silver value of the coins had exceeded their face value and were being melted down by individuals for profit. The same occurred to 5-franc coins of Switzerland, which up to the year 1969 were minted using a silver alloy. At the peak of hyperinflation in the Weimar Republic, people burned banknotes for warmth, as their face value had fallen below their value as fuel.
Legality
The legality of money burning varies with jurisdiction.
Australia
Section 16 of the Crimes (Currency) Act 1981 prohibits deliberate damage and destruction of Australian money without a relevant legal permit. The law covers both current Australian money and historical coins and notes. Breaking this law can lead to detention or a fine.
According to this law, even writing words on a banknote can be punished.
Brazil
In Brazil, whether it is illegal for a person to burn his or her own money is a controversial topic. It is not mentioned explicitly in Brazilian law. João Sidney Figueiredo Filho has affirmed that "when money is inside the Central Bank, then it is the property of the National Treasury. When it leaves, it is not." But the chief of police Jéferson Botelho Pereira has concluded that "whoever rips money is committing a crime against the property of the Union".
The production of paper money is under the exclusive authority of the Central Bank, and currency is issued by the Brazilian Union. By that reasoning, the paper on which the money is printed is the property of the State, and its intrinsic value belongs to the person. Articles 98 and 99 of the New Brazilian Civil Code give "money" its own definition. This is because a banknote cannot become a common good if the owner himself decides to keep it in his possession indefinitely. This makes money different from other state assets such as rivers, seas, streets, roads and piazzas, which are all immobile.
Canada
The Currency Act states that "no person shall melt down, break up or use otherwise than as currency any coin that is legal tender in Canada."
Similarly, Section 456 of The Criminal Code of Canada says: "Every one who (a) defaces a current coin, or (b) utters a current coin that has been defaced, is guilty of an offence punishable on summary conviction."
However neither the Currency Act nor Criminal Code mention paper currency. It therefore remains legal to completely destroy paper currency.
Euro Zone
According to the European Commission's Recommendation dated 22 March 2010, "Member states must not prohibit or punish the complete destruction of small quantities of Euro coins or notes when this happens in private. However they must prohibit the unauthorised destruction of large amounts of Euro coins or notes." Also, "Member states must not encourage the mutilation of Euro notes or coins for artistic purposes, but they are required to tolerate it. Mutilated coins or notes should be considered unfit for circulation."
The European Union defines "falsifying or fraudulently altering money in any way" as a crime. According to EU ruling 1210/2010, "all money that is unfit for circulation must be delivered to the relevant national authority"; member states must remove the currency from circulation and reimburse the holder, no matter what the country of issue.
The European Central Bank has established that "Member states may refuse to reimburse Euro money that has been deliberately rendered unfit for circulation, or where it has been caused by a process that would predictably have led to the money becoming unfit. The exception to this is money collected for charitable purposes, such as coins thrown into a fountain". The ECB legal department also states "the ECB will refuse to replace money that has been stamped for advertising purposes".
The European Union provides an obligation at the community level to retire "neutralized" notes from circulation, or those rendered unfit for security systems.
New Zealand
Section 154 of the Reserve Bank of New Zealand Act 2021 makes it an offence to wilfully deface, disfigure, or mutilate any bank note in New Zealand. The penalty is a fine of up to NZ$1,000.
Philippines
By Presidential Decree No. 247, willful mutilation of coins or notes is punishable by a fine of up to ₱20,000, imprisonment for up to 5 years, or both.
Singapore
Singapore's Currency Act states that any person who mutilates, destroys, or defaces a currency note or coin, or causes any change that diminishes its value or utility, is liable to a fine of up to $2,000.
Sri Lanka
The Central Bank of Sri Lanka Act No.16 of 2023 makes mutilation of bank notes an offence punishable by imprisonment, a fine, or both.
Taiwan
Intentionally damaging coins or notes and rendering them unusable is punishable by a fine of up to 5 times the nominal value of the coins or notes.
Turkey
In Turkey, defacement or destruction of banknotes can be punished with fines or with prison sentences.
United Kingdom
The Currency and Bank Notes Act 1928 is an Act of the Parliament of the United Kingdom relating to banknotes. Among other things, it makes it a criminal offence to deface a banknote (but not to destroy one).
Under Section 10 of the Coinage Act 1971 "No person shall, except under the authority of a licence granted by the Treasury, melt down or break up any metal coin which is for the time being current in the United Kingdom or which, having been current there, has at any time after 16th May 1969 ceased to be so." As the process of creating elongated coins does not require them to be melted nor broken up, however, Section 10 does not apply and coin elongation is legal within the UK with penny press machines.
United States
In the United States, burning banknotes is prohibited under 18 U.S.C. § 333, which includes "any other thing" that renders a note "unfit to be reissued". In an amicus brief for Atwater v. City of Lago Vista, Solicitor General Seth Waxman writes that arresting an individual who removes the corner dollar values "may expose a counterfeiting operation". It is unclear if the statute has ever been applied in response to the complete destruction of a bill. Certainly people have publicly burned small amounts of money for political protests that were picked up by the media — Living Things at South by Southwest, Larry Kudlow on The Call, both in 2009 — without apparent consequence.
The question of legality has been compared to the much more politically charged issue of flag desecration. It can be argued that the desecration of the flag is comparable to the desecration of a photograph of Legal Tender (provided it was modified as to not violate counterfeiting laws). In 1989, in a Senate Judiciary Committee hearing on the Flag Protection Act, William Barr testified that any regulation protecting something purely for its symbolic value would be struck down as unconstitutional. The Senate report recommending passage of the Act argued that Barr's theory would render 18 U.S.C. § 333 unconstitutional as well. In a dissent in Smith v. Goguen, Justice Rehnquist counted 18 U.S.C. § 333 in a group of statutes in which the Government protects its interest in some private property which is "not a traditional property interest". On the other hand, the Government's interest in protecting circulating currency might not be purely symbolic; it costs the Bureau of Engraving and Printing approximately 5 cents to replace a note.
Legal Tender, a 1996 telerobotic art installment by Ken Goldberg, Eric Paulos, Judith Donath, and Mark Pauline, was an experiment to see if the law could instill a sense of physical risk in online interactions. After participants were advised that 18 U.S.C. § 333 threatened them with up to six months in jail, they were given the option of remotely defacing small portions of a pair of "purportedly authentic" $100 bills over the web. A crime may be occurring — but "only if the bills are real, the web site is authentic, and the experiment actually performed." In fact, one bill was real and the other counterfeit. Almost all of the participants reported that they believed the experiment and the bills to be faked.
The destruction of money is also bound by the same laws that govern the destruction of other personal property. In particular, one cannot empower the executor of one's estate to burn one's money after one dies.
See also
Crop burning
Fireproof banknote
K Foundation Burn a Million Quid
Money to Burn (performance art)
References
Further reading
External links
Monetary economics
Fire
Financial crimes | Money burning | Chemistry | 3,219 |
10,025,647 | https://en.wikipedia.org/wiki/APH-1 | APH-1 (anterior pharynx-defective 1) is a protein originally identified in the round worm Caenorhabditis elegans as a regulator of the cell-surface localization of nicastrin in the Notch signaling pathway.
APH-1 homologs in other organisms, including humans (APH1A and APH1B), have since been identified as components of the gamma secretase complex along with the catalytic subunit presenilin and the regulatory subunits nicastrin and PEN-2. The gamma-secretase complex is a multimeric protease responsible for the intramembrane proteolysis of transmembrane proteins such as the Notch protein and amyloid precursor protein (APP). Gamma-secretase cleavage of APP is one of two proteolytic steps required to generate the peptide known as amyloid beta, whose misfolded form is implicated in the causation of Alzheimer's disease. All of the components of the gamma-secretase complex undergo extensive post-translational modification, especially proteolytic activation; APH-1 and PEN-2 are regarded as regulators of the maturation process of the catalytic component presenilin. APH-1 contains a conserved alpha helix interaction motif glycine-X-X-X-glycine (GXXXG) that is essential to both assembly of the gamma secretase complex and to the maturation of the components.
Alternative splicing
In humans, the genes APH1A and APH1B encode the APH-1 proteins, which are integral components of the gamma-secretase complex, a multi-protein complex essential for the intramembrane cleavage of various substrates, including the amyloid precursor protein (APP) and Notch receptors. APH1A is located on chromosome 1q21.2, while APH1B is found on chromosome 15q22.2. Both genes exhibit alternative splicing, leading to the generation of multiple transcript variants that enhance the functional diversity of the gamma-secretase complex.
The alternative splicing of APH1A and APH1B contributes significantly to the regulation of gamma-secretase activity. Studies have shown that different isoforms of APH1 can modulate the cleavage of APP, influencing the production of amyloid-beta peptides, which are implicated in Alzheimer's disease. Moreover, the expression levels of these isoforms can vary in different tissues and under various pathological conditions, suggesting a complex regulatory mechanism that may have implications for diseases such as cancer and neurodegeneration. The involvement of APH1A and APH1B in the Notch signaling pathway further underscores their importance in developmental processes and cellular fate decisions, which can be disrupted in various cancers.
The functional versatility provided by the alternative splicing of APH1A and APH1B is crucial for the gamma-secretase complex's role in cellular signaling and proteolytic processing. For example, APH1A has been shown to be critical for the activity of the gamma-secretase complex, and its alternative splicing can influence the complex's substrate specificity and cleavage efficiency. Additionally, the interplay between APH1 isoforms and other components of the gamma-secretase complex, such as presenilins and nicastrin, is essential for maintaining the proper function of this protease.
Differences between APH1A and APH1B
Expression patterns
APH1A and APH1B, while homologous, exhibit distinct expression patterns across various tissues. APH1A is known for its ubiquitous expression, with significantly higher levels observed in the brain, heart, and skeletal muscle. In contrast, APH1B displays a more restricted expression profile, being predominantly expressed in the brain and testis. This differential expression suggests that APH1A may play a more generalized role in cellular processes, while APH1B could be more specialized, particularly in neural and reproductive tissues. Recent studies have highlighted the potential of APH1B as a peripheral biomarker for Alzheimer's disease (AD). Specifically, dysregulated expression levels of APH1B in peripheral blood have been associated with brain atrophy and amyloid-β deposition in AD patients. This association indicates that APH1B could serve as a valuable indicator of disease progression, providing insights into the underlying pathological mechanisms of AD.
Gamma-secretase activity
Functional studies have demonstrated that APH1A- and APH1B-containing gamma-secretase complexes exhibit distinct effects on enzyme activity and substrate processing. Notably, complexes containing APH1B have been shown to produce higher amounts of amyloid-beta 42 (Aβ42), a peptide closely linked to the pathology of Alzheimer's disease, compared to those containing APH1A. This difference in Aβ42 production is significant, as elevated levels of this peptide are associated with the formation of amyloid plaques, a hallmark of AD. The variations in substrate specificity and activity between the two isoforms could influence critical biological processes, including the processing of amyloid precursor protein (APP) and Notch signaling pathways. For instance, studies suggest that the presence of APH1B may lead to a shift in the cleavage patterns of APP, potentially favoring the production of longer and more pathogenic Aβ species. This altered processing could have profound implications for neuronal health and the progression of neurodegenerative diseases.
Structure
APH-1 proteins, which include APH1A and APH1B, are classified as polytopic membrane proteins characterized by the presence of seven transmembrane domains (TMDs). This structural feature is crucial for their integration into cellular membranes and their interaction with other components of the gamma-secretase complex. The topology of APH-1 enables it to span the lipid bilayer multiple times, effectively creating a scaffold that supports the assembly and stability of the gamma-secretase complex.
The seven TMDs of APH-1 facilitate its proper localization within the membrane, allowing it to interact with other integral membrane proteins, such as presenilin and nicastrin, which are also essential components of the gamma-secretase complex. The arrangement of these transmembrane domains is vital for the functional integrity of the complex, as it influences the accessibility of substrates and the catalytic activity of the gamma-secretase. In addition to the transmembrane domains, APH-1 proteins contain a conserved GXXXG motif within their transmembrane regions. This motif is critical for mediating helix-helix interactions that are essential for the assembly of the gamma-secretase complex. The GXXXG motif facilitates the dimerization of transmembrane helices, promoting the stability and functionality of the protein complex. Furthermore, APH-1 contains other conserved sequences that play significant roles in maintaining the protein's stability and facilitating interactions with nicastrin and presenilin. These structural motifs and domains are not only important for the assembly of the gamma-secretase complex but also for its enzymatic activity. The interactions between APH-1 and other components are crucial for the proper processing of substrates, including amyloid precursor protein (APP) and Notch receptors, which are involved in critical cellular signaling pathways.
Regulation of expression
Transcriptional
The expression of APH-1 genes, which include APH1A and APH1B, is regulated by several transcription factors and signaling pathways. One significant pathway involved in this regulation is the Notch signaling pathway, which can modulate the expression of APH-1, creating a feedback loop that adjusts gamma-secretase activity according to cellular needs. This interaction underscores the importance of APH-1 in cellular signaling and its potential role in maintaining homeostasis within the gamma-secretase complex.
Additionally, factors such as hypoxia-inducible factor (HIF) have been shown to influence APH-1 expression under specific physiological conditions, particularly in response to low oxygen levels. This suggests that APH-1 may play a role in cellular adaptation to hypoxic environments, further emphasizing its regulatory complexity.
Post-translational modifications
Post-translational modifications (PTMs) of APH-1, including glycosylation and phosphorylation, significantly affect the protein's stability, localization, and interactions within the gamma-secretase complex. Glycosylation, for instance, is a major PTM that can influence protein folding, stability, and interactions with other proteins. The addition of carbohydrate moieties can affect how APH-1 interacts with other components of the gamma-secretase complex, thereby impacting its overall function.
Phosphorylation is another critical PTM that can modulate APH-1 activity. It has been shown that phosphorylation can alter protein conformation, localization, and interaction dynamics, which are essential for the proper functioning of the gamma-secretase complex. The interplay between different types of PTMs can create a complex regulatory network that fine-tunes APH-1 activity in response to various cellular signals and conditions.
Clinical significance
Altered expression of APH-1 genes has been investigated in the context of Alzheimer's disease and other neurological disorders. Variations in these genes may modulate disease risk or progression by affecting gamma-secretase activity and amyloid-beta production. Elevated expression of APH1B in peripheral blood has been associated with brain atrophy and increased amyloid-β deposition in Alzheimer's patients, indicating its potential as a biomarker.
As a drug target
Targeting APH-1 offers a potential therapeutic avenue for modulating gamma-secretase activity without completely inhibiting its function. Small molecules or peptides that specifically disrupt APH-1 interactions within the complex could reduce amyloid-beta production while minimizing side effects. Modulating the composition of the gamma-secretase complex to favor APH1A over APH1B may reduce the production of neurotoxic Aβ42 species.
References
External links
Alzheimer's disease
Proteins | APH-1 | Chemistry | 2,104 |
41,564,118 | https://en.wikipedia.org/wiki/C15H22N2O6 | The molecular formula C15H22N2O6 may refer to:
Lysine acetylsalicylate
Nipradilol | C15H22N2O6 | Chemistry | 47 |
2,665,507 | https://en.wikipedia.org/wiki/Xi%20Serpentis | Xi Serpentis, Latinized from ξ Serpentis, is a triple star system in the Serpens Cauda (tail) section of the equatorial constellation Serpens. Based upon an annual parallax shift of 30.98 mas as seen from Earth, it is located 105.3 light years from the Sun. The star system is visible to the naked eye with a base apparent visual magnitude of +3.54. It is moving closer to the Sun and will make perihelion passage in around 690,000 years.
The inner pair form a single-lined spectroscopic binary with an orbital period of 2.29 days, following a circular orbit with an eccentricity of 0.00. The primary, component Aa, has a visual magnitude of 3.54. It is a white-hued F-type giant star with a stellar classification that marks it as a chemically peculiar Ap star with an abnormal abundance of strontium. The primary has around double the mass of the Sun, while its close companion, component Ab, has only 18% of the Sun's mass.
The third member, component B, is a magnitude 13.0 common proper motion companion. As of 2012, it was located at an angular separation of 24 arc seconds along a position angle of 78° from the inner pair. It has about 27% of the Sun's mass and an estimated orbital period of 14,763 years.
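The quoted period is roughly consistent with Kepler's third law, as the following sketch shows (it naively treats the projected separation as the orbital semi-major axis, so only order-of-magnitude agreement is expected):

```python
import math

# Rough consistency check (illustrative only), using the figures above.
parallax_mas = 30.98
sep_arcsec = 24.0
m_total = 2.0 + 0.18 + 0.27           # Aa + Ab + B, in solar masses

d_pc = 1000.0 / parallax_mas          # distance in parsecs (~32.3 pc)
a_au = sep_arcsec * d_pc              # projected separation in AU (~775 AU)

# Kepler's third law in solar units: P^2 = a^3 / M  (P in years, a in AU)
p_yr = math.sqrt(a_au**3 / m_total)
print(round(p_yr))  # ~13,800 yr, the same order as the quoted 14,763 yr
```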
Name
In Chinese, Tiān Shì Zuǒ Yuán (天市左垣), meaning Left Wall of Heavenly Market Enclosure, refers to an asterism which represents eleven old states (and a region) in China and which marks the left borderline of the enclosure, consisting of ξ Serpentis, δ Herculis, λ Herculis, μ Herculis, ο Herculis, 112 Herculis, ζ Aquilae, η Serpentis, θ1 Serpentis, ν Ophiuchi and η Ophiuchi. Consequently, the Chinese name for ξ Serpentis itself represents the region of Nanhai (南海, lit. meaning "southern sea").
References
F-type giants
Triple star systems
Spectroscopic binaries
Serpentis, Xi
Serpens
Durchmusterung objects
Serpentis, 55
159876
086263
6561 | Xi Serpentis | Astronomy | 473 |
57,319,151 | https://en.wikipedia.org/wiki/Paul%20Clavin | Paul Clavin is a French scientist at Aix-Marseille University, working in the field of combustion and statistical mechanics. He is the founder of Institute for Research on Nonequilibrium Phenomena (IRPHE).
Biography
Paul Clavin obtained his first degree at ENSMA and then a Master's degree in Mathematics and Plasma Physics. For his PhD, he joined Ilya Prigogine in Brussels from 1967 to 1970 and then returned to Poitiers. Paul Clavin moved to Aix-Marseille University in the late 1970s and created the combustion research group.
Clavin served as the chair of Physical Mechanics at the Institut Universitaire de France from 1993 to 2004 and as its administrator from 2000 to 2005. He received the Ya. B. Zeldovich Gold Medal from The Combustion Institute in 2014 and is a fellow of The Combustion Institute. A workshop titled Out-of-Equilibrium Dynamics was held in 2012 in honor of Clavin's 70th birthday. He received the Grand Prix of the French Academy of Sciences in 1998 and the Plumey award from the Société Française de Physique in 1988. He was elected membre correspondant of the French Academy of Sciences in 1997.
Books
See also
References
French fluid dynamicists
Living people
Fellows of the Combustion Institute
Year of birth missing (living people)
University of Poitiers alumni
Members of the French Academy of Sciences | Paul Clavin | Chemistry | 280 |
71,027,014 | https://en.wikipedia.org/wiki/Quasisymmetry | In magnetic confinement fusion, quasisymmetry (sometimes hyphenated as quasi-symmetry) is a type of continuous symmetry in the magnetic field strength of a stellarator. Quasisymmetry is desired, as Noether's theorem implies that there exists a conserved quantity in such cases. This conserved quantity ensures that particles stick to the flux surface, resulting in better confinement and neoclassical transport.
It is currently unknown whether it is mathematically possible to construct a quasisymmetric magnetic field that also upholds magnetohydrodynamic force balance, which is required for stability. There are stellarator designs that are very close to quasisymmetric, and it is possible to find solutions by generalizing the magnetohydrodynamic force balance equation. Quasisymmetric systems are a subset of omnigenous systems. The Helically Symmetric eXperiment and the National Compact Stellarator Experiment are designed to be quasisymmetric.
References
Magnetic confinement fusion
Symmetry | Quasisymmetry | Physics,Mathematics | 198 |
54,052,081 | https://en.wikipedia.org/wiki/Stephen%20Hawking%20Medal%20for%20Science%20Communication | The Stephen Hawking Medal for Science Communication is an honor bestowed by the Starmus Festival to individuals and teams in science and the arts to recognize the work of those helping to promote the public awareness of science.
History
The Stephen Hawking Medal for Science Communication was initially announced on December 16, 2015, at the Royal Society in London, by a panel including Professor Stephen Hawking, the Starmus founding director Professor Garik Israelian, Dr. Brian May, Professor Richard Dawkins and Alexei Leonov. Nobel laureate Sir Harold Kroto, Kip Thorne, Hans Zimmer and Sarah Brightman were special guests during the ceremony and made speeches.
The Stephen Hawking Medals are awarded to the Science Communicator of the Year in three categories, with a Lifetime Achievement medal also awarded since 2019:
Music & Arts
Science Writing
Films & Entertainment
Lifetime Achievement
During the presentation of the Medal, Stephen Hawking said:
"I am delighted to present the Stephen Hawking Medal for Science Communication to be awarded at STARMUS festivals. This medal will recognize excellence in science communication across different media, whether in writing, broadcasting, music, film, or fine art. This takes account of the great diversity, richness, creativity, and scope that science communicators use to reach a wide popular audience... I am very pleased to support and honour the work of science communicators, and look forward to awarding The Stephen Hawking Medal at STARMUS Festivals."
The first Stephen Hawking Medals for Science Communication were awarded at the third Starmus Festival in June 2016. The winners were selected by Stephen Hawking himself and received the Medal from him.
Professor Hawking said of the award:
By engaging with everyone from school children to politicians to pensioners, science communicators put science right at the heart of daily life. Bringing science to the people brings people into science. This matters to me, to you, to the world as a whole.
After Starmus III, the Starmus Advisory Board joined Stephen Hawking in the selection of winners.
The medal
The design of the medal used a portrait of Professor Hawking by cosmonaut Alexei Leonov, the first man to perform a spacewalk and a member of the Advisory Board of Starmus since its first edition. The other side combines the image of Alexei Leonov during the first spacewalk and the "Red Special" – Brian May's guitar – to represent music, another major component of the Starmus Festival.
The Medal itself was designed by Alexei Leonov and Brian May.
Brian May said about the Medal:
"The Stephen Hawking Medal will be awarded for the first time at Starmus III to the human being who by their sharing of science and music with us all, is the greatest inspiration to the next generation of artists and scientists."
During the Stephen Hawking Medal Award Ceremony at Starmus III, Alexei Leonov pointed out:
"I did a sketch of Stephen Hawking… and when I showed it to him, I saw a big smile on his face. The Stephen Hawking Medal created by STARMUS will be awarded to the best science communicators in the world in three categories: science and/or science-fiction writers, musicians and artists, and people in the film and entertainment industry. I am honoured to be a part of this historical medal."
Recipients
2016
Music & Arts: Hans Zimmer
Science writing: Jim Al-Khalili
Films & Entertainment: Science documentary Particle Fever
2017
Music & Arts: Jean-Michel Jarre
Science Writing: Neil deGrasse Tyson
Films & Entertainment: Sitcom The Big Bang Theory
2019
Music & Arts: Brian Eno
Science Writing: Elon Musk
Films & Entertainment: Science documentary Apollo 11
Lifetime Achievement: Buzz Aldrin
2022
Music & Arts: Brian May
Science Writing: Diane Ackerman
Films & Entertainment: NASA TV and Communications Office
Lifetime Achievement: Jane Goodall
2024
Presented on May 15, 2024 at the Starmus Festival:
Music & Arts: Laurie Anderson
Science Writing: Sylvia Earle
Films & Entertainment: Christopher Nolan
Lifetime Achievement: David Attenborough
References
External links
Starmus: Medal Home Page
Guardian Newspaper: 2016 Winners
Astronomy Magazine: Announcement about institution of medal
Science communication awards
Awards established in 2015 | Stephen Hawking Medal for Science Communication | Technology | 849 |
59,734,475 | https://en.wikipedia.org/wiki/Journal%20of%20Astrophysics%20and%20Astronomy | The Journal of Astrophysics and Astronomy is a peer-reviewed scientific journal of astrophysics and astronomy established in 1980. It is co-published bimonthly by Springer India, the Indian Academy of Sciences, and Astronomical Society of India. The journal is edited by Annapurni Subramaniam.
Indexing and abstracting
The journal is abstracted and indexed in a number of bibliographic databases.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.270.
References
External links
Academic journals established in 1980
Bimonthly journals
English-language journals
Astrophysics journals
Astronomy journals
Springer Science+Business Media academic journals | Journal of Astrophysics and Astronomy | Physics,Astronomy | 130 |
24,144,530 | https://en.wikipedia.org/wiki/C22H36O2 |
The molecular formula C22H36O2 (molar mass: 332.52 g/mol, exact mass: 332.2715 u) may refer to:
Docosatetraenoic acid
Ganaxolone
Cannabicyclohexanol
O-1656
Molecular formulas | C22H36O2 | Physics,Chemistry | 76 |
1,798,812 | https://en.wikipedia.org/wiki/LattisNet | LattisNet was a family of computer networking hardware and software products built and sold by SynOptics Communications (also rebranded by Western Digital) during the 1980s. Examples were the 1000, 2500 and 3000 series of LattisHub network hubs.
LattisNet was the first implementation of 10 Megabits per second local area networking over unshielded twisted pair wiring in a star topology.
Ethernet variants
During the early 1980s most networks used coaxial cable as the primary form of premises cabling in Ethernet implementations. In 1985 SynOptics shipped its first hub for fiber optics and shielded twisted pair.
SynOptics' co-founder, Engineer Ronald V. Schmidt, had experimented with a fiber-optic variant of Ethernet called Fibernet II while working at Xerox PARC, where Ethernet had been invented.
In January 1987 SynOptics announced intentions to manufacture equipment supporting 10 megabits/sec data transfer rates over unshielded twisted pair, telephone wire.
In August 1987, the New York-based LAN Systems, Inc. completed equipment testing and praised SynOptics for successfully deploying a 10 Mbit/s network that supported workstations up to 330 feet from the wiring closet, crediting the company's careful control of EMI and RFI.
Novell reported that the LattisNet equipment performed better than RG-58U coaxial cable.
That same year, HP proposed that a study group be formed to look into standardizing Ethernet over telephone wire. SynOptics' investor, Menlo Ventures, explained its position on joining the IEEE standardization effort.
In 1990 the IEEE issued an Ethernet over twisted pair standard known for transmitting 10 Mbit/s, or 10BASE-T (802.3i).
Ethernet compatibility
Of the SynOptics hubs, the 2500 series was only compatible with LattisNet twisted-pair Ethernet; the 1000 and 3000 series featured modules for LattisNet and standard 10BASE-T. In the 1000 series, the 505 modules are LattisNet and the 508 modules are 10BASE-T.
References
External links
(Advertisement)
Ethernet / 10BASE-T / LattisNet discussions
A method to identify the type of 3000-series modules
Networking hardware
Ethernet | LattisNet | Engineering | 451 |
428,937 | https://en.wikipedia.org/wiki/Geode | A geode is a geological secondary formation within sedimentary and volcanic rocks. Geodes are hollow, vaguely spherical rocks, in which masses of mineral matter (which may include crystals) are secluded. The crystals are formed by the filling of vesicles in volcanic and subvolcanic rocks by minerals deposited from hydrothermal fluids; or by the dissolution of syn-genetic concretions and partial filling by the same or other minerals precipitated from water, groundwater, or hydrothermal fluids.
Formation
Geodes can form in any cavity, but the term is usually reserved for more or less rounded formations in igneous and sedimentary rocks. They can form in gas bubbles in igneous rocks, such as vesicles in basaltic lava; or, as in the American Midwest, in rounded cavities in sedimentary formations. After rock around the cavity hardens, dissolved silicates and/or carbonates are deposited on the inside surface. Over time, this slow feed of mineral constituents from groundwater or hydrothermal solutions allows crystals to form inside the hollow chamber. Bedrock containing geodes eventually weathers and decomposes, leaving them present at the surface if they are composed of resistant material such as quartz.
Coloration
Geodes and geode slices are sometimes dyed with artificial colors.
Occurrence
Geodes are found where the geology is suitable with many of the commercially available ones coming from Brazil, Uruguay, Namibia, and Mexico. Large, amethyst-lined geodes are a feature of the basalts of the Paraná and Etendeka traps found in Brazil, Uruguay and Namibia. Geodes are common in some formations in the United States (mainly in Indiana, Iowa, Missouri, western Illinois, Kentucky, and Utah). Geodes are also abundant in the Mendip Hills in Somerset, England, where they are known locally as "potato stones". The term geode generally describes hollow formations. If the rock is completely solid inside, this would be classified as a nodule or thunderegg.
Crystal caves
'Crystal cave' is both an informal term for any large crystal-lined geode and also used for specific geoheritage locations such as the Crystal Cave (Ohio), discovered in 1887 at the Heineman Winery on Put-In-Bay, Ohio, the Cave of the Crystals (Mexico), and the Pulpi Geode, discovered in 1999 in Spain. In 1999, a mineralogist group discovered a cave filled with giant selenite (gypsum) crystals in an abandoned silver mine, Mina Rica, near Pulpi, Province of Almeria, Spain. The cavity, which measured , was, at the time, the largest crystal cave ever found. Following its discovery, the entrance to the cave was blocked by five tons of rock, with an additional police presence to prevent looters. In the summer of 2019 the cave, a significant geotourism resource and now named the 'Geoda de Pulpi', Pulpi Geode, was opened as a tourist attraction, allowing small groups (max. 12 people) to visit the caves with a tour guide.
See also
Bristol Diamonds
Coso artifact
Lithophysa
Septarian nodule
Thunderegg
References
Further reading
Pough, Fredrick H. Rocks and Minerals.
Middleton, Gerard V. (2003). Encyclopedia of Sediments and Sedimentary Rocks. Springer, p. 221.
Keller, Walter David (1961). The Common Rocks and Minerals of Missouri. University of Missouri Press, p. 67.
Witzke, Brian J. Geodes: A Look at Iowa's State Rock. Iowa Geological Survey
Geodes Kentucky Geological Survey (University of Kentucky)
External links
Indiana geode specimens, facts and stories
Video of a geode cracking using industrial soil pipe cutter
Australian Museum Fact sheet
Utah Geode Beds
Rocks
Mineralogy
Petrology | Geode | Physics | 781 |
29,114,360 | https://en.wikipedia.org/wiki/Biofuels%2C%20Bioproducts%20and%20Biorefining | Biofuels, Bioproducts and Biorefining () is a bimonthly peer-reviewed review and commentary journal published by John Wiley & Sons on behalf of the Society of Chemical Industry. The journal was established in 2007 and the editor in chief is Bruce E. Dale. According to the Journal Citation Reports, the journal's 2020 impact factor is 4.102.
References
External links
Wiley (publisher) academic journals
Biotechnology journals
Bimonthly journals
Chemical industry in the United Kingdom
Academic journals established in 2007
English-language journals | Biofuels, Bioproducts and Biorefining | Biology | 110 |
3,115,543 | https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Szekeres%20theorem | In mathematics, the Erdős–Szekeres theorem asserts that, given r, s, any sequence of distinct real numbers with length at least (r − 1)(s − 1) + 1 contains a monotonically increasing subsequence of length r or a monotonically decreasing subsequence of length s. The proof appeared in the same 1935 paper that mentions the Happy Ending problem.
It is a finitary result that makes precise one of the corollaries of Ramsey's theorem. While Ramsey's theorem makes it easy to prove that every infinite sequence of distinct real numbers contains a monotonically increasing infinite subsequence or a monotonically decreasing infinite subsequence, the result proved by Paul Erdős and George Szekeres goes further.
Example
For r = 3 and s = 2, the formula tells us that any permutation of three numbers has an increasing subsequence of length three or a decreasing subsequence of length two. Among the six permutations of the numbers 1,2,3:
1,2,3 has an increasing subsequence consisting of all three numbers
1,3,2 has a decreasing subsequence 3,2
2,1,3 has a decreasing subsequence 2,1
2,3,1 has two decreasing subsequences, 2,1 and 3,1
3,1,2 has two decreasing subsequences, 3,1 and 3,2
3,2,1 has three decreasing length-2 subsequences, 3,2, 3,1, and 2,1.
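This exhaustive check is easy to automate. The following minimal Python sketch (the helper name has_monotone is our own, not from the article) brute-forces monotone subsequences over all six permutations:

```python
from itertools import combinations, permutations

def has_monotone(seq, k, increasing=True):
    """Brute-force: does seq contain a monotone subsequence of length k?"""
    cmp = (lambda a, b: a < b) if increasing else (lambda a, b: a > b)
    return any(all(cmp(c[i], c[i + 1]) for i in range(k - 1))
               for c in combinations(seq, k))

# r = 3, s = 2: any permutation of three distinct numbers has an increasing
# subsequence of length 3 or a decreasing subsequence of length 2.
for p in permutations((1, 2, 3)):
    assert has_monotone(p, 3) or has_monotone(p, 2, increasing=False)
print("verified for all 6 permutations")
```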
Alternative interpretations
Geometric interpretation
One can interpret the positions of the numbers in a sequence as x-coordinates of points in the Euclidean plane, and the numbers themselves as y-coordinates; conversely, for any point set in the plane, the y-coordinates of the points, ordered by their x-coordinates, form a sequence of numbers (unless two of the points have equal x-coordinates). With this translation between sequences and point sets, the Erdős–Szekeres theorem can be interpreted as stating that in any set of at least rs − r − s + 2 points we can find a polygonal path of either r − 1 positive-slope edges or s − 1 negative-slope edges. In particular (taking r = s), in any set of at least n points we can find a polygonal path of at least ⌊√(n − 1)⌋ edges with same-sign slopes. For instance, taking r = s = 5, any set of at least 17 points has a four-edge path in which all slopes have the same sign.
An example of rs − r − s + 1 points without such a path, showing that this bound is tight, can be formed by applying a small rotation to an (r − 1) by (s − 1) grid.
Permutation pattern interpretation
The Erdős–Szekeres theorem may also be interpreted in the language of permutation patterns as stating that every permutation of length at least (r - 1)(s - 1) + 1 must contain either the pattern 12⋯r or the pattern s⋯21.
Proofs
The Erdős–Szekeres theorem can be proved in several different ways; Steele (1995) surveys six different proofs of the Erdős–Szekeres theorem, including the following two.
Other proofs surveyed by Steele include the original proof by Erdős and Szekeres, as well as those of several later authors.
Pigeonhole principle
Given a sequence of length (r − 1)(s − 1) + 1, label each number ni in the sequence with the pair (ai, bi), where ai is the length of the longest monotonically increasing subsequence ending with ni and bi is the length of the longest monotonically decreasing subsequence ending with ni. Each two numbers in the sequence are labeled with a different pair: if i < j and ni < nj, then ai < aj; on the other hand, if ni > nj, then bi < bj. But there are only (r − 1)(s − 1) possible labels if ai is at most r − 1 and bi is at most s − 1, so by the pigeonhole principle there must exist a value of i for which ai or bi is outside this range. If ai is out of range then ni is part of an increasing sequence of length at least r, and if bi is out of range then ni is part of a decreasing sequence of length at least s.
Steele credits this proof to the one-page paper of Seidenberg and calls it "the slickest and most systematic" of the proofs he surveys.
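The labeling argument also yields a direct algorithm. Below is a minimal Python sketch (function names are our own, not from the literature) that computes the labels (ai, bi) by dynamic programming and extracts the guaranteed monotone subsequence:

```python
def monotone_witness(seq, r, s):
    """For distinct numbers with len(seq) >= (r-1)*(s-1) + 1, return an
    increasing subsequence of length r or a decreasing one of length s."""
    n = len(seq)
    inc = [1] * n  # a_i: longest increasing subsequence ending at position i
    dec = [1] * n  # b_i: longest decreasing subsequence ending at position i
    for j in range(n):
        for i in range(j):
            if seq[i] < seq[j]:
                inc[j] = max(inc[j], inc[i] + 1)
            else:
                dec[j] = max(dec[j], dec[i] + 1)

    def backtrack(lengths, j, increasing):
        # Walk backwards, collecting elements whose label decreases by one.
        out, want = [seq[j]], lengths[j] - 1
        for i in range(j - 1, -1, -1):
            if lengths[i] == want and (seq[i] < out[-1]) == increasing:
                out.append(seq[i])
                want -= 1
        return out[::-1]

    # Pigeonhole: some label (a_i, b_i) must leave the (r-1) x (s-1) box.
    for j in range(n):
        if inc[j] >= r:
            return "increasing", backtrack(inc, j, True)
        if dec[j] >= s:
            return "decreasing", backtrack(dec, j, False)

print(monotone_witness([3, 1, 4, 1.5, 5, 9, 2, 6], 4, 3))
# ('increasing', [1, 1.5, 5, 9])
```

The nested loop mirrors the definition of the labels; faster O(n log n) methods exist for the longest increasing subsequence, but are not needed to illustrate the pigeonhole step.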
Dilworth's theorem
Another of the proofs uses Dilworth's theorem on chain decompositions in partial orders, or its simpler dual (Mirsky's theorem).
To prove the theorem, define a partial ordering on the members of the sequence, in which x is less than or equal to y in the partial order if x ≤ y as numbers and x is not later than y in the sequence. A chain in this partial order is a monotonically increasing subsequence, and an antichain is a monotonically decreasing subsequence. By Mirsky's theorem, either there is a chain of length r, or the sequence can be partitioned into at most r − 1 antichains; but in that case the largest of the antichains must form a decreasing subsequence with length at least ⌈((r − 1)(s − 1) + 1)/(r − 1)⌉ = s.
Alternatively, by Dilworth's theorem itself, either there is an antichain of length s, or the sequence can be partitioned into at most s − 1 chains, the longest of which must have length at least r.
Application of the Robinson–Schensted correspondence
The result can also be obtained as a corollary of the Robinson–Schensted correspondence.
Recall that the Robinson–Schensted correspondence associates to each sequence a Young tableau P whose entries are the values of the sequence. The tableau P has the following properties:
The length of the longest increasing subsequence is equal to the length of the first row of P.
The length of the longest decreasing subsequence is equal to the length of the first column of P.
Now, it is not possible to fit (r − 1)(s − 1) + 1 entries in a rectangular box of size (r − 1) × (s − 1), so either the first row of P has length at least r or the first column of P has length at least s.
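A short Python sketch of Schensted row insertion (an assumed implementation, simplified to distinct entries; the function name is our own) makes these two properties concrete, returning the first-row and first-column lengths of P:

```python
from bisect import bisect_left

def schensted_shape(seq):
    """Build the tableau P by Schensted row insertion; return
    (length of first row, length of first column) of its shape."""
    rows = []  # rows[k] is the k-th row of P, kept strictly increasing
    for x in seq:
        for row in rows:
            i = bisect_left(row, x)
            if i == len(row):      # x exceeds the whole row: append it
                row.append(x)
                break
            row[i], x = x, row[i]  # bump the displaced entry to the next row
        else:
            rows.append([x])       # x fell off the bottom: start a new row
    return len(rows[0]), len(rows)

print(schensted_shape([3, 1, 4, 1.5, 5, 9, 2, 6]))
# (4, 2): longest increasing subsequence has length 4,
# longest decreasing subsequence has length 2
```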
See also
Longest increasing subsequence problem
References
External links
Ramsey theory
Permutation patterns
Theorems in discrete geometry
Articles containing proofs
Szekeres theorem
Theorems in discrete mathematics | Erdős–Szekeres theorem | Mathematics | 1,368 |
2,100,174 | https://en.wikipedia.org/wiki/Sleep-talking | Somniloquy, commonly referred to as sleep-talking, is a parasomnia in which one speaks aloud while asleep. It can range from simple mumbling sounds to loud shouts or long, frequently inarticulate, speeches. It can occur many times during a sleep cycle and during both NREM and REM sleep stages, though, as with sleepwalking and night terrors, it most commonly occurs during delta-wave NREM sleep or temporary arousals therefrom.
When somniloquy occurs during rapid eye movement sleep, it represents a so-called "motor breakthrough" of dream speech: words spoken in a dream are spoken out loud. Depending on its frequency, this may or may not be considered pathological. All motor functions are disabled during healthy REM sleep and therefore REM somniloquy is usually considered a component of REM behavior disorder.
Presentation
Associated conditions
Sleep-talking can occur by itself (i.e., idiopathic) or as a feature of another sleep disorder such as:
Rapid eye movement behavior disorder (RBD) – loud, emotional or profane sleep talking
Sleepwalking
Night terrors – intense fear, screaming, shouting
Sleep-related eating disorder (SRED)
Causes
In 1966, researchers worked to find links between heredity and sleep-talking. Their research suggests the following:
Sleep-talking parents are more likely to have children who sleep-talk.
Sleep-talking can still occur, though much less commonly, when neither parent has a history of sleep talking.
A large portion of people begin to sleep-talk later in life without any prior history of sleep-talking during childhood or adolescence.
Sleep-talking by itself is typically harmless; however, it can wake others and cause them consternation—especially when misinterpreted as conscious speech by an observer. If the sleep-talking is dramatic, emotional, or profane it may be a sign of another sleep disorder. Sleep-talking can be monitored by a partner or by using an audio recording device; devices which remain idle until detecting a sound are ideal for this purpose.
Polysomnography (sleep recording) shows that episodes of sleep talking can occur in any stage of sleep.
Stress can also cause sleep-talking. In one study, about 30% of people with PTSD (post-traumatic stress disorder) talked in their sleep. A 1990 study showed that Vietnam War veterans with PTSD reported talking more in their sleep than people without PTSD.
Sleep-talking can also be caused by depression, sleep deprivation, day-time drowsiness, alcohol, and fever. It often occurs in association with other sleep disorders such as confusional arousals, sleep apnea, and REM sleep behavior disorder. In rare cases, adult-onset sleep-talking is linked with a psychiatric disorder or nocturnal seizure.
Prevalence
Sleep-talking is very common and is reported in 50% of young children at least once a year. A large percentage of people progressively sleep-talk less often after the age of 25. A sizable proportion of people without any episode during their childhood begin to sleep-talk in adult life. Sleep-talking may be hereditary.
In a study reporting the prevalence of sleep-talking in childhood, the authors reported that the frequency of sleep-talking differs between children. About half of the children have sleep-talking episodes at least once a year, but less than 10% of children present sleep-talking every night, whereas 20% to 25% talk in their sleep at least once a week. In addition, they did not find any difference between gender or socioeconomic class.
However, valid estimation of the prevalence of this phenomenon is difficult, as sleep-talkers either do not remember or are not aware of their sleep-talking. The same uncertainty exists concerning the age of onset, because early occurrences may have escaped notice. Thus, there are disparate results regarding its prevalence in the literature.
Treatment
Usually, treatment is not required for sleep-talking because it generally does not disturb sleep or cause other problems.
One behavioral treatment has shown results in the past. Le Boeuf (1979) used an automated auditory signal to treat chronic sleep-talking in a person who had talked in his sleep for 6 years. An aversive sound was produced for 5 seconds when he started talking in his sleep. Sleep-talking was rapidly eliminated, and the person demonstrated no adverse effects of treatment.
With little treatment options, there are ways in which one can limit the frequency of sleep talking episodes by focusing on sleep hygiene. Some tips include the following:
Limiting caffeine intake throughout the day
Putting electronics away within an hour before bedtime
Keeping the bedroom at a cool temperature to make it more comfortable
Maintaining a regular bedtime schedule where one wakes up and goes to sleep at the same time everyday
Getting physical exercise for at least an hour everyday
Having a space with limited distractions for sleeping
In literature
Sleep-talking appears in Shakespeare's Macbeth, the famous sleepwalking scene. Lady Macbeth, in a "slumbery agitation", is observed by a gentlewoman and doctor to walk in her sleep and wash her hands, and utter the famous line, "Out, damned spot! out, I say!" (Act 5, Scene 1).
Sleep-talking also appears in The Childhood of King Erik Menved, a 19th-century historical romance by Danish author Bernhard Severin Ingemann, which was translated into English in 1846. In the story, a young girl named Aasé has the prophetic power of speaking the truth in her sleep.
Walt Whitman wrote a now-lost novel based on Ingemann's romance, which he titled The Sleeptalker.
In Lewis Carroll's Alice's Adventures in Wonderland, Chapter VII, the Dormouse talks in his sleep, or at least seems to, and even sings in his sleep.
See also
Dion McGregor, noted 20th-century sleep-talker
References
Further reading
External links
OSF Healthcare
The Sleep Well
Somniloquies in the form of albums and books by Bryan Lewis Saunders
Filmmaker Adam Rosenberg's four-minute film of himself sleep-talking
Sleep disorders
Oral communication
Parasomnias | Sleep-talking | Biology | 1,265 |
14,118,401 | https://en.wikipedia.org/wiki/Oxoeicosanoid | The oxoeicosanoids are nonclassic eicosanoids, derived from arachidonic acid (AA).
For example, Lipoxygenase produces 5-HETE from AA; a dehydrogenase then produces 5-oxo-eicosatetraenoic acid, an oxoeicosanoid, from 5-HETE.
They are similar to the leukotrienes in their actions, but they act via a different receptor.
References
Eicosanoids | Oxoeicosanoid | Chemistry,Biology | 104 |
41,961,028 | https://en.wikipedia.org/wiki/The%20Martian%20%28Weir%20novel%29 | The Martian is a 2011 science fiction debut novel written by Andy Weir. The book was originally self-published on Weir's blog, in a serialized format. In 2014, the book was re-released after Crown Publishing Group purchased the exclusive publishing rights. The story follows an American astronaut, Mark Watney, as he becomes stranded alone on Mars in 2035 and must improvise in order to survive.
A film adaptation, The Martian, directed by Ridley Scott and starring Matt Damon, was released in October 2015.
Plot summary
In the year 2035, the crew of NASA's Ares 3 mission have arrived at Acidalia Planitia for a planned month-long stay on Mars. After only six sols, an intense dust and wind storm threatens to topple their Mars Ascent Vehicle (MAV), which would trap them on the planet. During the hurried evacuation, an antenna tears loose and impales astronaut Mark Watney, a botanist and engineer, also disabling his spacesuit radio. He is flung out of sight by the wind and presumed dead. As the MAV teeters dangerously, mission commander Melissa Lewis has no choice but to take off without completing the search for Watney.
However, Watney is not dead. His injury proves relatively minor, but with no long-range radio, he cannot communicate with anyone. He must rely on his own resourcefulness to survive. He begins a log of his experiences. His philosophy is to "work the problem", solving each challenge in turn as it confronts him. With food a critical, though not immediate, problem, he begins growing potatoes in the crew's Martian habitat, the Hab. He uses an iridium catalyst to separate hydrogen gas from surplus hydrazine fuel, which he then burns to generate water for the plants.
NASA eventually discovers that Watney is alive when satellite images of the landing site show evidence of his activities; NASA personnel begin devising ways to rescue him, but withhold the news of his survival from the rest of the Ares 3 crew, on their way back to Earth aboard the Hermes spacecraft, so as not to distract them.
Watney plans to drive to Schiaparelli crater where the next mission, Ares 4, will land in four years and whose MAV is already pre-positioned. He begins modifying one of the rovers for the journey, adding solar cells and an additional battery. He makes a three-week test drive to recover part of the Pathfinder lander and Sojourner rover and brings it back to the Hab, enabling him to contact NASA. Mitch Henderson, the Ares 3 flight director, convinces NASA Administrator Teddy Sanders to allow him to inform the Ares 3 crew of Watney's survival; they are thrilled, except for Lewis, who is guilt-stricken at leaving him behind.
The canvas at one of the Hab airlocks tears because of Watney's repeated use of the same airlock, which was not designed for frequent and long-term usage. This results in the depressurization of the Hab and nearly kills him. He repairs the Hab, but his plants are dead, threatening him again with eventual starvation. Setting aside safety protocols to comply with time constraints, NASA hastily prepares an uncrewed probe to send Watney supplies, but the rocket disintegrates after liftoff. A deal with the China National Space Administration provides a ready booster — planned for use with the Taiyang Shen, an uncrewed solar probe — to try again. With no time to build a probe with a soft-landing system, NASA is faced with the prospect of building a capsule whose cargo can survive crashing into the Martian surface at high speed.
However, astrodynamicist Rich Purnell devises a "slingshot" trajectory around Earth for a gravity assist that could get Hermes back to Mars on a much-extended mission to save Watney, using the Chinese rocket booster to send a simpler resupply probe to Hermes as it passes Earth. Sanders vetoes the "Rich Purnell Maneuver", as it would entail risking the other crewmembers, but Henderson secretly emails the details to Hermes. All five of Watney's crewmates approve the plan. Once they begin the maneuver, having disabled the remote overrides, NASA has no choice but to support them. The resupply ship docks with Hermes successfully.
Watney resumes modifying the rover because the new rescue plan requires him to lift off from Mars in the Ares 4 MAV. While working on the rover, Watney accidentally shorts out the electronics of Pathfinder, losing the ability to communicate with Earth, except for spelling out Morse code with rocks for a one-way communication.
After Watney leaves for Schiaparelli, NASA discovers that a dust storm is approaching his path, but has no way to warn him. The rover's solar cells will be less and less able to recharge, endangering both the rendezvous and his immediate survival if there is not enough power to run his life-support equipment. While crossing Arabia Terra, Watney becomes aware of the darkening sky and improvises a rough measurement of the storm's shape and direction of movement, enabling him to go around it.
Surviving a rover rollover on his descent into Schiaparelli, Watney reaches the MAV and reestablishes contact with NASA. He receives instructions on the radical modifications necessary to reduce the MAV's weight to enable it to intercept Hermes during its flyby. The modifications include removing the front of the MAV, which Watney has to cover with Hab canvas. After takeoff, the canvas tears, creating extra drag and leaving the MAV too low for the rendezvous.
Lewis hastily improvises a plan to intercept the MAV by firing Hermes attitude thrusters and then blowing a hole in the front airlock with an improvised sugar-and-liquid-oxygen bomb, using the thrust from the escaping air to reduce speed. Beck, the Hermes EVA specialist, uses a Manned Maneuvering Unit, MMU, on a tether to reach Watney and bring him back to Hermes. In a final log entry, Watney expresses his joy at being rescued, reflecting on the human instinct to help those in need.
Main characters
The major characters in the novel are:
Mark Watney – The titular "Martian" and main character; Ares 3 astrobotanist and mechanical engineer.
Melissa Lewis – Commander of Ares 3, United States Navy Submarine Warfare officer, oceanographer, and geologist.
Rick Martinez – Ares 3 pilot.
Beth Johanssen – Ares 3 computer specialist.
Alex Vogel – Ares 3 astrochemist.
Dr. Chris Beck – Ares 3 flight surgeon and EVA specialist.
Dr. Venkat Kapoor – Ares program manager.
Mitch Henderson – Chief of astronaut corps.
Bruce Ng – Director of JPL.
Teddy Sanders – NASA administrator (head of NASA).
Annie Montrose – NASA public relations chief.
Mindy Park – NASA satellite imaging.
Rich Purnell – NASA astrodynamicist.
Publishing history
Andy Weir, the son of a particle physicist and electrical engineer, has a background in computer science. He began writing the book in 2009, researching related material so that it would be as realistic as possible and based on existing technology. Weir had previously used the concept of humans stranded on Mars in his webcomic Casey and Andy. Weir studied orbital mechanics, astronomy, and the history of human spaceflight. He said he knows the exact date of each day in the book. He specifically avoided physically describing the characters when not necessary for the plot.
Having been rebuffed by literary agents when trying to get prior books published, Weir decided to put the book online in serial format one chapter at a time for free at his website. At the request of fans, he made an Amazon Kindle version available at 99 cents (the minimum allowable price he could set). The Kindle edition rose to the top of Amazon's list of best-selling science-fiction titles, selling 35,000 copies in three months, more than had been previously downloaded free. This garnered the attention of publishers: Podium Publishing, an audiobook publisher, signed for the audiobook rights in January 2013. Weir sold the print rights to Crown in March 2013 for over US$100,000.
The book debuted on the New York Times Best Seller list on March 2, 2014, in the hardcover fiction category at twelfth position and remained on this list for four weeks without going above eleventh position. The trade paperback edition of the novel debuted on The New York Times Best Seller list on November 16, 2014, in the paperback trade fiction category at eighth position. It gradually rose to the top position for the week of June 28, 2015, before dropping down to number two for nine weeks, during which it was displaced by E. L. James' Grey, before returning to the top position on September 6, 2015.
The book remained continuously at the number one position for 12 weeks before it was displaced on November 22, 2015, by Nora Roberts' Stars of Fortune for two weeks. The trade paperback returned to the top position for the third and final time on December 6, 2015, for six weeks before it was finally replaced on January 24, 2016. The trade paperback's final appearance on the list occurred on April 24, 2016, 76 weeks after its debut in this category. Overall, the trade paperback edition was on the top of its New York Times best seller category for a total of 19 out of 76 weeks that the edition was listed.
Editions
The Martian was published in print by Crown on February 11, 2014. There are significant textual changes between Weir's original self-published version and the Crown edition: profanity was reduced, spelling and grammatical errors were fixed, there were many minor stylistic changes, scientific errors were corrected, and a 263-word epilogue removed. An audiobook edition, narrated by R. C. Bray and released by Podium Publishing, preceded the print release in March 2013 on Audible.com, and was later followed with an MP3 CD in association with Brilliance Audio. The audiobook was nominated and won an Audie Award (2014) in the Science Fiction category. A Classroom Edition, published by Broadway Books in May 2016, contains educational materials and removes profanity and also made further scientific corrections. Audible released a new audiobook edition, narrated by Wil Wheaton in January 2020, featuring several additional short tie-in stories written by Weir.
Tie-ins
In 2015, Andy Weir wrote a prequel short story to The Martian, titled "Diary of an AssCan".
A German publication of an interview with Andy Weir and survival tips for living on Mars was published in 2017, titled "Der Mars Survival Guide", tying into the novel and movie.
In 2024, Weir released The Martian: Lost Sols to celebrate 10 years since the first publication.
Reception
In a starred review, Publishers Weekly said that "Weir laces the technical details with enough keen wit to satisfy hard science fiction fan and general reader alike." Kirkus Reviews called The Martian "Sharp, funny and thrilling, with just the right amount of geekery". The Wall Street Journal called the book "the best pure sci-fi novel in years". Entertainment Weekly gave the novel a grade of "B", describing it as "an impressively geeky debut novel" but saying Weir "stumbles with his secondary characters".
USA Today rated The Martian three out of four stars, calling it "terrific stuff, a crackling good read" but noting that "Mark's unflappability, perhaps the book's biggest asset, is also its greatest weakness. He's a wiseacre with a tendency to steer well clear of existential matters." Amazing Stories commented, "Andy Weir's The Martian will leave you as breathless as if you'd been dropped on the Martian surface without a suit".
Awards and honors
The Martian has been translated to over 45 languages and some of those translations have won major awards. In 2015, the Japanese translation of the novel won the Seiun Award for Best Translated Long Story, the Hebrew translation won the Geffen Award for Best Translated Science Fiction Novel, and the Spanish translation won the Ignotus Awards for Best Foreign Novel.
At the 2016 Hugo Awards, Weir won the John W. Campbell Award for Best New Writer for The Martian while the screenplay adapted from the novel additionally won Best Dramatic Presentation, Long Form at the same event.
Solanum watneyi, a species of bush tomato from Australia, was named after the fictional botanist. It is a member of the same genus as the potato, Solanum.
Film adaptation
In March 2013, Twentieth Century Fox optioned the film rights, and hired screenwriter Drew Goddard to adapt and direct the film. In May 2014, it was reported that Ridley Scott was in negotiations to direct an adaptation that would star Matt Damon as Mark Watney. On September 3, 2014, Jessica Chastain joined the film as Commander Lewis. The ensemble cast also includes Kristen Wiig, Jeff Daniels, Michael Peña, Kate Mara, Sean Bean, Sebastian Stan, and Chiwetel Ejiofor. The film was released on October 2, 2015, and became the 10th-highest-grossing film of the year. The film was also nominated for almost 200 awards and won 40. The Martian received seven Academy Award nominations in 2016, including Best Picture, Best Adapted Screenplay, and Best Actor, although it did not win any.
In popular culture
On December 5, 2014, the Orion spacecraft took the cover page of The Martian script on the first test flight of the uncrewed Exploration Flight Test 1 (EFT-1). The script was launched atop a Delta IV Heavy on the flight lasting 4 hours and 24 minutes, landing at its target in the Pacific Ocean.
In October 2015, NASA presented a new web tool to follow Watney's trek across Mars, and details of NASA's next steps, as well as a health hazards report, for a real-world human journey to Mars.
See also
No Man Friday
Robinsonade
Robinson Crusoe on Mars
The Moon Is Hell! by John W. Campbell
A Fall of Moondust by Arthur C. Clarke
Notes
References
External links
A map of Mark Watney's travels in The Martian
2011 American novels
Self-published books
2011 science fiction novels
Fiction set in 2035
Novels set in the 2030s
Novels first published in serial form
American science fiction novels
Space exploration novels
American novels adapted into films
Novels first published online
Novels set on Mars
Novels about NASA
Hard science fiction
Novels about survival skills
Fiction about castaways
Science fiction novels adapted into films
2011 debut novels
Works by Andy Weir
Crown Publishing Group books | The Martian (Weir novel) | Astronomy | 3,005 |
61,580 | https://en.wikipedia.org/wiki/Electrical%20resistivity%20and%20conductivity | Electrical resistivity (also called volume resistivity or specific electrical resistance) is a fundamental specific property of a material that measures its electrical resistance or how strongly it resists electric current. A low resistivity indicates a material that readily allows electric current. Resistivity is commonly represented by the Greek letter ρ (rho). The SI unit of electrical resistivity is the ohm-metre (Ω⋅m). For example, if a 1 m solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω, then the resistivity of the material is 1 Ω⋅m.
Electrical conductivity (or specific conductance) is the reciprocal of electrical resistivity. It represents a material's ability to conduct electric current. It is commonly signified by the Greek letter σ (sigma), but κ (kappa) (especially in electrical engineering) and γ (gamma) are sometimes used. The SI unit of electrical conductivity is siemens per metre (S/m). Resistivity and conductivity are intensive properties of materials, giving the opposition of a standard cube of material to current. Electrical resistance and conductance are corresponding extensive properties that give the opposition of a specific object to electric current.
Definition
Ideal case
In an ideal case, cross-section and physical composition of the examined material are uniform across the sample, and the electric field and current density are both parallel and constant everywhere. Many resistors and conductors do in fact have a uniform cross section with a uniform flow of electric current, and are made of a single material, so that this is a good model. When this is the case, the resistance of the conductor is directly proportional to its length and inversely proportional to its cross-sectional area, where the electrical resistivity ρ (rho) is the constant of proportionality. This is written as:

ρ = R A / ℓ

where R is the electrical resistance of a uniform specimen of the material, ℓ is the length of the specimen, and A is the cross-sectional area of the specimen.
The resistivity can be expressed using the SI unit ohm metre (Ω⋅m) — i.e. ohms multiplied by square metres (for the cross-sectional area) then divided by metres (for the length).
Both resistance and resistivity describe how difficult it is to make electrical current flow through a material, but unlike resistance, resistivity is an intrinsic property and does not depend on geometric properties of a material. This means that all pure copper (Cu) wires (which have not been subjected to distortion of their crystalline structure etc.), irrespective of their shape and size, have the same resistivity ρ, but a long, thin copper wire has a much larger resistance R than a thick, short copper wire. Every material has its own characteristic resistivity. For example, rubber has a far larger resistivity than copper.
In a hydraulic analogy, passing current through a high-resistivity material is like pushing water through a pipe full of sand - while passing current through a low-resistivity material is like pushing water through an empty pipe. If the pipes are the same size and shape, the pipe full of sand has higher resistance to flow. Resistance, however, is not determined by the presence or absence of sand. It also depends on the length and width of the pipe: short or wide pipes have lower resistance than narrow or long pipes.
The above equation can be transposed to get Pouillet's law (named after Claude Pouillet):

R = ρ ℓ / A

The resistance of a given element is proportional to the length, but inversely proportional to the cross-sectional area. For example, if ℓ = 1 m and A = 1 m² (forming a cube with perfectly conductive contacts on opposite faces), then the resistance of this element in ohms is numerically equal to the resistivity of the material it is made of in Ω⋅m.
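As a quick numerical illustration of Pouillet's law, here is a minimal Python sketch (the function name is ours, and the copper resistivity 1.68×10⁻⁸ Ω⋅m is a typical handbook value used purely for illustration):

```python
import math

def wire_resistance(resistivity_ohm_m, length_m, diameter_m):
    """Pouillet's law, R = rho * l / A, for a round wire of uniform cross-section."""
    area = math.pi * (diameter_m / 2) ** 2
    return resistivity_ohm_m * length_m / area

# 100 m of 1 mm-diameter copper wire (rho ~ 1.68e-8 ohm-m near 20 degC)
print(f"{wire_resistance(1.68e-8, 100.0, 1e-3):.2f} ohm")  # ~2.14 ohm
```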
Conductivity, σ, is the inverse of resistivity:

σ = 1 / ρ
Conductivity has SI units of siemens per metre (S/m).
General scalar quantities
If the geometry is more complicated, or if the resistivity varies from point to point within the material, the current and electric field will be functions of position. Then it is necessary to use a more general expression in which the resistivity at a particular point is defined as the ratio of the electric field to the density of the current it creates at that point:

ρ = E / J

where ρ is the resistivity of the conductor material, E is the magnitude of the electric field, and J is the magnitude of the current density.
The current density is parallel to the electric field by necessity.
Conductivity is the inverse (reciprocal) of resistivity. Here, it is given by:

σ = 1 / ρ = J / E
For example, rubber is a material with large ρ and small σ — because even a very large electric field in rubber makes almost no current flow through it. On the other hand, copper is a material with small ρ and large σ — because even a small electric field pulls a lot of current through it.
This expression simplifies to the formula given above under "ideal case" when the resistivity is constant in the material and the geometry has a uniform cross-section. In this case, the electric field and current density are constant and parallel.
Derivation of the constant case from the general case

We will combine three equations. Assume the geometry has a uniform cross-section and the resistivity is constant in the material. Then the electric field and current density are constant and parallel, and by the general definition of resistivity, we obtain

E = ρ J

Since the electric field is constant, it is given by the total voltage V across the conductor divided by the length ℓ of the conductor:

E = V / ℓ

Since the current density is constant, it is equal to the total current divided by the cross-sectional area:

J = I / A

Plugging in the values of E and J into the first expression, we obtain:

V / ℓ = ρ I / A

Finally, we apply Ohm's law, V = I R, which gives R = V / I = ρ ℓ / A.
Tensor resistivity
When the resistivity of a material has a directional component, the most general definition of resistivity must be used. It starts from the tensor-vector form of Ohm's law, which relates the electric field inside a material to the electric current flow. This equation is completely general, meaning it is valid in all cases, including those mentioned above. However, this definition is the most complicated, so it is only directly used in anisotropic cases, where the more simple definitions cannot be applied. If the material is not anisotropic, it is safe to ignore the tensor-vector definition, and use a simpler expression instead.
Here, anisotropic means that the material has different properties in different directions. For example, a crystal of graphite consists microscopically of a stack of sheets, and current flows very easily through each sheet, but much less easily from one sheet to the adjacent one. In such cases, the current does not flow in exactly the same direction as the electric field. Thus, the appropriate equations are generalized to the three-dimensional tensor form:
J = σ E and E = ρ J

where the conductivity σ and resistivity ρ are rank-2 tensors, and electric field E and current density J are vectors. These tensors can be represented by 3×3 matrices, the vectors with 3×1 matrices, with matrix multiplication used on the right side of these equations. In matrix form, the resistivity relation is given by:

[E_x]   [ρ_xx ρ_xy ρ_xz] [J_x]
[E_y] = [ρ_yx ρ_yy ρ_yz] [J_y]
[E_z]   [ρ_zx ρ_zy ρ_zz] [J_z]

where E is the electric field vector, ρ is the resistivity matrix, and J is the current density vector.
Equivalently, resistivity can be given in the more compact Einstein notation:

E_i = ρ_ij J_j

In either case, the resulting expression for each electric field component is:

E_x = ρ_xx J_x + ρ_xy J_y + ρ_xz J_z
E_y = ρ_yx J_x + ρ_yy J_y + ρ_yz J_z
E_z = ρ_zx J_x + ρ_zy J_y + ρ_zz J_z

Since the choice of the coordinate system is free, the usual convention is to simplify the expression by choosing an x-axis parallel to the current direction, so J_y = J_z = 0. This leaves:

ρ_xx = E_x / J_x,  ρ_yx = E_y / J_x,  ρ_zx = E_z / J_x

Conductivity is defined similarly:

J = σ E

or

J_i = σ_ij E_j

both resulting in:

J_x = σ_xx E_x + σ_xy E_y + σ_xz E_z
J_y = σ_yx E_x + σ_yy E_y + σ_yz E_z
J_z = σ_zx E_x + σ_zy E_y + σ_zz E_z

Looking at the two expressions, ρ and σ are the matrix inverse of each other. However, in the most general case, the individual matrix elements are not necessarily reciprocals of one another; for example, σ_xx may not be equal to 1/ρ_xx. This can be seen in the Hall effect, where ρ_xy is nonzero. In the Hall effect, due to rotational invariance about the z-axis, ρ_yy = ρ_xx and ρ_yx = −ρ_xy, so the relation between resistivity and conductivity simplifies to:

σ_xx = ρ_xx / (ρ_xx² + ρ_xy²),  σ_xy = −ρ_xy / (ρ_xx² + ρ_xy²)

If the electric field is parallel to the applied current, ρ_yx and ρ_zx are zero. When they are zero, one number, ρ_xx, is enough to describe the electrical resistivity. It is then written as simply ρ, and this reduces to the simpler expression.
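These tensor relations are easy to check numerically. Here is a minimal numpy sketch (the resistivity values are invented for illustration) showing that the conductivity tensor is the matrix inverse of the resistivity tensor, while individual elements are not simple reciprocals:

```python
import numpy as np

# Hypothetical Hall-effect resistivity tensor (ohm-m), with rho_yy = rho_xx
# and rho_yx = -rho_xy as required by rotational invariance about z.
rho_xx, rho_xy = 2.0e-8, 0.5e-8
rho = np.array([[rho_xx,  rho_xy, 0.0],
                [-rho_xy, rho_xx, 0.0],
                [0.0,     0.0,    rho_xx]])

sigma = np.linalg.inv(rho)  # conductivity tensor = matrix inverse of resistivity

print(np.isclose(sigma[0, 0], 1 / rho_xx))                         # False
print(np.isclose(sigma[0, 0], rho_xx / (rho_xx**2 + rho_xy**2)))   # True
print(np.isclose(sigma[0, 1], -rho_xy / (rho_xx**2 + rho_xy**2)))  # True
```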
Conductivity and current carriers
Relation between current density and electric current velocity
Electric current is the ordered movement of electric charges. A current of carriers with number density n, each carrying charge q and moving with drift velocity v_d, produces a current density J = n q v_d.
Causes of conductivity
Band theory simplified
According to elementary quantum mechanics, an electron in an atom or crystal can only have certain precise energy levels; energies between these levels are impossible. When a large number of such allowed levels have close-spaced energy values – i.e. have energies that differ only minutely – those close energy levels in combination are called an "energy band". There can be many such energy bands in a material, depending on the atomic number of the constituent atoms and their distribution within the crystal.
The material's electrons seek to minimize the total energy in the material by settling into low energy states; however, the Pauli exclusion principle means that only one can exist in each such state. So the electrons "fill up" the band structure starting from the bottom. The characteristic energy level up to which the electrons have filled is called the Fermi level. The position of the Fermi level with respect to the band structure is very important for electrical conduction: Only electrons in energy levels near or above the Fermi level are free to move within the broader material structure, since the electrons can easily jump among the partially occupied states in that region. In contrast, the low energy states are completely filled with a fixed limit on the number of electrons at all times, and the high energy states are empty of electrons at all times.
Electric current consists of a flow of electrons. In metals there are many electron energy levels near the Fermi level, so there are many electrons available to move. This is what causes the high electronic conductivity of metals.
An important part of band theory is that there may be forbidden bands of energy: energy intervals that contain no energy levels. In insulators and semiconductors, the number of electrons is just the right amount to fill a certain integer number of low energy bands, exactly to the boundary. In this case, the Fermi level falls within a band gap. Since there are no available states near the Fermi level, and the electrons are not freely movable, the electronic conductivity is very low.
In metals
A metal consists of a lattice of atoms, each with an outer shell of electrons that freely dissociate from their parent atoms and travel through the lattice. This is also known as a positive ionic lattice. This 'sea' of dissociable electrons allows the metal to conduct electric current. When an electrical potential difference (a voltage) is applied across the metal, the resulting electric field causes electrons to drift towards the positive terminal. The actual drift velocity of electrons is typically small, on the order of magnitude of metres per hour. However, due to the sheer number of moving electrons, even a slow drift velocity results in a large current density. The mechanism is similar to transfer of momentum of balls in a Newton's cradle but the rapid propagation of an electric energy along a wire is not due to the mechanical forces, but the propagation of an energy-carrying electromagnetic field guided by the wire.
Most metals have electrical resistance. In simpler (non-quantum-mechanical) models this can be explained by replacing electrons and the crystal lattice with a wave-like structure. When the electron wave travels through the lattice, the waves interfere, which causes resistance. The more regular the lattice is, the less disturbance happens and thus the less resistance. The amount of resistance is thus mainly caused by two factors. First, it is caused by the temperature and thus the amount of vibration of the crystal lattice. Higher temperatures cause bigger vibrations, which act as irregularities in the lattice. Second, the purity of the metal is relevant, as a mixture of different ions is also an irregularity. The small decrease in conductivity on melting of pure metals is due to the loss of long-range crystalline order. The short-range order remains, and strong correlation between positions of ions results in coherence between waves diffracted by adjacent ions.
In semiconductors and insulators
In metals, the Fermi level lies in the conduction band (see Band Theory, above) giving rise to free conduction electrons. However, in semiconductors the position of the Fermi level is within the band gap, about halfway between the conduction band minimum (the bottom of the first band of unfilled electron energy levels) and the valence band maximum (the top of the band below the conduction band, of filled electron energy levels). That applies for intrinsic (undoped) semiconductors. This means that at absolute zero temperature, there would be no free conduction electrons, and the resistance is infinite. However, the resistance decreases as the charge carrier density (i.e., without introducing further complications, the density of electrons) in the conduction band increases. In extrinsic (doped) semiconductors, dopant atoms increase the majority charge carrier concentration by donating electrons to the conduction band or producing holes in the valence band. (A "hole" is a position where an electron is missing; such holes can behave in a similar way to electrons.) For both types of donor or acceptor atoms, increasing dopant density reduces resistance. Hence, highly doped semiconductors behave metallically. At very high temperatures, the contribution of thermally generated carriers dominates over the contribution from dopant atoms, and the resistance decreases exponentially with temperature.
In ionic liquids/electrolytes
In electrolytes, electrical conduction happens not by band electrons or holes, but by full atomic species (ions) traveling, each carrying an electrical charge. The resistivity of ionic solutions (electrolytes) varies tremendously with concentration – while distilled water is almost an insulator, salt water is a reasonable electrical conductor. Conduction in ionic liquids is also controlled by the movement of ions, but here we are talking about molten salts rather than solvated ions. In biological membranes, currents are carried by ionic salts. Small holes in cell membranes, called ion channels, are selective to specific ions and determine the membrane resistance.
The concentration of ions in a liquid (e.g., in an aqueous solution) depends on the degree of dissociation of the dissolved substance, characterized by a dissociation coefficient α, which is the ratio of the concentration of ions N to the concentration of molecules of the dissolved substance N₀:

α = N / N₀

The specific electrical conductivity (σ) of a solution is equal to:

σ = q (b⁺ + b⁻) α N₀

where q is the module of the ion charge, b⁺ and b⁻ are the mobilities of positively and negatively charged ions, N₀ is the concentration of molecules of the dissolved substance, and α is the coefficient of dissociation.
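A minimal Python sketch of this formula, assuming full dissociation (α = 1) and using textbook ionic mobilities for Na⁺ and Cl⁻ (the function name and the worked numbers are illustrative, not from the article):

```python
E_CHARGE = 1.602e-19  # elementary charge, C

def electrolyte_conductivity(q, b_pos, b_neg, alpha, n0):
    """sigma = q * (b+ + b-) * alpha * N0 for a binary electrolyte."""
    return q * (b_pos + b_neg) * alpha * n0

n0 = 0.1 * 6.022e23 * 1e3  # 0.1 mol/L of NaCl, in molecules per m^3
sigma = electrolyte_conductivity(E_CHARGE, 5.19e-8, 7.91e-8, 1.0, n0)
print(f"{sigma:.2f} S/m")  # ~1.26 S/m; measured ~1.07 S/m (ion-ion interactions neglected)
```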
Superconductivity
The electrical resistivity of a metallic conductor decreases gradually as temperature is lowered. In normal (that is, non-superconducting) conductors, such as copper or silver, this decrease is limited by impurities and other defects. Even near absolute zero, a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. In a normal conductor, the current is driven by a voltage gradient, whereas in a superconductor, there is no voltage gradient and the current is instead related to the phase gradient of the superconducting order parameter. A consequence of this is that an electric current flowing in a loop of superconducting wire can persist indefinitely with no power source.
In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but nonzero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen so that the resistance of the material becomes truly zero.
Plasma
Plasmas are very good conductors and electric potentials play an important role.
The potential as it exists on average in the space between charged particles, independent of the question of how it can be measured, is called the plasma potential, or space potential. If an electrode is inserted into a plasma, its potential generally lies considerably below the plasma potential, due to what is termed a Debye sheath. The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of quasineutrality, which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma (n_e ≈ ⟨Z⟩n_i), but on the scale of the Debye length there can be charge imbalance. In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths.
The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation:

n_e ∝ exp(eΦ / k_B T_e)

Differentiating this relation provides a means to calculate the electric field from the density:

E = −∇Φ = −(k_B T_e / e) (∇n_e / n_e)
(∇ is the vector gradient operator; see nabla symbol and gradient for more information.)
It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or the plasma must be very small. Otherwise, the repulsive electrostatic force dissipates it.
In astrophysical plasmas, Debye screening prevents electric fields from directly affecting the plasma over large distances, i.e., greater than the Debye length. However, the existence of charged particles causes the plasma to generate, and be affected by, magnetic fields. This can and does cause extremely complex behavior, such as the generation of plasma double layers, an object that separates charge over a few tens of Debye lengths. The dynamics of plasmas interacting with external and self-generated magnetic fields are studied in the academic discipline of magnetohydrodynamics.
Plasma is often called the fourth state of matter after solid, liquid and gas. It is distinct from these and other lower-energy states of matter. Although it is closely related to the gas phase in that it also has no definite form or volume, it differs in a number of ways.
Resistivity and conductivity of various materials
A conductor such as a metal has high conductivity and a low resistivity.
An insulator such as glass has low conductivity and a high resistivity.
The conductivity of a semiconductor is generally intermediate, but varies widely under different conditions, such as exposure of the material to electric fields or specific frequencies of light, and, most important, with temperature and composition of the semiconductor material.
The degree of semiconductor doping makes a large difference in conductivity. To a point, more doping leads to higher conductivity. The conductivity of a water/aqueous solution is highly dependent on its concentration of dissolved salts, and other chemical species that ionize in the solution. Electrical conductivity of water samples is used as an indicator of how salt-free, ion-free, or impurity-free the sample is; the purer the water, the lower the conductivity (the higher the resistivity). Conductivity measurements in water are often reported as specific conductance, relative to the conductivity of pure water at 25 °C. An EC meter is normally used to measure conductivity in a solution.
The resistivity ρ, conductivity σ, and temperature coefficient of various materials are commonly tabulated at 20 °C (68 °F).
The effective temperature coefficient varies with temperature and purity level of the material. The 20 °C value is only an approximation when used at other temperatures. For example, the coefficient becomes lower at higher temperatures for copper, and the value 0.00427 is commonly specified at 0 °C.
The extremely low resistivity (high conductivity) of silver is characteristic of metals. George Gamow tidily summed up the nature of the metals' dealings with electrons in his popular science book One, Two, Three...Infinity (1947):
More technically, the free electron model gives a basic description of electron flow in metals.
Wood is widely regarded as an extremely good insulator, but its resistivity is sensitively dependent on moisture content, with damp wood being a factor of at least worse insulator than oven-dry. In any case, a sufficiently high voltage – such as that in lightning strikes or some high-tension power lines – can lead to insulation breakdown and electrocution risk even with apparently dry wood.
Temperature dependence
Linear approximation
The electrical resistivity of most materials changes with temperature. If the temperature does not vary too much, a linear approximation is typically used: ρ(T) = ρ₀[1 + α(T − T₀)],
where α is called the temperature coefficient of resistivity, T₀ is a fixed reference temperature (usually room temperature), and ρ₀ is the resistivity at temperature T₀. The parameter α is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, α is different for different reference temperatures. For this reason it is usual to specify the temperature that α was measured at with a suffix (for example, α₁₅ for a measurement at 15 °C), and the relationship only holds in a range of temperatures around the reference. When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used.
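As a concrete illustration, the linear model takes only a line of code. A minimal sketch; the copper values are commonly quoted approximations, not taken from the table mentioned above:

```python
def resistivity_linear(rho0, alpha, T, T0=20.0):
    """Linear approximation: rho(T) = rho0 * (1 + alpha * (T - T0)),
    valid only for temperatures near the reference T0 (degrees Celsius)."""
    return rho0 * (1 + alpha * (T - T0))

# Approximate, commonly quoted values for copper at 20 degC:
rho_20 = 1.68e-8    # ohm-metres
alpha20 = 0.0039    # per degree Celsius
print(resistivity_linear(rho_20, alpha20, 100.0))  # ~2.2e-8 ohm-metres near 100 degC
```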
Metals
In general, electrical resistivity of metals increases with temperature. Electron–phonon interactions can play a key role. At high temperatures, the resistance of a metal increases linearly with temperature. As the temperature of a metal is reduced, the temperature dependence of resistivity follows a power law function of temperature. Mathematically the temperature dependence of the resistivity of a metal can be approximated through the Bloch–Grüneisen formula: ρ(T) = ρ(0) + A (T/Θ_R)^n ∫₀^{Θ_R/T} x^n / ((e^x − 1)(1 − e^(−x))) dx,
where ρ(0) is the residual resistivity due to defect scattering, A is a constant that depends on the velocity of electrons at the Fermi surface, the Debye radius and the number density of electrons in the metal. Θ_R is the Debye temperature as obtained from resistivity measurements and matches very closely with the values of Debye temperature obtained from specific heat measurements. n is an integer that depends upon the nature of interaction:
n = 5 implies that the resistance is due to scattering of electrons by phonons (as it is for simple metals)
n = 3 implies that the resistance is due to s-d electron scattering (as is the case for transition metals)
n = 2 implies that the resistance is due to electron–electron interaction.
The Bloch–Grüneisen formula is an approximation obtained assuming that the studied metal has spherical Fermi surface inscribed within the first Brillouin zone and a Debye phonon spectrum.
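The Bloch–Grüneisen integral has no elementary closed form but is easy to evaluate numerically. A minimal sketch using SciPy quadrature; all parameter values below are purely illustrative, not measured constants:

```python
import numpy as np
from scipy.integrate import quad

def bloch_gruneisen(T, rho0, A, theta_R, n=5):
    """rho(T) = rho0 + A*(T/theta_R)**n * integral_0^{theta_R/T} of
    x**n / ((exp(x) - 1) * (1 - exp(-x))) dx; n = 5 for phonon scattering."""
    integrand = lambda x: x**n / ((np.exp(x) - 1.0) * (1.0 - np.exp(-x)))
    integral, _ = quad(integrand, 0.0, theta_R / T)
    return rho0 + A * (T / theta_R)**n * integral

# Illustrative parameters only (rho0 and A in ohm-metres, theta_R in kelvin):
for T in (50.0, 150.0, 300.0):
    print(T, bloch_gruneisen(T, rho0=1e-10, A=1.5e-8, theta_R=320.0))
```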
If more than one source of scattering is simultaneously present, Matthiessen's rule (first formulated by Augustus Matthiessen in the 1860s) states that the total resistance can be approximated by adding up several different terms, each with the appropriate value of n.
As the temperature of the metal is sufficiently reduced (so as to 'freeze' all the phonons), the resistivity usually reaches a constant value, known as the residual resistivity. This value depends not only on the type of metal, but on its purity and thermal history. The value of the residual resistivity of a metal is decided by its impurity concentration. Some materials lose all electrical resistivity at sufficiently low temperatures, due to an effect known as superconductivity.
An investigation of the low-temperature resistivity of metals was the motivation for Heike Kamerlingh Onnes's experiments that led in 1911 to the discovery of superconductivity. For details see History of superconductivity.
Wiedemann–Franz law
The Wiedemann–Franz law states that for materials where heat and charge transport is dominated by electrons, the ratio of thermal to electrical conductivity is proportional to the temperature: κ/σ = L T, with L = (π²/3)(k/e)²,
where κ is the thermal conductivity, k is the Boltzmann constant, e is the electron charge, T is temperature, and σ is the electric conductivity. The constant L is called the Lorenz number.
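The theoretical (Sommerfeld) value of the Lorenz number follows from fundamental constants alone, and gives a quick consistency check between the two conductivities of a metal. A short sketch; the copper conductivity used is an approximate literature value, not taken from this article:

```python
from scipy.constants import k, e, pi

L = (pi**2 / 3) * (k / e)**2      # Sommerfeld value of the Lorenz number
print(L)                          # ~2.44e-8 W*ohm/K^2

sigma_cu = 5.95e7                 # S/m, approximate for copper near room temperature
print(L * sigma_cu * 295.0)       # predicted thermal conductivity, ~430 W/(m*K)
```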
Semiconductors
In general, intrinsic semiconductor resistivity decreases with increasing temperature. The electrons are bumped to the conduction energy band by thermal energy, where they flow freely, and in doing so leave behind holes in the valence band, which also flow freely. The electric resistance of a typical intrinsic (non-doped) semiconductor decreases exponentially with temperature, following an Arrhenius model: ρ = ρ₀ e^(E_A/(k T)), where E_A is an activation energy and k is the Boltzmann constant.
An even better approximation of the temperature dependence of the resistivity of a semiconductor is given by the Steinhart–Hart equation: 1/T = A + B ln ρ + C (ln ρ)³,
where A, B and C are the so-called Steinhart–Hart coefficients.
This equation is used to calibrate thermistors.
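In practice, calibration means fitting A, B and C to three or more resistance–temperature measurements and then inverting the equation. A minimal sketch; the coefficients below are typical of a 10 kΩ NTC thermistor and are quoted only for illustration:

```python
import math

def steinhart_hart_temperature(R, A, B, C):
    """Invert 1/T = A + B*ln(R) + C*(ln R)**3 to get T in kelvin
    from a measured thermistor resistance R in ohms."""
    lnR = math.log(R)
    return 1.0 / (A + B * lnR + C * lnR**3)

# Coefficients of this magnitude are typical of a 10 kOhm NTC thermistor;
# real values come from fitting calibration points.
A, B, C = 1.125e-3, 2.347e-4, 8.566e-8
print(steinhart_hart_temperature(10_000.0, A, B, C) - 273.15)  # ~25 degC
```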
Extrinsic (doped) semiconductors have a far more complicated temperature profile. As temperature increases starting from absolute zero they first decrease steeply in resistance as the carriers leave the donors or acceptors. After most of the donors or acceptors have lost their carriers, the resistance starts to increase again slightly due to the reducing mobility of carriers (much as in a metal). At higher temperatures, they behave like intrinsic semiconductors as the carriers from the donors/acceptors become insignificant compared to the thermally generated carriers.
In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to another. This is known as variable range hopping and has the characteristic form of ρ = ρ₀ exp((T₀/T)^(1/n)),
where n = 2, 3, 4, depending on the dimensionality of the system.
Kondo insulators
Kondo insulators are materials where the resistivity follows the formula ρ(T) = ρ₀ + aT² + bT⁵ + c ln(μ/T),
where ρ₀, a, b and c are constant parameters: ρ₀ is the residual resistivity, aT² the Fermi liquid contribution, bT⁵ a lattice vibrations term and c ln(μ/T) the Kondo effect.
Complex resistivity and conductivity
When analyzing the response of materials to alternating electric fields (dielectric spectroscopy), in applications such as electrical impedance tomography, it is convenient to replace resistivity with a complex quantity called impedivity (in analogy to electrical impedance). Impedivity is the sum of a real component, the resistivity, and an imaginary component, the reactivity (in analogy to reactance). The magnitude of impedivity is the square root of the sum of the squares of the magnitudes of resistivity and reactivity.
Conversely, in such cases the conductivity must be expressed as a complex number (or even as a matrix of complex numbers, in the case of anisotropic materials) called the admittivity. Admittivity is the sum of a real component called the conductivity and an imaginary component called the susceptivity.
An alternative description of the response to alternating currents uses a real (but frequency-dependent) conductivity, along with a real permittivity. The larger the conductivity is, the more quickly the alternating-current signal is absorbed by the material (i.e., the more opaque the material is). For details, see Mathematical descriptions of opacity.
Resistance versus resistivity in complicated geometries
Even if the material's resistivity is known, calculating the resistance of something made from it may, in some cases, be much more complicated than the formula above. One example is spreading resistance profiling, where the material is inhomogeneous (different resistivity in different places), and the exact paths of current flow are not obvious.
In cases like this, scalar formulas such as R = ρℓ/A
must be replaced with the local, vector relation E = ρJ,
where E and J are now vector fields. This equation, along with the continuity equation for J and the Poisson's equation for E, form a set of partial differential equations. In special cases, an exact or approximate solution to these equations can be worked out by hand, but for very accurate answers in complex cases, computer methods like finite element analysis may be required.
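To illustrate what solving the vector form involves, the sketch below relaxes div(σ∇V) = 0 on a small 2D grid with fixed potentials on two opposite edges and reads off an effective resistance. This is a toy finite-difference scheme, not a production finite-element solver; the geometry and conductivity map are invented:

```python
import numpy as np

# div(sigma * grad V) = 0 on a unit square of unit thickness:
# V = 1 V on the left edge, V = 0 V on the right, insulating top and bottom.
n = 60
sigma = np.ones((n, n))            # conductivity map, S/m
sigma[20:40, 20:40] = 0.05         # a poorly conducting square inclusion

# Harmonic-mean conductivities on faces between neighbouring cells
sE = 2 * sigma[:, 1:] * sigma[:, :-1] / (sigma[:, 1:] + sigma[:, :-1])
sN = 2 * sigma[1:, :] * sigma[:-1, :] / (sigma[1:, :] + sigma[:-1, :])

V = np.tile(np.linspace(1.0, 0.0, n), (n, 1))    # initial guess: linear ramp
for _ in range(20_000):                          # Jacobi relaxation (slow but simple)
    num = np.zeros_like(V)
    den = np.zeros_like(V)
    num[:, :-1] += sE * V[:, 1:];  den[:, :-1] += sE
    num[:, 1:]  += sE * V[:, :-1]; den[:, 1:]  += sE
    num[:-1, :] += sN * V[1:, :];  den[:-1, :] += sN
    num[1:, :]  += sN * V[:-1, :]; den[1:, :]  += sN
    V[:, 1:-1] = (num / den)[:, 1:-1]            # update interior; edges stay fixed

# Total current entering from the left electrode; the grid spacing cancels out
I = np.sum(sE[:, 0] * (V[:, 0] - V[:, 1]))
print("Effective resistance ~", 1.0 / I, "ohm")  # R = (1 V) / I
```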
Resistivity-density product
In some applications where the weight of an item is very important, the product of resistivity and density is more important than absolute low resistivity – it is often possible to make the conductor thicker to make up for a higher resistivity; and then a low-resistivity-density-product material (or equivalently a high conductivity-to-density ratio) is desirable. For example, for long-distance overhead power lines, aluminium is frequently used rather than copper (Cu) because it is lighter for the same conductance.
Silver, although it is the least resistive metal known, has a high density and performs similarly to copper by this measure, but is much more expensive. Calcium and the alkali metals have the best resistivity-density products, but are rarely used for conductors due to their high reactivity with water and oxygen (and lack of physical strength). Aluminium is far more stable. Toxicity excludes the choice of beryllium. (Pure beryllium is also brittle.) Thus, aluminium is usually the metal of choice when the weight or cost of a conductor is the driving consideration.
History
John Walsh and the conductivity of a vacuum
In a 1774 letter to Dutch-born British scientist Jan Ingenhousz, Benjamin Franklin relates an experiment by another British scientist, John Walsh, that purportedly showed this astonishing fact: Although rarified air conducts electricity better than common air, a vacuum does not conduct electricity at all.
However, to this statement a note (based on modern knowledge) was added by the editors—at the American Philosophical Society and Yale University—of the webpage hosting the letter:
See also
Charge transport mechanisms
Chemiresistor
Classification of materials based on permittivity
Conductivity near the percolation threshold
Contact resistance
Electrical resistivities of the elements (data page)
Electrical resistivity tomography
Sheet resistance
SI electromagnetism units
Skin effect
Spitzer resistivity
Dielectric strength
Notes
References
Further reading
Measuring Electrical Resistivity and Conductivity
External links
Comparison of the electrical conductivity of various elements in WolframAlpha
https://edu-physics.com/2021/01/07/resistivity-of-the-material-of-a-wire-physics-practical/
Physical quantities
Materials science | Electrical resistivity and conductivity | Physics,Materials_science,Mathematics,Engineering | 6,199 |
38,894,091 | https://en.wikipedia.org/wiki/Tricholoma%20ustale | Tricholoma ustale, commonly known as the burnt knight, is a species of mushroom in the large genus Tricholoma. It is found in Asia, Europe, and North America, though those from North America may represent one or more different species.
Taxonomy
Elias Magnus Fries described the fungus in 1818 as Agaricus ustalis. Paul Kummer gave it its current name in 1871 upon transferring it to the genus Tricholoma. Lucien Quélet's Gyrodon ustale, published in 1886, is a synonym. Marcel Bon described the variety rufoaurantiacum from France in 1984. Within the genus Tricholoma, T. ustale is classified in the section Albobrunnea of the subgenus Tricholoma.
The species name is from the Latin ustalis "burnt" and relates to the colour of the mushroom. It is commonly known as the "burnt knight". In Japan, the mushroom is known as Kakishimeji (Kaki-shimeji).
Description
The mushroom has a bell-shaped to conical or convex cap that measures in diameter and is orange-red-brown. The cap margin is initially curled inward, but straightens in age as the edge becomes lobed and wavy. The gills are somewhat crowded together and have an adnate to emarginate attachment to the stem. They are cream to pale yellow when young, aging to pale brown with brown spots. The cylindrical stem, which measures long by thick, is somewhat thicker at the base. The flesh is white but turns brown where it is bruised or otherwise injured. The roughly spherical to ellipsoid spores are typically 6.0–7.5 by 5.0–6.0 μm, and feature a hilum.
Tricholoma ezcarayense, described from Spain in 1992, is similar in appearance to T. ustale, and also grows in association with beech. It can be distinguished in the field by its less robust stature, the minute, flat scales on the cap, and the green tints present in the reddish-brown colour of the cap. It can be more reliably distinguished by microscopic characteristics, as the hyphae in its cap cuticle have abundant clamp connections, unlike T. ustale.
Toxicity
Tricholoma ustale is one of the three species most commonly implicated in mushroom poisoning in Japan (the other two are Omphalotus japonicus and Entoloma rhodopolium). Consumption of the mushroom causes gastrointestinal distress, including symptoms such as vomiting and diarrhoea. Chemical analysis of Japanese populations has revealed the toxic principles ustalic acid and several related compounds. Force-fed to mice, ustalic acid causes them to sit still in a crouched position, hesitant to move, and induces tremors and abdominal contractions. High enough concentrations of the toxin (10 milligrams per mouse) cause death. Ustalic acid, an inhibitor of the sodium-potassium pump (Na+/K+-ATPase) found in the plasma membrane of all animal cells, has been chemically synthesized. The toxicity of North American populations is unknown.
Habitat and distribution
Tricholoma ustale is an ectomycorrhizal species, and grows in association with beech. In England, it can be locally common in the southern counties.
See also
List of North American Tricholoma
List of Tricholoma species
References
ustale
Fungi described in 1818
Fungi of Asia
Fungi of Europe
Fungi of North America
Poisonous fungi
Taxa named by Elias Magnus Fries
Fungus species | Tricholoma ustale | Biology,Environmental_science | 739 |
26,174,708 | https://en.wikipedia.org/wiki/Vibrating%20feeder | A vibratory feeder is an instrument that uses vibration to "feed" material to a process or machine. Vibratory feeders use both vibration and gravity to move material. Gravity is used to determine the direction, either down, or down and to a side, and then vibration is used to move the material. They are mainly used to transport a large number of smaller objects.
A beltweigher is used only to measure the material flow rate, whereas a weigh feeder can both measure the flow rate and control or regulate it by varying the belt conveyor speed.
Industries Served
Versatile and rugged vibratory bowl feeders have long been used for automatic feeding of small to large and differently shaped industrial parts. They are the oldest but still most commonly used automation machines available for aligning and feeding machine parts, electronic parts, plastic parts, chemicals, metallic parts, glass vials, pharmaceuticals, foods and miscellaneous goods.
Available in standard and custom designs, vibratory bowl feeders are widely used across industrial sectors to automate high-speed production lines and assembly systems. Some of the industries served include:
Pharmaceutical
Automotive
Electronic
Food Processing
Fast Moving Consumable Goods (FMCG)
Packaging
Metal working
Glass
Foundry
Steel
Construction
Recycling
Pulp and paper
Plastics
Uphill, (also known as salmon tables)
These part-feeding machines can reduce error rates, power consumption and dependence on manual labour while improving efficiency and profitability for users across varied industrial sectors.
See also
Bowl feeders
Industrial machinery | Vibrating feeder | Engineering | 323 |
2,097,931 | https://en.wikipedia.org/wiki/Thrombospondin | Thrombospondins (TSPs) are a family of secreted glycoproteins with antiangiogenic functions. Due to their dynamic role within the extracellular matrix they are considered matricellular proteins. The first member of the family, thrombospondin 1 (THBS1), was discovered in 1971 by Nancy L. Baenziger.
Types
The thrombospondins are a family of multifunctional proteins. The family consists of thrombospondins 1-5 and can be divided into 2 subgroups: A, which contains TSP-1 and TSP-2, and B, which contains TSP-3, TSP-4 and TSP-5 (also designated cartilage oligomeric protein or COMP). TSP-1 and TSP-2 are homotrimers, consisting of three identical subunits, whereas TSP-3, TSP-4 and TSP-5 are homopentamers.
TSP-1 and TSP-2 are produced by immature astrocytes during brain development, which promotes the development of new synapses.
Thrombospondin 1
Thrombospondin 1 (TSP-1) is encoded by THBS1. It was first isolated from platelets that had been stimulated with thrombin, and so was designated 'thrombin-sensitive protein'. Since its first recognition, functions for TSP-1 have been found in multiple biological processes including angiogenesis, apoptosis, activation of TGF-beta and Immune regulation. As such, TSP-1 is designated a multifunctional protein.
TSP-1 has multiple receptors, among which CD36, CD47 and integrins are of particular note.
TSP-1 is antiangiogenic, inhibiting the proliferation and migration of endothelial cells by interactions with CD36 expressed on the surface of these cells. Inhibitory peptides and fragments of TSP1 bind to CD36, leading to the expression of FAS ligand (FasL), which activates its specific, ubiquitous receptor, Fas. This leads to the activation of caspases and apoptosis of the cell. Since tumors overexpressing TSP-1 typically grow more slowly, exhibit less angiogenesis, and have fewer metastases, TSP1 is an attractive target for cancer treatment. Because TSP1 is extremely large (~ monomer), not very abundant and exerts multiple actions, its clinical usefulness is questionable. However, small molecules based on a CD36-binding peptide sequence from TSP1 are being tested. One analog, ABT-510, exhibits potent proapoptotic activity in cultured cells, while clinically it is very well tolerated with therapeutic benefits reported against several malignancies. In 2005, ABT-510 was evaluated in phase II clinical trials for the treatment of several types of cancer.
Human proteins containing this domain
ADAMTS1; ADAMTS10; ADAMTS12; ADAMTS13; ADAMTS14; ADAMTS15; ADAMTS16; ADAMTS17;
ADAMTS18; ADAMTS19; ADAMTS2; ADAMTS20; ADAMTS3; ADAMTS4; ADAMTS5; ADAMTS6;
ADAMTS7; ADAMTS8; ADAMTS9; ADAMTSL1; ADAMTSL2; ADAMTSL3; ADAMTSL4; ADAMTSL5;
BAI1; BAI2; BAI3; C6; C7; C8A; C8B; C9;
C9orf8; C9orf94; CFP; CILP; CILP2; CTGF; CYR61; HMCN1;
LIBC; NOV; PAPLN; RSPO1; RSPO3; SEMA5A; SEMA5B; SPON1;
SPON2; SSPO; THBS1; THBS2; THSD1; THSD3; THSD7A; THSD7B;
UNC5A; UNC5B; UNC5C; UNC5D; WISP1; WISP2; WISP3;
References
External links
Protein domains | Thrombospondin | Biology | 895 |
4,051,863 | https://en.wikipedia.org/wiki/Influenza%20A%20virus%20subtype%20H6N2 | H6N2 is an avian influenza virus with two forms: one has a low and the other a high pathogenicity. It can cause a serious problem for poultry, and also infects ducks as well. H6N2 subtype is considered to be a non-pathogenic chicken virus, the host still unknown, but could strain from feral animals, and/or aquatic bird reservoirs. H6N2 along with H6N6 are viruses that are found to replicate in mice without preadaptation, and some have acquired the ability to bind to human-like receptors. Genetic markers for H6N2 include 22-amino acid stalk deletion in neuraminidase (NA) protein gene, increased N-glycosylation, and a D144 mutation of the Haemagglutinin (HA) protein gene. Transmission of avian influenza viruses from wild aquatic birds to domestic birds usually cause subclinical infections, and occasionally, respiratory disease and drops in egg production. Some histological features presented in chicken infected with H6N2 are fibrinous yolk peritonitis, salpingitis, oophoritis, nephritis, along with swollen kidneys as well.
Signs and symptoms
sneezing and lacrimation
prostration
anorexia and fever
sometimes swelling of the infraorbital sinuses with nasal mucous
References
Avian influenza
H6N2 | Influenza A virus subtype H6N2 | Biology | 300 |
15,581,094 | https://en.wikipedia.org/wiki/Numerical%20range | In the mathematical field of linear algebra and convex analysis, the numerical range or field of values of a complex matrix A is the set
where denotes the conjugate transpose of the vector . The numerical range includes, in particular, the diagonal entries of the matrix (obtained by choosing x equal to the unit vectors along the coordinate axes) and the eigenvalues of the matrix (obtained by choosing x equal to the eigenvectors).
In engineering, numerical ranges are used as a rough estimate of eigenvalues of A. Recently, generalizations of the numerical range are used to study quantum computing.
A related concept is the numerical radius, which is the largest absolute value of the numbers in the numerical range, i.e. r(A) = sup { |x*Ax| : ‖x‖ = 1 }.
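Numerically, the boundary of W(A) is usually traced by rotation: for each angle θ, the largest eigenvalue of the Hermitian part of e^(−iθ)A gives the support line of W(A) in that direction, and the corresponding eigenvector yields a boundary point (this relies on the convexity stated below). A minimal NumPy sketch with an arbitrary example matrix:

```python
import numpy as np

def numerical_range_boundary(A, num_angles=360):
    """Approximate boundary points of W(A) by the rotation method."""
    pts = []
    for t in np.linspace(0.0, 2.0 * np.pi, num_angles, endpoint=False):
        R = np.exp(-1j * t) * A
        H = (R + R.conj().T) / 2          # Hermitian part of the rotated matrix
        w, V = np.linalg.eigh(H)          # eigenvalues in ascending order
        x = V[:, -1]                      # unit eigenvector of the largest one
        pts.append(x.conj() @ A @ x)      # boundary point x* A x
    return np.array(pts)

def numerical_radius(A, num_angles=360):
    return np.abs(numerical_range_boundary(A, num_angles)).max()

A = np.array([[1.0, 1.0], [0.0, 1.0j]])
print(numerical_radius(A))
```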
Properties
Below, the sum of two sets denotes the sumset { a + b : a ∈ A, b ∈ B }.
General properties
The numerical range is the range of the Rayleigh quotient.
(Hausdorff–Toeplitz theorem) The numerical range is convex and compact.
W(αA + βI) = αW(A) + β for any square matrix A and complex numbers α and β. Here I is the identity matrix.
W(A) is a subset of the closed right half-plane if and only if A + A* is positive semidefinite.
The numerical range is the only function on the set of square matrices that satisfies (2), (3) and (4).
W(U*AU) = W(A) for any unitary U.
If A is Hermitian, then W(A) lies on the real line. If A is anti-Hermitian, then W(A) lies on the imaginary line.
W(A) = {α} if and only if A = αI.
(Sub-additive) W(A + B) ⊆ W(A) + W(B).
W(A) contains all the eigenvalues of A.
The numerical range of a 2×2 matrix is a filled ellipse.
W(A) is a real line segment [α, β] if and only if A is a Hermitian matrix with its smallest and largest eigenvalues being α and β.
Normal matrices
If A is normal and x = Σᵢ θᵢxᵢ, where xᵢ are eigenvectors of A corresponding to eigenvalues λᵢ, respectively, then x*Ax = Σᵢ |θᵢ|² λᵢ.
If A is a normal matrix then W(A) is the convex hull of its eigenvalues.
If λ is a sharp point on the boundary of W(A), then λ is a normal eigenvalue of A.
Numerical radius
The numerical radius r(⋅) is a unitarily invariant norm on the space of matrices.
½‖A‖ ≤ r(A) ≤ ‖A‖, where ‖⋅‖ denotes the operator norm.
r(A) = ‖A‖ if (but not only if) A is normal.
r(Aⁿ) ≤ r(A)ⁿ (the power inequality).
Proofs
Most of the claims are obvious. Some are not.
General properties
Normal matrices
Numerical radius
Generalisations
C-numerical range
Higher-rank numerical range
Joint numerical range
Product numerical range
Polynomial numerical hull
See also
Spectral theory
Rayleigh quotient
Workshop on Numerical Ranges and Numerical Radii
Bibliography
References
Matrix theory
Spectral theory
Operator theory
Linear algebra | Numerical range | Mathematics | 519 |
50,255,861 | https://en.wikipedia.org/wiki/Retention%20schedule | A retention schedule is a listing of organizational information types, or series of information in a manner which facilitates the understanding and application of the identified and approved retention period, and other information retention aspects.
Purpose
Retention schedules are an important aspect of records management. Many organizations are subject to rules and regulations (at the local, state or federal level) that govern for how long they are required to keep records before they can safely dispose of them. Holding onto records for longer than required can expose the organization to unnecessary liability, since such records are discoverable during lawsuits.
Basic information
Record/series title (name)
Description of information within record/series
Approved retention period
Appropriate security requirements
Appropriate destruction method
Further items for schedule consideration
Location of retention
Date record type/series approved
Responsible group/office/person(s) of record
Remarks related to record/series
Series number applied to a specific record/series
See also
Records management is the process of ensuring that records, in whatever form, are maintained and managed economically, effectively and efficiently throughout their life cycle in the organization.
Information governance is the protection of records from access by individuals who are not authorized to access them.
References
External links
ARMA – How do I build a Retention Schedule?
Public records
Records management
Information management
Information governance | Retention schedule | Technology | 251 |
19,823 | https://en.wikipedia.org/wiki/Maya%20numerals | The Mayan numeral system was the system to represent numbers and calendar dates in the Maya civilization. It was a vigesimal (base-20) positional numeral system. The numerals are made up of three symbols: zero (a shell), one (a dot) and five (a bar). For example, thirteen is written as three dots in a horizontal row above two horizontal bars; sometimes it is also written as three vertical dots to the left of two vertical bars. With these three symbols, each of the twenty vigesimal digits could be written.
Numbers after 19 were written vertically in powers of twenty. The Maya used powers of twenty, just as the Hindu–Arabic numeral system uses powers of ten.
For example, thirty-three would be written as one dot, above three dots atop two bars. The first dot represents "one twenty" or "1×20", which is added to three dots and two bars, or thirteen. Therefore, (1×20) + 13 = 33.
{| class="mw-collapsible mw-collapsed" style="text-align:center;"
|+Addition (single)
|- style="font-size: 150%;"
| (1×20)
| +
| 13
| =
| 33
|-
|
|
|
|
|
|}
Upon reaching 20² or 400, another row is started (20³ or 8000, then 20⁴ or 160,000, and so on). The number 429 would be written as one dot above one dot above four dots and a bar, or (1×20²) + (1×20¹) + 9 = 429.
{| class="mw-collapsible mw-collapsed" style="text-align:center;"
|+Addition (multiple)
|- style="font-size: 150%;"
| (1×202)
| +
| (1×201)
| +
| 9
| =
| 429
|-
|
|
|
|
|
|
|
|}
Other than the bar and dot notation, Maya numerals were sometimes illustrated by face type glyphs or pictures. The face glyph for a number represents the deity associated with the number. These face number glyphs were rarely used, and are mostly seen on some of the most elaborate monumental carvings.
There are different representations of zero in the Dresden Codex, as can be seen at page 43b (which is concerned with the synodic cycle of Mars). It has been suggested that these pointed, oblong "bread" representations are calligraphic variants of the PET logogram, approximately meaning "circular" or "rounded", and perhaps the basis of a derived noun meaning "totality" or "grouping", such that the representations may be an appropriate marker for a number position which has reached its totality.
Addition and subtraction
Adding and subtracting numbers below 20 using Mayan numerals is very simple.
Addition is performed by combining the numeric symbols at each level:
If five or more dots result from the combination, five dots are removed and replaced by a bar. If four or more bars result, four bars are removed and a dot is added to the next higher row. This also means that the value of 1 bar is 5.
Similarly with subtraction, remove the elements of the subtrahend symbol from the minuend symbol:
If there are not enough dots in a minuend position, a bar is replaced by five dots. If there are not enough bars, a dot is removed from the next higher minuend symbol in the column and four bars are added to the minuend symbol which is being worked on.
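These digit-by-digit rules are easy to mechanize. The sketch below (the helper names are invented for illustration) converts a decimal integer into pure base-20 Maya digits, renders each digit with dots and bars, and adds two numbers with the carrying rule just described; it ignores the modified 18×20 calendar system covered in the next section:

```python
def to_maya_digits(n):
    """Decimal integer -> list of pure base-20 digits, most significant first."""
    digits = []
    while n > 0:
        digits.append(n % 20)
        n //= 20
    return digits[::-1] or [0]

def render_digit(d):
    """One vigesimal digit as dots (1 each) and bars (5 each); 0 is a shell."""
    if d == 0:
        return "(shell)"
    bars, dots = divmod(d, 5)
    return ("." * dots + " " + "-" * bars).strip()

def maya_add(a, b):
    """Digit-wise addition with base-20 carries: a full row of 20 becomes
    one dot in the next row up."""
    da, db = to_maya_digits(a)[::-1], to_maya_digits(b)[::-1]
    out, carry = [], 0
    for i in range(max(len(da), len(db))):
        s = (da[i] if i < len(da) else 0) + (db[i] if i < len(db) else 0) + carry
        carry, digit = divmod(s, 20)
        out.append(digit)
    if carry:
        out.append(carry)
    return out[::-1]

print([render_digit(d) for d in to_maya_digits(429)])  # ['.', '.', '.... -']
print(maya_add(13, 20))                                # [1, 13], i.e. 33
```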
Modified vigesimal system in the Maya calendar
The "Long Count" portion of the Maya calendar uses a variation on the strictly vigesimal numerals to show a Long Count date. In the second position, only the digits up to 17 are used, and the place value of the third position is not 20×20 = 400, as would otherwise be expected, but 18×20 = 360 so that one dot over two zeros signifies 360. Presumably, this is because 360 is roughly the number of days in a year. (The Maya had however a quite accurate estimation of 365.2422 days for the solar year at least since the early Classic era.) Subsequent positions use all twenty digits and the place values continue as 18×20×20 = 7,200 and 18×20×20×20 = 144,000, etc.
Every known example of large numbers in the Maya system uses this 'modified vigesimal' system, with the third position representing multiples of 18×20. It is reasonable to assume, but not proven by any evidence, that the normal system in use was a pure base-20 system.
Origins
Several Mesoamerican cultures used similar numerals and base-twenty systems and the Mesoamerican Long Count calendar requiring the use of zero as a place-holder. The earliest long count date (on Stela 2 at Chiapa de Corzo, Chiapas) is from 36 BC.
Since the eight earliest Long Count dates appear outside the Maya homeland, it is assumed that the use of zero and the Long Count calendar predated the Maya, and was possibly the invention of the Olmec. Indeed, many of the earliest Long Count dates were found within the Olmec heartland. However, the Olmec civilization had come to an end by the 4th century BC, several centuries before the earliest known Long Count dates—which suggests that zero was not an Olmec discovery.
Unicode
Mayan numeral codes in Unicode comprise the block U+1D2E0 to U+1D2F3.
See also
Kaktovik numerals, a similar system from another culture, created in the late 20th century.
Notes
References
Further reading
Davidson, Luis J. "The Maya Numerals." Mathematics in School, vol. 3, no. 4, 1974, p. 7.
External links
Maya numerals converter - online converter from decimal numeration to Maya numeral notation.
Anthropomorphic Maya numbers - online story of number representations.
BabelStone Mayan Numerals - free font for Unicode Mayan numeral characters.
Numerals
Mathematics of ancient history
Numerals
Numeral systems
Maya script
Vigesimal numeral systems | Maya numerals | Mathematics | 1,331 |
4,329,882 | https://en.wikipedia.org/wiki/Ammonia%20borane | Ammonia borane (also systematically named ammoniotrihydroborate), also called borazane, is the chemical compound with the formula . The colourless or white solid is the simplest molecular boron-nitrogen-hydride compound. It has attracted attention as a source for hydrogen fuel, but is otherwise primarily of academic interest.
Synthesis
Reaction of diborane with ammonia mainly gives the diammoniate salt [H2B(NH3)2][BH4] (diammoniodihydroboronium tetrahydroborate). Ammonia borane is the main product when an adduct of borane is employed in place of diborane:
BH3·THF + NH3 → H3N·BH3 + THF
It can also be synthesized from sodium borohydride.
Properties and structure
The molecule adopts a structure similar to that of ethane, with which it is isoelectronic. The B−N distance is 1.58(2) Å. The B−H and N−H distances are 1.15 and 0.96 Å, respectively. Its similarity to ethane is tenuous since ammonia borane is a solid and ethane is a gas: their melting points differing by 284 °C. This difference is consistent with the highly polar nature of ammonia borane. The H atoms attached to boron are hydridic (negatively charged) and those attached to nitrogen are acidic (positively charged).
The structure of the solid indicates a close association of the NH and the BH centers. The closest H−H distance is 1.990 Å, which can be compared with the H−H bonding distance of 0.74 Å. This interaction is called a dihydrogen bond. The original crystallographic analysis of this compound reversed the assignments of B and N. The updated structure was arrived at with improved data using the technique of neutron diffraction that allowed the hydrogen atoms to be located with greater precision.
Uses
Ammonia borane has been suggested as a storage medium for hydrogen, e.g. for when the gas is used to fuel motor vehicles. It can be made to release hydrogen on heating, being polymerized first to (NH2BH2)n, then to (NHBH)n, which ultimately decomposes to boron nitride (BN) at temperatures above 1000 °C. It is more hydrogen-dense than liquid hydrogen and also able to exist at normal temperatures and pressures.
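The attraction as a storage medium is easy to see from atomic masses alone. A quick back-of-the-envelope check of the gravimetric hydrogen fraction, using standard (rounded) atomic weights:

```python
# Back-of-the-envelope hydrogen content of ammonia borane, H3N-BH3
M_H, M_B, M_N = 1.008, 10.811, 14.007      # standard atomic weights, g/mol
M_AB = 6 * M_H + M_B + M_N                 # ~30.9 g/mol for H3N-BH3
print(6 * M_H / M_AB)                      # ~0.196, i.e. about 19.6 wt% hydrogen
```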
Ammonia borane finds some use in organic synthesis as an air-stable derivative of diborane. It can be used as a reducing agent in transfer hydrogenation reactions, often in the presence of a transition metal catalyst.
Analogous amine-boranes
Many analogues have been prepared from primary, secondary, and even tertiary amines:
Borane tert-butylamine (tBuNH2→BH3)
Borane trimethylamine (Me3N→BH3)
Borane isopropylamine (iPrNH2→BH3)
The first amine adduct of borane was derived from trimethylamine. Borane tert-butylamine complex is prepared by the reaction of sodium borohydride with t-butylammonium chloride. Generally, adducts are more robust with more basic amines. Variations are also possible for the boron component, although primary and secondary boranes are less common.
See also
Tert-butylamine borane (tBuNH2→BH3)
Phosphine-borane ()
Borane dimethylsulfide ()
Borane–tetrahydrofuran ()
References
Inorganic compounds
Boranes
Boron–nitrogen compounds
Inorganic amines | Ammonia borane | Chemistry | 705 |
25,289,302 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20September%203%2C%202062 | A partial solar eclipse will occur at the Moon's descending node of orbit on Sunday, September 3, 2062, with a magnitude of 0.9749. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.
The partial solar eclipse will be visible for parts of Greenland, Northern Europe, and Asia.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2062
A partial solar eclipse on March 11.
A total lunar eclipse on March 25.
A partial solar eclipse on September 3.
A total lunar eclipse on September 18.
Metonic
Preceded by: Solar eclipse of November 16, 2058
Followed by: Solar eclipse of June 22, 2066
Tzolkinex
Preceded by: Solar eclipse of July 24, 2055
Followed by: Solar eclipse of October 15, 2069
Half-Saros
Preceded by: Lunar eclipse of August 29, 2053
Followed by: Lunar eclipse of September 9, 2071
Tritos
Preceded by: Solar eclipse of October 4, 2051
Followed by: Solar eclipse of August 3, 2073
Solar Saros 126
Preceded by: Solar eclipse of August 23, 2044
Followed by: Solar eclipse of September 13, 2080
Inex
Preceded by: Solar eclipse of September 23, 2033
Followed by: Solar eclipse of August 15, 2091
Triad
Preceded by: Solar eclipse of November 3, 1975
Followed by: Solar eclipse of July 5, 2149
Solar eclipses of 2062–2065
Saros 126
Metonic series
Tritos series
Inex series
References
External links
http://eclipse.gsfc.nasa.gov/SEplot/SEplot2051/SE2062Sep03P.GIF
2062 in science
2062 9 3
65,446,387 | https://en.wikipedia.org/wiki/Rapid%20Deployment%20Vaccine%20Collaborative | The Rapid Deployment Vaccine Collaborative (RaDVaC) is a non-profit, collaborative, open-source vaccine research organization founded in March 2020 by Preston Estep and colleagues from various fields of expertise, motivated to respond to the COVID-19 pandemic through rapid, adaptable, transparent, and accessible vaccine development. The members of RaDVaC contend that even the accelerated vaccine approvals, such as the FDA's Emergency Use Authorization, does not make vaccines available quickly enough. The core group has published a series of white papers online, detailing both the technical principles of and protocols for their research vaccine formulas, as well as dedicated materials and protocols pages. All of the organization's published work has been released under Creative Commons non-commercial licenses, including those contributing to the Open COVID Pledge. Multiple individuals involved with the project have engaged in self-experimentation to assess vaccine safety and efficacy. As of January 2022, the organization has developed and published twelve iterations of experimental intranasal, multivalent, multi-epitope peptide vaccine formulas, and according to the RaDVaC website, by early 2021 hundreds of individuals had self-administered one or more doses of the vaccines described by the group.
History
In March 2020, Preston Estep sent an email to several associates in an effort to determine whether any open-source vaccine projects were underway. Finding none, he and several colleagues formed RaDVaC in the following days, and began constructing the first generation of the RaDVaC research vaccine formula.
Self-experimentation
Several of RaDVaC's core members and numerous others have engaged in self-experimentation to assess both the safety and efficacy of the vaccine formulations. Dr. Estep self-administered the first dose on March 30, 2020. As of early 2021, the group claims that hundreds of individuals had self-administered one or more doses of one or more generations of the RaDVaC experimental vaccine.
Open-source and iterative vaccine research and development
RaDVaC considers responsive iteration a key asset in developing vaccines against an emerging disease such as COVID-19. In contrast to commercial vaccine R&D infrastructure, RaDVaC's core group adapted their vaccine designs in response to emerging research on the pathology and immunology of SARS-CoV-2 and COVID-19.
SARS-CoV-2 Peptide Vaccines
Early generations (gen. 1-6)
Included primarily B cell epitopes, both emergent from computational predictions as well as early research in SARS-CoV-2 antibody mapping.
Generation 7
First inclusion of empirical T cell response data.
Generation 8
Better characterization of T cell response.
Generation 9
Latest and most robust characterization of T cell response, especially CD8 (cytotoxic T cell).
Generation 10
Improved solubility at physiological pH by the use of derivatized chitosan (for example: trimethyl chitosan [TMC] or hydroxypropyltrimethylammonium chloride chitosan [HACC]), instead of unmodified chitosan.
Increased T helper activation combined with reduced MHC Class II restriction to more robustly activate cytotoxic T lymphocytes and B cells for antibody production.
Surface display of antigens for improved antibody response.
A smaller set of core peptides (5 peptides) combined with a list of optional peptides, providing greater functionality and improved representation of common MHC Class I alleles.
An optional epitope sequence that includes an increasingly common variant (N501Y) in the Spike Receptor Binding Motif (RBM). The RaDVaC primary protective strategy remains focused on the more highly conserved epitopes involved in membrane fusion, but groups are testing the potential of this epitope sequence to boost the systemic antibody response.
An optional dendritic cell targeting peptide for delivering T cell epitopes to dendritic cells, an important cell type in the presentation of T cell antigens.
Generation 11
The only difference between Generation 10 and Generation 11 vaccine designs is the addition to Gen. 11 of the peptide MVC2-s, which represents the Receptor Binding Domain (RBD)/Receptor Binding Motif (RBM), and has 2 mutations that are present in variants of concern and interest: the L452R mutation found in Delta, Iota, and Kappa, and the N501Y mutation found in Alpha, Beta, Gamma and Mu.
Generation 12
The Generation 12 vaccine design is very similar to Generation 11, but with one major change and some minor ones. The major change is the addition of the Omicron-specific SARS-CoV-2 Receptor Binding Motif peptide ("RBMO-sc") to the set of core peptides, and the subtraction of "MVC1-s" from the set of optional peptides. Certain T cell epitope peptides were also changed. "Orf1ab 5528T" replaced "Orf1 1636T" in the list of core peptides, because the former is bound by all of the Class I receptors that bind "Orf1 1636T" but it also binds several others. RaDVaC also eliminated "Nuc 264T-key" from the list of optional peptides because the homologous sequence in SARS-CoV-1 reportedly suppresses cytokine signaling.
Open-source clinical trial design
In April 2022, RaDVaC published a proposal for a novel vaccine clinical trial design, called a "step-up challenge trial". The proposed model is intended to validate immuno-efficacy of broad-spectrum vaccines, including pan-coronavirus vaccines, by subjecting ("challenging") study participants to multiple related pathogens with different degrees of pathogenicity.
Funding and awards
In December 2021 ACX Grants announced that RaDVaC had been awarded US$100,000 "to make open-source modular affordable vaccines." In May 2022 RaDVaC tweeted it had been awarded US$2.5 million from Balvi, a moonshot anti-covid effort established by Vitalik Buterin.
References
COVID-19 vaccine producers
Vaccine producers
Vaccines | Rapid Deployment Vaccine Collaborative | Biology | 1,267 |
2,072,472 | https://en.wikipedia.org/wiki/Principal%20curvature | In differential geometry, the two principal curvatures at a given point of a surface are the maximum and minimum values of the curvature as expressed by the eigenvalues of the shape operator at that point. They measure how the surface bends by different amounts in different directions at that point.
Discussion
At each point p of a differentiable surface in 3-dimensional Euclidean space one may choose a unit normal vector. A normal plane at p is one that contains the normal vector, and will therefore also contain a unique direction tangent to the surface and cut the surface in a plane curve, called normal section. This curve will in general have different curvatures for different normal planes at p. The principal curvatures at p, denoted k1 and k2, are the maximum and minimum values of this curvature.
Here the curvature of a curve is by definition the reciprocal of the radius of the osculating circle. The curvature is taken to be positive if the curve turns in the same direction as the surface's chosen normal, and otherwise negative. The directions in the normal plane where the curvature takes its maximum and minimum values are always perpendicular, if k1 does not equal k2, a result of Euler (1760), and are called principal directions. From a modern perspective, this theorem follows from the spectral theorem because these directions are the principal axes of a symmetric tensor—the second fundamental form. A systematic analysis of the principal curvatures and principal directions was undertaken by Gaston Darboux, using Darboux frames.
The product k1k2 of the two principal curvatures is the Gaussian curvature, K, and the average (k1 + k2)/2 is the mean curvature, H.
If at least one of the principal curvatures is zero at every point, then the Gaussian curvature will be 0 and the surface is a developable surface. For a minimal surface, the mean curvature is zero at every point.
Formal definition
Let M be a surface in Euclidean space with second fundamental form II(X, Y). Fix a point p ∈ M, and an orthonormal basis X1, X2 of tangent vectors at p. Then the principal curvatures are the eigenvalues of the symmetric matrix [II(Xi, Xj)], the 2×2 matrix whose (i, j) entry is II(Xi, Xj).
If X1 and X2 are selected so that the matrix is a diagonal matrix, then they are called the principal directions. If the surface is oriented, then one often requires that the pair (X1, X2) be positively oriented with respect to the given orientation.
Without reference to a particular orthonormal basis, the principal curvatures are the eigenvalues of the shape operator, and the principal directions are its eigenvectors.
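The shape-operator description translates directly into a numerical recipe: build the first and second fundamental forms from derivatives of a parametrization, then take the eigenvalues of I⁻¹II. A minimal NumPy sketch using finite differences; the parametrization, step size and evaluation point are illustrative:

```python
import numpy as np

def principal_curvatures(r, u, v, h=1e-4):
    """Estimate the principal curvatures of a parametric surface r(u, v)
    at (u, v) as the eigenvalues of the shape operator S = I^-1 II."""
    ru  = (r(u + h, v) - r(u - h, v)) / (2 * h)
    rv  = (r(u, v + h) - r(u, v - h)) / (2 * h)
    ruu = (r(u + h, v) - 2 * r(u, v) + r(u - h, v)) / h**2
    rvv = (r(u, v + h) - 2 * r(u, v) + r(u, v - h)) / h**2
    ruv = (r(u + h, v + h) - r(u + h, v - h)
           - r(u - h, v + h) + r(u - h, v - h)) / (4 * h**2)
    n = np.cross(ru, rv)
    n /= np.linalg.norm(n)                       # unit normal (sign convention!)
    I  = np.array([[ru @ ru, ru @ rv], [ru @ rv, rv @ rv]])   # first fundamental form
    II = np.array([[ruu @ n, ruv @ n], [ruv @ n, rvv @ n]])   # second fundamental form
    S = np.linalg.solve(I, II)                   # shape operator in this basis
    return np.sort(np.linalg.eigvals(S).real)

# Unit cylinder: one curvature is 0 (along the axis), the other +-1
cylinder = lambda u, v: np.array([np.cos(u), np.sin(u), v])
print(principal_curvatures(cylinder, 0.3, 0.7))  # approximately [-1, 0]
```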
Generalizations
For hypersurfaces in higher-dimensional Euclidean spaces, the principal curvatures may be defined in a directly analogous fashion. The principal curvatures are the eigenvalues of the matrix of the second fundamental form in an orthonormal basis of the tangent space. The principal directions are the corresponding eigenvectors.
Similarly, if M is a hypersurface in a Riemannian manifold N, then the principal curvatures are the eigenvalues of its second-fundamental form. If k1, ..., kn are the n principal curvatures at a point p ∈ M and X1, ..., Xn are corresponding orthonormal eigenvectors (principal directions), then the sectional curvature of M at p is given by
K(Xi, Xj) = ki kj for all i ≠ j.
Classification of points on a surface
At elliptical points, both principal curvatures have the same sign, and the surface is locally convex.
At umbilic points, both principal curvatures are equal and every tangent vector can be considered a principal direction. These typically occur at isolated points.
At hyperbolic points, the principal curvatures have opposite signs, and the surface will be locally saddle shaped.
At parabolic points, one of the principal curvatures is zero. Parabolic points generally lie in a curve separating elliptical and hyperbolic regions.
At flat umbilic points both principal curvatures are zero. A generic surface will not contain flat umbilic points. The monkey saddle is one surface with an isolated flat umbilic.
Line of curvature
The lines of curvature or curvature lines are curves which are always tangent to a principal direction (they are integral curves for the principal direction fields). There will be two lines of curvature through each non-umbilic point and the lines will cross at right angles.
In the vicinity of an umbilic the lines of curvature typically form one of three configurations star, lemon and monstar (derived from lemon-star). These points are also called Darbouxian Umbilics (D1, D2, D3) in honor of
Gaston Darboux, the first to make a systematic study in Vol. 4, p 455, of his Leçons (1896).
In these figures, the red curves are the lines of curvature for one family of principal directions, and the blue curves for the other.
When a line of curvature has a local extremum of the same principal curvature then the curve has a ridge point. These ridge points form curves on the surface called ridges. The ridge curves pass through the umbilics. For the star pattern either 3 or 1 ridge line pass through the umbilic, for the monstar and lemon only one ridge passes through.
Applications
Principal curvature directions along with the surface normal, define a 3D orientation frame at a surface point. For example, in case of a cylindrical surface, by physically touching or visually observing, we know that along one specific direction the surface is flat (parallel to the axis of the cylinder) and hence take note of the orientation of the surface. The implication of such an orientation frame at each surface point means any rotation of the surfaces over time can be determined simply by considering the change in the corresponding orientation frames. This has resulted in single surface point motion estimation and segmentation algorithms in computer vision.
See also
Earth radius#Principal sections
Euler's theorem (differential geometry)
References
Further reading
External links
Historical Comments on Monge's Ellipsoid and the Configuration of Lines of Curvature on Surfaces Immersed in R3
Curvature (mathematics)
Differential geometry of surfaces
Surfaces | Principal curvature | Physics | 1,271 |
14,044,875 | https://en.wikipedia.org/wiki/Gymnopilus%20sapineus | Gymnopilus sapineus, commonly known as the scaly rustgill or common and boring gymnopilus, is a small and widely distributed mushroom which grows in dense clusters on dead conifer wood. It has a rusty orange spore print and a bitter taste. This species does not stain blue and lacks the hallucinogen psilocybin.
Taxonomy
Speciation in Gymnopilus is not clearly defined. This is further complicated by the macroscopic morphological and ecological similarities between members of the G. sapineus complex, such as G. penetrans and G. nevadensis. Michael Kuo points to the arbitrary distinction that Elias Magnus Fries drew between G. sapineus and G. penetrans: Fries at first labeled G. penetrans merely a form of G. sapineus in 1815, but then recanted and treated them as separate species in 1821.
Description
This mushroom is often mistaken for G. luteocarneus which grows on conifers and has a smoother and darker cap. Another lookalike is G. penetrans which grows in the same habitat and has minor microscopic differences.
Cap: The cap is across, convex to flat, and golden-yellow to brownish orange, darker at the center, with a dry scaly surface which is often fibrillose and may have squamules. The cap margin is inrolled at first and curves outward as it matures, becoming almost plane and sometimes developing fibrillose cracks in age. The flesh is yellow to orange and delicate when compared to larger and firmer members of Gymnopilus, such as G. junonius.
Gills: The gills are crowded, yellow at first, turning rusty orange as the spores mature, with adnate attachment.
Microscopic features: Gymnopilus sapineus spores are rusty orange to rusty brown, elliptical, rough, and 7–10 x 4–6 μm.
Stipe: The stipe is long and 0.5–1 cm thick. It has either an equal structure, or becomes thinner near the base. It is light yellow, bruising rusty brown. The stipe has an evanescent veil which often leaves fragments on the upper part of the stipe or the margin of young caps.
Taste and odor: G. sapineus sometimes tastes bitter, and it has a mild, fungoid or sweet smell.
Toxicity: The species is nonpoisonous, but considered inedible.
Similar species
Similar species include G. aeruginosus, G. luteofolius, G. penetrans, and G. hybridus.
See also
List of Gymnopilus species
References
Further reading
Hesler, L. R. (1969). North American species of Gymnopilus. New York: Hafner. p. 117.
External links
Fungi of California - Gymnopilus sapineus
Mushroom Observer - Gymnopilus sapineus
sapineus
Taxa named by Elias Magnus Fries
Fungi described in 1815
Fungi of North America
Fungus species | Gymnopilus sapineus | Biology | 642 |
237,737 | https://en.wikipedia.org/wiki/Leading-edge%20extension | A leading-edge extension (LEX) is a small extension to an aircraft wing surface, forward of the leading edge. The primary reason for adding an extension is to improve the airflow at high angles of attack and low airspeeds, to improve handling and delay the stall. A dog tooth can also improve airflow and reduce drag at higher speeds.
Leading-edge slat
A leading-edge slat is an aerodynamic surface running spanwise just ahead of the wing leading edge. It creates a leading edge slot between the slat and wing which directs air over the wing surface, helping to maintain smooth airflow at low speeds and high angles of attack. This delays the stall, allowing the aircraft to fly at a higher angle of attack. Slats may be made fixed, or retractable in normal flight to minimize drag.
Dogtooth extension
A dogtooth is a small, sharp zig-zag break in the leading edge of a wing. It is usually used on a swept wing, to generate a vortex flow field to prevent separated flow from progressing outboard at high angle of attack. The effect is the same as a wing fence. It can also be used on straight wings in a drooped leading edge arrangement.
Many high-performance aircraft use the dogtooth design, which induces a vortex over the wing to control boundary layer spanwise extension, increasing lift and improving resistance to stall. Some of the best-known uses of the dogtooth are in the stabilizer of the F-15 Eagle and the wings of the F-4 Phantom II, F/A-18 Super Hornet, CF-105 Arrow, F-8 Crusader, and the Ilyushin Il-62. Where the dogtooth is added as an afterthought, as for example on the Hawker Hunter and some variants of the Quest Kodiak, the dogtooth is created by adding an extension to the outer section of the leading edge.
Leading-edge cuff
A leading edge cuff (or wing cuff) is a fixed aerodynamic device employed on fixed-wing aircraft to introduce a sharp discontinuity in the leading edge of the wing in the same way as a dogtooth. It also typically has a slightly drooped leading edge to improve low-speed characteristics.
Leading-edge root extension
A leading-edge root extension (LERX) is a small fillet, typically roughly triangular in shape, running forward from the leading edge of the wing root to a point along the fuselage. These are often called simply leading-edge extensions (LEX), although they are not the only kind. To avoid ambiguity, this article uses the term LERX.
On a modern fighter aircraft, LERXes induce controlled airflow over the wing at high angles of attack, so delaying the stall and consequent loss of lift. In cruising flight, the effect of the LERX is minimal. However, at high angles of attack, as often encountered in a dogfight or during takeoff and landing, the LERX generates a high-speed vortex that attaches to the top of the wing. The vortex action maintains the attachment of the airflow to the upper-wing surface well past the normal stall point at which the airflow separates from the wing surface, thus sustaining lift at very high angles.
LERX were first used on the Northrop F-5 "Freedom Fighter" which flew in 1959, and have since become commonplace on many combat aircraft. The F/A-18 Hornet has especially large examples, as does the Sukhoi Su-27 and the CAC/PAC JF-17 Thunder. The Su-27 LERX help make some advanced maneuvers possible, such as the Pugachev's Cobra, the Cobra Turn and the Kulbit.
A long, narrow sideways extension to the fuselage, attached in this position, is an example of a chine.
Leading-edge vortex controller
Leading-edge vortex controller (LEVCON) systems are a continuation of leading-edge root extension (LERX) technology, but with actuation that allows the leading edge vortices to be modified without adjusting the aircraft's attitude. Otherwise they operate on the same principles as the LERX system to create lift augmenting leading edge vortices during high angle of attack flight.
This system has been incorporated in the Russian Sukhoi Su-57 and Indian HAL LCA Navy.
The LEVCONs actuation ability also improves its performance over the LERX system in other areas.
When combined with the thrust vectoring controller (TVC), the aircraft controllability at extreme angles of attack is further increased, which assists in stunts which require supermaneuverability such as Pugachev's Cobra. Additionally, on the Sukhoi Su-57 the LEVCON system is used for increased departure-resistance in the event of TVC failure at a post-stall attitude. It can also be used for trimming the aircraft, and optimizing the lift to drag ratio during cruise.
See also
Index of aviation articles
Canard (aeronautics)
Strake (aviation)
Vortex generator
References
Aerospace engineering
Aircraft aerodynamics
Aircraft wing components
Aircraft wing design | Leading-edge extension | Engineering | 1,049 |
250,880 | https://en.wikipedia.org/wiki/Red%20Spider%20Nebula | The Red Spider Nebula (also catalogued as NGC 6537) is a planetary nebula located near the heart of the Milky Way, in the northwest of the constellation Sagittarius. The nebula has a prominent two-lobed shape, possibly due to a binary companion or magnetic fields and has an S-shaped symmetry of the lobes – the lobes opposite each other appear similar. This is believed to be due to the presence of a companion to the central white dwarf. However, the gas walls of the two lobed structures are not at all smooth, but rather are rippled in a complex way.
The central white dwarf, the remaining compact core of the original star, produces a powerful and hot (≈10,000 K) wind blowing with a speed of 300 kilometers per second, which has generated waves 100 billion kilometres high. The waves are generated by supersonic shocks formed when the local gas is compressed and heated in front of the rapidly expanding lobes. Atoms caught in the shocks radiate a visible light. These winds are what give this nebula its unique 'spider' shape and also contribute to the expansion of the nebula.
The star at the center of the Red Spider Nebula is surrounded by a dust shell making its exact properties hard to determine. Its surface temperature is probably 150,000–250,000 K, although a temperature of 340,000 K or even 500,000 K is not ruled out, making it among the hottest white dwarf stars known.
The Red Spider Nebula lies near the constellation of Sagittarius. Its distance has been variously estimated as 1,900 light-years or, more likely, 3,000–8,000 light-years.
References
External links
Red Spider Nebula at ESA/Hubble
White dwarf sends ripples through Red Spider Nebula at Space.com
NGC objects
Sagittarius (constellation)
Planetary nebulae | Red Spider Nebula | Astronomy | 374 |
10,031,887 | https://en.wikipedia.org/wiki/Cyborg%20Commando | Cyborg Commando is a post-apocalyptic role-playing game (RPG) published by New Infinities Productions, Inc. (NIPI) in 1987. The designers were well-known in the role-playing game market — Gary Gygax, Frank Mentzer and Kim Mohan, but despite this name recognition, the game was a commercial failure.
Description
The game is set in 2035, when Earth is invaded by aliens called Xenoborgs. For its defense, humanity has developed a new kind of soldier: the Cyborg Commando, a mechanical/electronic human-like structure into which a willing human's brain can be implanted.
10X system
Cyborg Commando introduced the "10X" dice-rolling mechanism: To determine the success or failure of a skill challenge or combat, the player rolls two 10-sided dice and multiplies the results. (Example: The player rolls a 2 and a 10 and multiplies them together for a result of 20.) Unlike many role-playing games of the time where higher results were more favorable, in this game, lower results are needed for success.
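The mechanic is straightforward to simulate. The following Python sketch is illustrative only: the function names and the example target number are hypothetical, and it assumes a roll succeeds when the product is at or below a target number, in line with the lower-is-better rule described above.

```python
import random

def roll_10x(rng=random):
    """Roll two 10-sided dice and multiply the results (the '10X' mechanic)."""
    return rng.randint(1, 10) * rng.randint(1, 10)

def skill_check(target, rng=random):
    """Succeed when the product is at or below the target (lower is better).

    The at-or-below rule and the target value are assumptions for
    illustration, not quotations from the published rules.
    """
    return roll_10x(rng) <= target

# Estimate the success chance against a hypothetical target of 20.
trials = 100_000
hits = sum(skill_check(20) for _ in range(trials))
print(f"P(product <= 20) ~ {hits / trials:.3f}")
```

Note that the product of two dice is far from uniform: low products cluster together, so success probabilities change non-linearly as the target number grows.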
Publication history
Gary Gygax was the co-creator of the fantasy role-playing game Dungeons & Dragons, first published by TSR in 1974. But by 1986, Gygax had lost control of TSR and left the company to form NIPI, taking former TSR employees Frank Mentzer and Kim Mohan with him. Cyborg Commando was their first game, designed by Mohan and Mentzer based on an outline by Gygax. Although the names of the creators, well known to game players, featured heavily in the game's promotion, the grim dystopian setting and robots with human brains failed to click with Gygax's fantasy gamers. As game critic Rick Swan noted, "Its difficult rules and narrow scope severely limited its appeal." Ultimately, the game was a commercial failure.
Although two more boxed sets were planned and promoted, in the end, only three adventures for the game system were ever published:
San Francisco Knights 1987
Film at Eleven 1987
Operation Bifrost 1988
Novels
Three Cyborg Commando novels were published not long after the game. They made minor modifications to the Cyborg Commandos' skills and behaviour, which prompted a short explanation for game owners at the back of the books detailing why the changes were made.
Planet in Peril by Kim Mohan and Pamela O'Neill. Published in November 1987 by Ace/New Infinities, Inc. .
Chase into Space by Kim Mohan and Pamela O'Neill. Published in January 1988 by New Infinities, Inc. .
The Ultimate Prize by Kim Mohan and Pamela O'Neill. Published in March 1988 by New Infinities, Inc. .
Reception
Stewart Wieck reviewed Cyborg Commando in White Wolf #9 (1988), rating it an 8 out of 10 and stated that "This is not a game for the beginner. The material can be daunting even in the eyes of a seasoned gamer. Surprisingly, all this glitter is gold."
In his 1990 book The Complete Guide to Role-Playing Games, game critic Rick Swan thought the game was "a first-rate design — innovative, compelling and startling in its uncompromising approach to science-fiction gaming." However, Swan warned that "Though the rules are straightforward and clearly explained, they're also quite complicated ... The no-nonsense approach and sheer volume of detail can overwhelm even the most experienced player." Swan concluded by giving the game an excellent rating of 3.5 out of 4, saying, "For those up to the challenge, it's definitely worth checking out — Cyborg Commando is hardcore role-playing at its best."
Lynn Bryant reviewed the novels Planet in Peril and Chase into Space in Space Gamer/Fantasy Gamer No. 83, and commented that "They prove to be tightly written, well plotted, full of action, and just plain good reading. You get insight into what it is like to lose most of your body to mechanical substitutes and then have to fight a war. There is also a good sense of the outrage all the race will feel at being invaded. Even if you never look at the game, you'll enjoy the books."
Writing 27 years after Cyborg Commando's publication, game historian Shannon Appelcline noted the game's complex rules and the lack of connection to Gygax's D&D fans, and wrote, "As a result of these various factors, Cyborg Commando is today seen as one of the biggest flops in the industry."
References
Fiction about cyborgs
Gary Gygax games
Military role-playing games
Role-playing games introduced in 1987
Science fiction role-playing games | Cyborg Commando | Biology | 972 |
40,094,759 | https://en.wikipedia.org/wiki/NGC%2046 | NGC 46, occasionally referred to as PGC 5067596, is a star located approximately 1,420 light-years from the Solar System in the constellation Pisces. It was first discovered on October 22, 1852, by Irish astronomer Edward Joshua Cooper, who incorrectly identified it as a nebula.
See also
List of NGC objects (1–1000)
Pisces (constellation)
References
External links
SEDS
0046
18521022
Pisces (constellation)
Discoveries by Edward Joshua Cooper | NGC 46 | Astronomy | 102 |
2,372,007 | https://en.wikipedia.org/wiki/Takeuti%27s%20conjecture | In mathematics, Takeuti's conjecture is the conjecture of Gaisi Takeuti that a sequent formalisation of second-order logic has cut-elimination (Takeuti 1953). It was settled positively:
By Tait, using a semantic technique for proving cut-elimination, based on work by Schütte (Tait 1966);
Independently by Prawitz (Prawitz 1968) and Takahashi (Takahashi 1967) by a similar technique, although Prawitz's and Takahashi's proofs are not limited to second-order logic but concern higher-order logics in general;
It is a corollary of Jean-Yves Girard's syntactic proof of strong normalization for System F.
Takeuti's conjecture is equivalent to the 1-consistency of second-order arithmetic in the sense that each statement can be derived from the other in the weak system of primitive recursive arithmetic (PRA). It is also equivalent to the strong normalization of the Girard–Reynolds System F.
See also
Hilbert's second problem
References
Dag Prawitz, 1968. Hauptsatz for higher order logic. Journal of Symbolic Logic, 33:452–457, 1968.
William W. Tait, 1966. A nonconstructive proof of Gentzen's Hauptsatz for second order predicate logic. In Bulletin of the American Mathematical Society, 72:980–983.
Gaisi Takeuti, 1953. On a generalized logic calculus. In Japanese Journal of Mathematics, 23:39–96. An errata to this article was published in the same journal, 24:149–156, 1954.
Moto-o Takahashi, 1967. A proof of cut-elimination in simple type theory. In Japanese Mathematical Society, 10:44–45.
Proof theory
Conjectures that have been proved | Takeuti's conjecture | Mathematics | 393 |
31,118,503 | https://en.wikipedia.org/wiki/Beta%20negative%20binomial%20distribution | In probability theory, a beta negative binomial distribution is the probability distribution of a discrete random variable $X$ equal to the number of failures needed to get $r$ successes in a sequence of independent Bernoulli trials. The probability $p$ of success on each trial stays constant within any given experiment but varies across different experiments following a beta distribution. Thus the distribution is a compound probability distribution.
This distribution has also been called both the inverse Markov-Pólya distribution and the generalized Waring distribution or simply abbreviated as the BNB distribution. A shifted form of the distribution has been called the beta-Pascal distribution.
If the parameters of the beta distribution are $\alpha$ and $\beta$, and if

$$X \mid p \sim \mathrm{NB}(r, p),$$

where

$$p \sim \mathrm{B}(\alpha, \beta),$$

then the marginal distribution of $X$ (i.e. the posterior predictive distribution) is a beta negative binomial distribution:

$$X \sim \mathrm{BNB}(r, \alpha, \beta).$$

In the above, $\mathrm{NB}(r, p)$ is the negative binomial distribution and $\mathrm{B}(\alpha, \beta)$ is the beta distribution.
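As an illustration of this compound definition, the following Python sketch (using NumPy; the parameter values are arbitrary examples) draws beta negative binomial variates by first sampling $p$ from the beta distribution and then sampling a negative binomial failure count:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_bnb(r, alpha, beta, size):
    """Draw from BNB(r, alpha, beta) via the compound definition above:
    p ~ Beta(alpha, beta) per experiment, then X | p ~ NB(r, p),
    counting failures before the r-th success."""
    p = rng.beta(alpha, beta, size=size)
    return rng.negative_binomial(r, p)

draws = sample_bnb(r=5, alpha=20.0, beta=3.0, size=200_000)
# For alpha > 1 the mean exists and equals r * beta / (alpha - 1).
print(draws.mean(), 5 * 3.0 / (20.0 - 1.0))
```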
Definition and derivation
Denoting by $f_{X|p}(k \mid r, p)$ and $f_p(q \mid \alpha, \beta)$ the densities of the negative binomial and beta distributions respectively, we obtain the PMF $f(k \mid \alpha, \beta, r)$ of the BNB distribution by marginalization:

$$f(k \mid \alpha, \beta, r) = \int_0^1 f_{X|p}(k \mid r, q)\, f_p(q \mid \alpha, \beta)\, dq = \frac{\Gamma(r+k)}{k!\,\Gamma(r)\,\mathrm{B}(\alpha, \beta)} \int_0^1 q^{\alpha+r-1}(1-q)^{\beta+k-1}\, dq$$

Noting that the integral evaluates to:

$$\int_0^1 q^{\alpha+r-1}(1-q)^{\beta+k-1}\, dq = \frac{\Gamma(\alpha+r)\,\Gamma(\beta+k)}{\Gamma(\alpha+r+\beta+k)}$$

we can arrive at the following formulas by relatively simple manipulations.

If $r$ is an integer, then the PMF can be written in terms of the beta function:

$$f(k \mid \alpha, \beta, r) = \binom{r+k-1}{k} \frac{\mathrm{B}(\alpha+r, \beta+k)}{\mathrm{B}(\alpha, \beta)}.$$

More generally, the PMF can be written

$$f(k \mid \alpha, \beta, r) = \frac{\Gamma(r+k)}{k!\,\Gamma(r)} \frac{\mathrm{B}(\alpha+r, \beta+k)}{\mathrm{B}(\alpha, \beta)}$$

or

$$f(k \mid \alpha, \beta, r) = \frac{\mathrm{B}(r+k, \alpha+\beta)}{\mathrm{B}(r, \alpha)} \frac{\Gamma(k+\beta)}{k!\,\Gamma(\beta)}.$$
PMF expressed with Gamma
Using the properties of the beta function, the PMF with integer $r$ can be rewritten as:

$$f(k \mid \alpha, \beta, r) = \binom{r+k-1}{k} \frac{\Gamma(\alpha+r)\,\Gamma(\beta+k)\,\Gamma(\alpha+\beta)}{\Gamma(\alpha+r+\beta+k)\,\Gamma(\alpha)\,\Gamma(\beta)}.$$

More generally, the PMF can be written as

$$f(k \mid \alpha, \beta, r) = \frac{\Gamma(r+k)}{k!\,\Gamma(r)} \frac{\Gamma(\alpha+r)\,\Gamma(\beta+k)\,\Gamma(\alpha+\beta)}{\Gamma(\alpha+r+\beta+k)\,\Gamma(\alpha)\,\Gamma(\beta)}.$$
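For numerical work, the general Gamma form above is conveniently evaluated in log-space to avoid overflow in the Gamma functions. A minimal sketch, assuming SciPy's `gammaln` and arbitrary example parameters:

```python
import numpy as np
from scipy.special import gammaln

def bnb_logpmf(k, r, alpha, beta):
    """Log of the general Gamma-form PMF above, computed with log-gamma
    to avoid overflow."""
    return (gammaln(r + k) - gammaln(k + 1) - gammaln(r)
            + gammaln(alpha + r) + gammaln(beta + k) + gammaln(alpha + beta)
            - gammaln(alpha + r + beta + k) - gammaln(alpha) - gammaln(beta))

k = np.arange(200_000)
total = np.exp(bnb_logpmf(k, r=2.5, alpha=3.0, beta=1.5)).sum()
print(total)  # approaches 1 slowly as the truncation grows, reflecting the heavy tail
```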
PMF expressed with the rising Pochhammer symbol
The PMF is often also presented in terms of the rising Pochhammer symbol, $x^{(n)} = x(x+1)\cdots(x+n-1)$, for integer $r$:

$$f(k \mid \alpha, \beta, r) = \frac{r^{(k)}\,\alpha^{(r)}\,\beta^{(k)}}{k!\,(\alpha+\beta)^{(r+k)}}$$
Properties
Factorial Moments
The $m$-th factorial moment of a beta negative binomial random variable $X$ is defined for $m < \alpha$, and in this case is equal to

$$\operatorname{E}\left[(X)_m\right] = \frac{\Gamma(r+m)}{\Gamma(r)} \frac{\Gamma(\beta+m)}{\Gamma(\beta)} \frac{\Gamma(\alpha-m)}{\Gamma(\alpha)}$$

where $(X)_m = X(X-1)\cdots(X-m+1)$ denotes the falling factorial.
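As a quick numerical check of this formula, the sketch below reuses the compound sampling construction from the definition section (NumPy; the parameter values are arbitrary):

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

def factorial_moment_theory(m, r, alpha, beta):
    """E[X(X-1)...(X-m+1)] per the formula above, valid for m < alpha."""
    return (gamma(r + m) / gamma(r)) * (gamma(beta + m) / gamma(beta)) \
         * (gamma(alpha - m) / gamma(alpha))

r, alpha, beta = 4, 12.0, 2.0
p = rng.beta(alpha, beta, size=1_000_000)
x = rng.negative_binomial(r, p).astype(float)
empirical = np.mean(x * (x - 1))  # 2nd falling factorial moment
print(empirical, factorial_moment_theory(2, r, alpha, beta))
```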
Non-identifiable
The beta negative binomial is non-identifiable, which can be seen easily by simply swapping $r$ and $\beta$ in the above density or characteristic function and noting that it is unchanged. Thus estimation demands that a constraint be placed on $r$, $\beta$, or both.
Relation to other distributions
The beta negative binomial distribution contains the beta geometric distribution as a special case when either $r = 1$ or $\beta = 1$. It can therefore approximate the geometric distribution arbitrarily well. It also approximates the negative binomial distribution arbitrarily well for large $\alpha$. It can therefore approximate the Poisson distribution arbitrarily well for large $\alpha$, $\beta$, and $r$.
Heavy tailed
By Stirling's approximation to the beta function, it can easily be shown that for large $k$

$$f(k \mid \alpha, \beta, r) \sim \frac{\Gamma(\alpha+r)}{\Gamma(r)\,\mathrm{B}(\alpha, \beta)} \frac{k^{r-1}}{(\beta+k)^{r+\alpha}}$$

which implies that the beta negative binomial distribution is heavy tailed and that moments of order $\alpha$ or higher do not exist.
Beta geometric distribution
The beta geometric distribution is an important special case of the beta negative binomial distribution occurring for $r = 1$. In this case the pmf simplifies to

$$f(k \mid \alpha, \beta) = \frac{\mathrm{B}(\alpha+1, \beta+k)}{\mathrm{B}(\alpha, \beta)}.$$
This distribution is used in some Buy Till you Die (BTYD) models.
Further, when $\beta = 1$ the beta geometric reduces to the Yule–Simon distribution. However, it is more common to define the Yule–Simon distribution in terms of a shifted version of the beta geometric. In particular, if $X \sim \mathrm{BG}(\alpha, 1)$ then $X + 1 \sim \mathrm{YS}(\alpha)$.
Beta negative binomial as a Pólya urn model
In the case when the three parameters $r$, $\alpha$ and $\beta$ are positive integers, the beta negative binomial can also be motivated by an urn model - or more specifically a basic Pólya urn model. Consider an urn initially containing $\alpha$ red balls (the stopping color) and $\beta$ blue balls. At each step of the model, a ball is drawn at random from the urn and replaced, along with one additional ball of the same color. The process is repeated over and over, until $r$ red balls have been drawn. The number $X$ of blue balls observed among those draws is distributed according to a $\mathrm{BNB}(r, \alpha, \beta)$. Note that at the end of the experiment, the urn always contains the fixed number $\alpha + r$ of red balls while containing the random number $X + \beta$ of blue balls.
By the non-identifiability property, $X$ can be equivalently generated with the urn initially containing $\alpha$ red balls (the stopping color) and $r$ blue balls, stopping when $\beta$ red balls are observed.
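A direct simulation of this urn scheme can serve as a sanity check on the distribution. The sketch below (plain Python; the parameter values are arbitrary) follows the description above:

```python
import random

def polya_urn_bnb(r, alpha, beta, rng=random):
    """Simulate the urn described above: start with alpha red balls
    (the stopping color) and beta blue balls; each drawn ball is replaced
    together with one extra ball of the same color; stop at the r-th red
    draw.  The returned count of blue draws follows BNB(r, alpha, beta)."""
    red, blue = alpha, beta
    reds_drawn = blues_drawn = 0
    while reds_drawn < r:
        if rng.random() < red / (red + blue):
            reds_drawn += 1
            red += 1    # replace and add one red
        else:
            blues_drawn += 1
            blue += 1   # replace and add one blue
    return blues_drawn

samples = [polya_urn_bnb(5, 20, 3) for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~ r * beta / (alpha - 1) = 15/19
```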
See also
Negative binomial distribution
Dirichlet negative multinomial distribution
Notes
References
Johnson, N.L.; Kotz, S.; Kemp, A.W. (1993) Univariate Discrete Distributions, 2nd edition, Wiley (Section 6.2.3)
Kemp, C.D.; Kemp, A.W. (1956) "Generalized hypergeometric distributions", Journal of the Royal Statistical Society, Series B, 18, 202–211
Wang, Zhaoliang (2011) "One mixed negative binomial distribution with application", Journal of Statistical Planning and Inference, 141 (3), 1153–1160
External links
Interactive graphic: Univariate Distribution Relationships
Discrete distributions
Compound probability distributions
Factorial and binomial topics | Beta negative binomial distribution | Mathematics | 965 |
73,275,266 | https://en.wikipedia.org/wiki/NGC%205486 | NGC 5486 is an irregular galaxy in the constellation Ursa Major 110 million light-years from Earth.
The galaxy is considered a member of the NGC 5485 group (LGG 373), and is near the much larger Pinwheel Galaxy.
It was discovered on 2 May 1785 by William Herschel with an 18.7-inch reflecting telescope, who described it as "F, cL" (faint, considerably large) in his catalogues of nebulae.
External links
References
Magellanic spiral galaxies
Astronomical objects discovered in 1785
Discoveries by William Herschel
Ursa Major
5486
09036
50383 | NGC 5486 | Astronomy | 125 |
40,659,364 | https://en.wikipedia.org/wiki/Arctic%20sea%20ice%20decline | Sea ice in the Arctic region has declined in recent decades in area and volume due to climate change. It has been melting more in summer than it refreezes in winter. Global warming, caused by greenhouse gas forcing, is responsible for the decline in Arctic sea ice. The decline of sea ice in the Arctic has been accelerating during the early twenty-first century, with a decline rate of 4.7% per decade (it has declined over 50% since the first satellite records). Summertime sea ice will likely cease to exist sometime during the 21st century.
The region is at its warmest in at least 4,000 years. Furthermore, the Arctic-wide melt season has lengthened at a rate of five days per decade (from 1979 to 2013), dominated by a later autumn freeze-up. The IPCC Sixth Assessment Report (2021) stated that Arctic sea ice area will likely drop below 1 million km2 in at least some Septembers before 2050. In September 2020, the US National Snow and Ice Data Center reported that the Arctic sea ice in 2020 had melted to an extent of 3.74 million km2, its second-smallest extent since records began in 1979. Earth lost 28 trillion tonnes of ice between 1994 and 2017, with Arctic sea ice accounting for 7.6 trillion tonnes of this loss. The rate of ice loss has risen by 57% since the 1990s.
Sea ice loss is one of the main drivers of Arctic amplification, the phenomenon that the Arctic warms faster than the rest of the world under climate change. It is plausible that sea ice decline also makes the jet stream weaker, which would cause more persistent and extreme weather in mid-latitudes. Shipping is more often possible in the Arctic now, and will likely increase further. Both the disappearance of sea ice and the resulting possibility of more human activity in the Arctic Ocean pose a risk to local wildlife such as polar bears.
One important aspect in understanding sea ice decline is the Arctic dipole anomaly. This phenomenon appears to have slowed down the overall loss of sea ice between 2007 and 2021, but such a trend will probably not continue.
Definitions
The Arctic Ocean is the mass of water positioned approximately above latitude 65° N. Arctic Sea Ice refers to the area of the Arctic Ocean covered by ice. The Arctic sea ice minimum is the day in a given year when Arctic sea ice reaches its smallest extent, occurring at the end of the summer melting season, normally during September. Arctic Sea ice maximum is the day of a year when Arctic sea ice reaches its largest extent near the end of the Arctic cold season, normally during March. Typical data visualizations for Arctic sea ice include average monthly measurements or graphs for the annual minimum or maximum extent, as shown in the adjacent images.
Sea ice extent is defined as the area with at least 15% of sea ice cover; it is more often used as a metric than simple total sea ice area. This metric is used to address uncertainty in distinguishing open sea water from melted water on top of solid ice, which satellite detection methods have difficulty differentiating. This is primarily an issue in summer months.
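As a toy illustration of how extent differs from total area, the following Python sketch computes both from a small grid of hypothetical ice concentrations (the grid values and cell sizes are made up for illustration, not real satellite data):

```python
import numpy as np

def sea_ice_metrics(concentration, cell_area_km2, threshold=0.15):
    """Return (extent, area): extent sums the full area of every grid cell
    with at least 15% ice cover; area weights each cell by its fractional
    ice concentration."""
    covered = concentration >= threshold
    extent = cell_area_km2[covered].sum()
    area = (concentration * cell_area_km2).sum()
    return extent, area

# Hypothetical 2x3 grid of fractional ice concentrations.
conc = np.array([[0.00, 0.10, 0.40],
                 [0.20, 0.95, 1.00]])
cell = np.full(conc.shape, 625.0)  # e.g. 25 km x 25 km cells
extent, area = sea_ice_metrics(conc, cell)
print(extent, area)  # extent >= area: partially covered cells count fully
```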
Observations
A 2007 study found the decline to be "faster than forecasted" by model simulations.
A 2011 study suggested that it could be reconciled by internal variability enhancing the greenhouse gas-forced sea ice decline over the last few decades. A 2012 study, with a newer set of simulations, also projected rates of retreat that were somewhat less than that actually observed.
Satellite era
Observation with satellites shows that Arctic sea ice area, extent, and volume have been in decline for a few decades. The amount of multi-year sea ice in the Arctic has declined considerably in recent decades. In 1988, ice that was at least 4 years old accounted for 26% of the Arctic's sea ice. By 2013, ice that age was only 7% of all Arctic sea ice.
Scientists recently measured sixteen-foot (five-meter) wave heights during a storm in the Beaufort Sea between mid-August and late October 2012. This is a new phenomenon for the region, since a permanent sea ice cover normally prevents wave formation. Wave action breaks up sea ice, and thus could become a feedback mechanism, driving sea ice decline.
For January 2016, the satellite-based data showed the lowest overall Arctic sea ice extent of any January since records began in 1979. Bob Henson from Wunderground noted:
January 2016's remarkable phase transition of the Arctic oscillation was driven by a rapid tropospheric warming in the Arctic, a pattern that appears to be increasing in importance, surpassing the so-called stratospheric sudden warming. The record for the lowest extent of the Arctic Ocean covered by ice was set in 2012, with a low of 1.31 million square miles (3.387 million square kilometers). This replaced the previous record set on September 18, 2007, of 1.61 million square miles (4.16 million square kilometers). The minimum extent on 18 September 2019 was 1.60 million square miles (4.153 million square kilometers).
A 2018 study of the thickness of sea ice found a decrease of 66% or 2.0 m over the last six decades and a shift from permanent ice to largely seasonal ice cover.
Earlier data
The overall trend indicated in the passive microwave record from 1978 through mid-1995 shows that the extent of Arctic sea ice is decreasing 2.7% per decade. Subsequent work with the satellite passive-microwave data indicates that from late October 1978 through the end of 1996 the extent of Arctic sea ice decreased by 2.9% per decade. Sea ice extent for the Northern Hemisphere showed a decrease of 3.8% ± 0.3% per decade from November 1978 to December 2012.
Future ice loss
An "ice-free" Arctic Ocean, sometimes referred to as a "blue ocean event" (BOE), is often defined as "having less than 1 million square kilometers of sea ice", because it is very difficult to melt the thick ice around the Canadian Arctic Archipelago. The IPCC AR5 defines "nearly ice-free conditions" as a sea ice extent of less than 106 km2 for at least five consecutive years.
Estimating the exact year when the Arctic Ocean will become "ice-free" is very difficult, due to the large role of interannual variability in sea ice trends. In Overland and Wang (2013), the authors investigated three different ways of predicting future sea ice levels. They noted that the average of all models used in 2013 was decades behind the observations, and only the subset of models with the most aggressive ice loss was able to match the observations. However, the authors cautioned that there is no guarantee those models would continue to match the observations, and hence that their estimate of ice-free conditions first appearing in the 2040s may still be flawed. Thus, they advocated for the use of expert judgement in addition to models to help predict ice-free Arctic events, but they noted that expert judgement could also be done in two different ways: directly extrapolating ice loss trends (which would suggest an ice-free Arctic in 2020) or assuming a slower decline trend punctuated by the occasional "big melt" season (such as those of 2007 and 2012), which pushes back the date to 2028 or further into the 2030s, depending on the starting assumptions about the timing and the extent of the next "big melt". Consequently, there has been a recent history of competing projections from climate models and from individual experts.
Climate models
A 2006 paper examined projections from the Community Climate System Model and predicted "near ice-free September conditions by 2040".
A 2009 paper from Muyin Wang and James E. Overland applied observational constraints to the projections from six CMIP3 climate models and estimated a nearly ice-free Arctic Ocean around September 2037, with a chance it could happen as early as 2028. In 2012, this pair of researchers repeated the exercise with CMIP5 models and found that under the highest-emission scenario in CMIP5, Representative Concentration Pathway 8.5, an ice-free September first occurs between 14 and 36 years after the baseline year of 2007, with a median of 28 years (i.e. around 2035).
In 2009, a study using 18 CMIP3 climate models found that they project an ice-free Arctic a little before 2100 under a scenario of medium future greenhouse gas emissions. In 2012, a different team used CMIP5 models and their moderate emission scenario, RCP 4.5 (which represents somewhat lower emissions than the scenario in CMIP3), and found that while their mean estimate avoids an ice-free Arctic before the end of the century, ice-free conditions in 2045 were within one standard deviation of the mean.
In 2013, a study compared projections from the best-performing subset of CMIP5 models with the output from all 30 models after it was constrained by the historical ice conditions, and found good agreement between these approaches. Altogether, it projected an ice-free September between 2054 and 2058 under RCP 8.5, while under RCP 4.5, Arctic ice gets very close to the ice-free threshold in the 2060s, but does not cross it by the end of the century, and stays at an extent of 1.7 million km2.
In 2014, the IPCC Fifth Assessment Report indicated a risk of an ice-free summer around 2050 under the scenario of highest possible emissions.
The Third U.S. National Climate Assessment (NCA), released May 6, 2014, reported that the Arctic Ocean is expected to be ice free in summer before mid-century. Models that best match historical trends project a nearly ice-free Arctic in the summer by the 2030s.
In 2021, the IPCC Sixth Assessment Report assessed that there is "high confidence" that the Arctic Ocean will likely become practically ice-free in September before the year 2050 under all SSP scenarios.
A paper published in 2021 shows that the CMIP6 models which perform best at simulating Arctic sea ice trends project the first ice-free conditions around 2035 under SSP5-8.5, which is the scenario of continually accelerating greenhouse gas emissions.
By weighting multiple CMIP6 projections, the first year of an ice-free Arctic is likely to occur during 2040–2072 under the SSP3-7.0 scenario.
Impacts on the physical environment
Global climate change
Arctic sea ice maintains the cool temperature of the polar regions and it has an important albedo effect on the climate. Its bright shiny surface reflects sunlight during the Arctic summer; the dark ocean surface exposed by the melting ice absorbs more sunlight and becomes warmer, which increases the total ocean heat content and helps to drive further sea ice loss during the melting season, as well as potentially delaying its recovery during the polar night. Arctic ice decline between 1979 and 2011 is estimated to have been responsible for as much radiative forcing as a quarter of CO2 emissions over the same period, which is equivalent to around 10% of the cumulative CO2 increase since the start of the Industrial Revolution. When compared to the other greenhouse gases, it has had the same impact as the cumulative increase in nitrous oxide, and nearly half of the cumulative increase in methane concentrations.
The effect of Arctic sea ice decline on global warming will intensify in the future as more and more ice is lost. This feedback has been accounted for by all CMIP5 and CMIP6 models, and it is included in all warming projections they make, such as the estimated warming by 2100 under each Representative Concentration Pathway and Shared Socioeconomic Pathway. They are also capable of resolving the second-order effects of sea ice loss, such as the effect on lapse rate feedback, the changes in water vapor concentrations and regional cloud feedbacks.
Ice-free summer vs. ice-free winter
In 2021, the IPCC Sixth Assessment Report said with high confidence that there is no hysteresis and no tipping point in the loss of Arctic summer sea ice. This can be explained by the increased influence of stabilizing feedback compared to the ice albedo feedback. Specifically, thinner sea ice leads to increased heat loss in the winter, creating a negative feedback loop. This counteracts the positive ice albedo feedback. As such, sea ice would recover even from a true ice-free summer during the winter, and if the next Arctic summer is less warm, it may avoid another ice-free episode until another similarly warm year down the line. However, higher levels of global warming would delay the recovery from ice-free episodes and make them occur more often and earlier in the summer. A 2018 paper estimated that an ice-free September would occur once in every 40 years under a global warming of 1.5 degrees Celsius, but once in every 8 years under 2 degrees and once in every 1.5 years under 3 degrees.
Very high levels of global warming could eventually prevent Arctic sea ice from reforming during the Arctic winter. This is known as an ice-free winter, and it ultimately amounts to a total loss of Arctic ice throughout the year. A 2022 assessment found that unlike an ice-free summer, it may represent an irreversible tipping point. It estimated that an ice-free winter is most likely to occur at around 6.3 degrees Celsius of warming, though it could potentially occur as early as 4.5 °C or as late as 8.7 °C. Relative to today's climate, an ice-free winter would add 0.6 degrees of warming, with a regional warming of between 0.6 and 1.2 degrees.
Amplified Arctic warming
Arctic amplification and its acceleration are strongly tied to declining Arctic sea ice: modelling studies show that strong Arctic amplification only occurs during the months when significant sea ice loss occurs, and that it largely disappears when the simulated ice cover is held fixed. Conversely, the high stability of ice cover in Antarctica, where the thickness of the East Antarctic ice sheet allows it to rise high above sea level, means that this continent has not experienced any net warming over the past seven decades: ice loss in the Antarctic and its contribution to sea level rise is instead driven entirely by the warming of the Southern Ocean, which had absorbed 35–43% of the total heat taken up by all oceans between 1970 and 2017.
Impacts on extreme weather
Barents Sea ice
The Barents Sea is the fastest-warming part of the Arctic, and some assessments now treat Barents Sea ice as a separate tipping point from the rest of the Arctic sea ice, suggesting that it could permanently disappear once global warming exceeds 1.5 degrees. This rapid warming also makes it easier to detect any potential connections between the state of sea ice and weather conditions elsewhere than in any other area. The first study proposing a connection between floating ice decline in the Barents Sea and the neighbouring Kara Sea (together, BKS) and more intense winters in Europe was published in 2010, and there has been extensive research into this subject since then. For instance, a 2019 paper holds BKS ice decline responsible for 44% of the 1995–2014 central Eurasian cooling trend, far more than indicated by the models, while another study from that year suggests that the decline in BKS ice reduces snow cover in northern Eurasia but increases it in central Europe. There are also potential links to summer precipitation: a connection has been proposed between the reduced BKS ice extent in November–December and greater June rainfall over South China. One paper even identified a connection between Kara Sea ice extent and the ice cover of Lake Qinghai on the Tibetan Plateau.
However, BKS ice research is often subject to the same uncertainty as the broader research into Arctic amplification/whole-Arctic sea ice loss and the jet stream, and is often challenged by the same data. Nevertheless, the most recent research still finds connections which are statistically robust, yet non-linear in nature: two separate studies published in 2021 indicate that while autumn BKS ice loss results in cooler Eurasian winters, ice loss during winter makes Eurasian winters warmer: as BKS ice loss accelerates, the risk of more severe Eurasian winter extremes diminishes while heatwave risk in the spring and summer is magnified.
Other possible impacts on weather
In 2019, it was proposed that the reduced sea ice around Greenland in autumn affects snow cover during the Eurasian winter, and this intensifies Korean summer monsoon, and indirectly affects the Indian summer monsoon.
2021 research suggested that autumn ice loss in the East Siberian Sea, Chukchi Sea and Beaufort Sea can affect spring Eurasian temperature. Autumn sea ice decline of one standard deviation in that region would reduce mean spring temperature over central Russia by nearly 0.8 °C, while increasing the probability of cold anomalies by nearly a third.
Atmospheric chemistry
A 2015 study concluded that Arctic sea ice decline accelerates methane emissions from the Arctic tundra, with the emissions for 2005-2010 being around 1.7 million tonnes higher than they would have been with the sea ice at 1981–1990 levels. One of the researchers noted, "The expectation is that with further sea ice decline, temperatures in the Arctic will continue to rise, and so will methane emissions from northern wetlands."
Cracks in Arctic sea ice expose the seawater to the air, causing mercury in the air to be absorbed into the water. This absorption leads to more mercury, a toxin, entering the food chain where it can negatively affect fish and the animals and people who consume them. Mercury is part of Earth's atmosphere due to natural causes (see mercury cycle) and due to human emissions.
Shipping
Economic implications of ice-free summers and the decline in Arctic ice volumes include a greater number of journeys along Arctic Ocean shipping lanes during the year. This number has grown from 0 in 1979 to 400–500 along the Bering Strait and more than 40 along the Northern Sea Route in 2013. Traffic through the Arctic Ocean is likely to increase further. An early study by James Hansen and colleagues suggested in 1981 that a warming of 5 to 10 °C, which they expected as the range of Arctic temperature change corresponding to doubled CO2 concentrations, could open the Northwest Passage. A 2016 study concludes that Arctic warming and sea ice decline will lead to "remarkable shifts in trade flows between Asia and Europe, diversion of trade within Europe, heavy shipping traffic in the Arctic and a substantial drop in Suez traffic. Projected shifts in trade also imply substantial pressure on an already threatened Arctic ecosystem."
In August 2017, the first ship traversed the Northern Sea Route without the use of ice-breakers. Also in 2017, the Finnish icebreaker MSV Nordica set a record for the earliest crossing of the Northwest Passage. According to the New York Times, this forebodes more shipping through the Arctic, as the sea ice melts and makes shipping easier. A 2016 report by the Copenhagen Business School found that large-scale trans-Arctic shipping will become economically viable by 2040.
Impacts on wildlife
The decline of Arctic sea ice will provide humans with access to previously remote coastal zones. As a result, this will lead to an undesirable effect on terrestrial ecosystems and put marine species at risk.
Sea ice decline has been linked to boreal forest decline in North America and is assumed to culminate with an intensifying wildfire regime in this region. The annual net primary production of the Eastern Bering Sea was enhanced by 40–50% through phytoplankton blooms during warm years of early sea ice retreat.
Polar bears are turning to alternative food sources because Arctic sea ice melts earlier and freezes later each year. As a result, they have less time to hunt their historically preferred prey of seal pups and must spend more time on land hunting other animals. The resulting diet is less nutritious, which leads to reduced body size and reproduction, both indicators of population decline in polar bears.
The Arctic Refuge contains the main denning habitat for polar bears, and the melting Arctic sea ice is contributing to the species' decline. There are only about 900 bears in the Arctic Refuge national conservation area.
As Arctic ice decays, microorganisms produce substances with various effects on melting and stability. Certain types of bacteria in rotten ice pores produce polymer-like substances, which may influence the physical properties of the ice. A team from the University of Washington studying this phenomenon hypothesizes that the polymers may provide a stabilizing effect to the ice. However, other scientists have found that algae and other microorganisms help create a substance, cryoconite, or create other pigments that increase rotting and promote the growth of the microorganisms.
See also
Abrupt climate change
Arctic sea ice ecology and history
Measurement of sea ice
Polar vortex
Sea ice thickness
Vanishing Point (2012 film)
Soil carbon feedback
References
External links
NASA Earth Observatory | Arctic Sea Ice
Piecing together the Arctic's sea ice history back to 1850
Maps
NSIDC | Arctic Sea Ice News
Global Cryosphere Watch
Daily AMSR2 Sea Ice Maps
Environment of the Arctic
Forms of water
Hydrology
Sea ice
Articles containing video clips
Climate change and the environment | Arctic sea ice decline | Physics,Chemistry,Engineering,Environmental_science | 4,249 |
65,834,550 | https://en.wikipedia.org/wiki/Automation%20in%20construction | Automation in construction is the combination of methods, processes, and systems that allow for greater machine autonomy in construction activities. Construction automation may have multiple goals, including but not limited to, reducing jobsite injuries, decreasing activity completion times, and assisting with quality control and quality assurance. Some systems may be fielded as a direct response to increasing skilled labor shortages in some countries. Opponents claim that increased automation may lead to less construction jobs and that software leaves heavy equipment vulnerable to hackers.
Research insights on this subject are today published in several journals, such as Automation in Construction by Elsevier.
Transportation Construction
Kratos Defense & Security Solutions fielded the world’s first Autonomous Truck-Mounted Attenuator (ATMA) in 2017, in conjunction with Royal Truck & Equipment.
Uses of Automation in Construction
Equipment control and management: Automation can be used to control and monitor construction equipment, such as cranes, excavators, and bulldozers.
Material handling: Automated systems can be used to handle, transport, and place materials such as concrete, bricks, and stones.
Surveying: Automated survey equipment and drones can be used to collect and analyze data on construction sites.
Quality control: Automated systems can be used to monitor and control the quality of materials and construction processes.
Safety management: Automated systems can be used to monitor and control safety conditions on construction sites.
Scheduling and planning: Automated systems can be used to manage schedules, resources, and costs.
Waste management: Automated systems can be used to manage and dispose of waste materials generated during construction.
3D printing: Automated 3D printing can be used to create prototypes, models, and even full-scale building components.
Benefits of Automation in Construction
The use of automation in construction has become increasingly prevalent in recent years due to its numerous benefits. Automation in construction refers to the use of machinery, software, and other technologies to perform tasks that were previously done manually by workers.
One of the most significant benefits of automation in construction is increased productivity. Automation can help speed up construction processes, reduce project completion times, and improve overall efficiency. For example, using automated machinery for tasks such as concrete pouring, bricklaying, and welding can significantly increase the speed and accuracy of these tasks, allowing for more work to be completed in a shorter amount of time.
Another benefit of automation in construction is improved safety. By automating tasks that are hazardous to workers, such as demolition or working at height, companies can reduce the risk of accidents and injuries on site. Automation can also help to reduce worker fatigue, which can be a significant factor in accidents and mistakes.
Overall, the use of automation in construction can improve productivity, reduce costs, increase safety, and improve the quality of construction projects. As technology continues to advance, the use of automation is likely to become even more prevalent in the construction industry.
References
Automation in construction
Internet of things
Machine learning
Applications of artificial intelligence
Heavy equipment
Self-driving cars | Automation in construction | Engineering | 588 |
12,040,981 | https://en.wikipedia.org/wiki/Gibbs%20and%20Canning | Gibbs and Canning Limited was an English manufacturer of terracotta and, in particular, architectural terracotta, located in Glascote, Tamworth, and founded in 1847.
The company manufactured a wide range of terracotta and faience: statues of lions and pelicans to adorn the Natural History Museum in London; architectural terracotta for banks and schools; and garden urns and planters. By the 1950s, when the factory finally closed, it was best known for more practical items, such as drainage pipes, sinks, vases and jars.
Today, there is little evidence of the factory in Glascote, but the legacy lives on in the decoration and plumbing of many buildings in Britain’s major towns and cities.
Buildings featuring Gibbs and Canning terracotta
Natural History Museum, South Kensington, London. Designed by Alfred Waterhouse. Both the interior and exterior statues, and the block-work, are Gibbs and Canning (G&C).
Royal Albert Hall, South Kensington, London. The buff, ornamental terracotta on the exterior.
142 Holborn Bars, Prudential Assurance Building, Holborn, London. Designed by Alfred Waterhouse with all the red terracotta by G&C.
Methodist Central Hall, Birmingham. Ornate, red terracotta.
Imperial Buildings, Victoria Street/Whitechapel corner, Liverpool, 1879. Cream terracotta.
Church of the Holy Name of Jesus, Manchester. Roof vaulting of hollow terracotta blocks, 1869–71.
Manchester Town Hall, designed again by Alfred Waterhouse.
Victoria Law Courts, Birmingham. Interior buff-coloured terracotta.
References
Further reading
Streluk, A. (2006) "Gibbs & Canning of Glascote, Tamworth", Glazed Expressions, No.55 Spring
External links
Research page including details of many buildings that used Gibbs and Canning terracotta
Chemlinski Gallery - English Terracotta
Tamworth Castle - has a small display Gibbs and Canning wares and manufacturing techniques
Building materials companies of the United Kingdom
Ceramics manufacturers of England
Staffordshire pottery
Terracotta
Design companies established in 1847
Manufacturer of architectural terracotta
Manufacturing companies established in 1847
1847 establishments in England | Gibbs and Canning | Engineering | 446 |
43,743,574 | https://en.wikipedia.org/wiki/Access%20and%20Benefit%20Sharing%20Agreement | An Access and Benefit Sharing Agreement (ABSA) is an agreement that defines the fair and equitable sharing of benefits arising from the use of genetic resources. ABSAs typically arise in relation to bioprospecting where indigenous knowledge is used to focus screening efforts for commercially valuable genetic and biochemical resources. ABSAs recognise that bioprospecting frequently relies on indigenous or traditional knowledge, and that people or communities who hold such knowledge are entitled to a share of benefits arising from its commercial utilization.
History and development
The concept of ABSAs stems from the Convention on Biological Diversity (CBD) which, among other objectives, seeks to ensure the fair and equitable sharing of benefits arising from genetic resources. However, the highly controversial principle of access and benefit sharing in the CBD stirred up a virulent debate which left most stakeholders unsatisfied with the framework provided.
The Nagoya Protocol, a supplementary agreement to the Convention on Biological Diversity, provides a legal framework for implementing that objective. Article 5 of the Nagoya Protocol requires benefits arising from the utilization of genetic resources, as well as from subsequent applications and commercialization, to be shared in a fair and equitable way with the party providing such resources. Article 5 states that such sharing shall be upon mutually agreed terms. An ABSA can be used to specify the terms on which the benefits will be shared in a particular case.
References
Agreements
Convention on Biological Diversity | Access and Benefit Sharing Agreement | Biology | 282 |
40,364,155 | https://en.wikipedia.org/wiki/Quarter%205-cubic%20honeycomb | In five-dimensional Euclidean geometry, the quarter 5-cubic honeycomb is a uniform space-filling tessellation (or honeycomb). It has half the vertices of the 5-demicubic honeycomb, and a quarter of the vertices of a 5-cube honeycomb. Its facets are 5-demicubes and runcinated 5-demicubes.
Related honeycombs
See also
Regular and uniform honeycombs in 5-space:
5-cube honeycomb
5-demicube honeycomb
5-simplex honeycomb
Truncated 5-simplex honeycomb
Omnitruncated 5-simplex honeycomb
Notes
References
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318
x3o3o x3o3o *b3*e - spaquinoh
Honeycombs (geometry)
6-polytopes | Quarter 5-cubic honeycomb | Physics,Chemistry,Materials_science | 255 |
54,484,831 | https://en.wikipedia.org/wiki/NGC%207603 | NGC 7603 is a spiral Seyfert galaxy in the constellation Pisces. It is listed (as Arp 92) in the Atlas of Peculiar Galaxies. It is interacting with the smaller elliptical galaxy PGC 71041 nearby.
This galaxy pair has long been a cornerstone for those who are critical of the view that the universe is expanding and for advocates of non-standard cosmology such as Halton Arp, Fred Hoyle, and others. This is due to the position of two quasars, one at each edge of the filament connecting the two galaxies, each with a much higher redshift than either galaxy.
References
External links
SEDS – NGC 7603
Simbad – NGC 7603
VizieR – NGC 7603
Pisces (constellation)
Unbarred spiral galaxies
Seyfert galaxies
7603
092
Interacting galaxies
Markarian galaxies | NGC 7603 | Astronomy | 177 |
39,271,235 | https://en.wikipedia.org/wiki/Far%20Ultraviolet%20Camera/Spectrograph | The Far Ultraviolet Camera/Spectrograph (UVC) was one of the experiments deployed on the lunar surface by the Apollo 16 astronauts. It consisted of a telescope and camera that obtained astronomical images and spectra in the far ultraviolet region of the electromagnetic spectrum.
Instrument
The Far Ultraviolet Camera/Spectrograph was a tripod-mounted, f/1.0, 75 mm electronographic Schmidt camera weighing 22 kg. It had a 20° field of view in the imaging mode, and a 0.5°×20° field in the spectrographic mode. Spectroscopic data were provided from 300 to 1350 ångströms, with 30 Å resolution, and images were provided in two passbands, 1050–1260 Å and 1200–1550 Å. There were two corrector plates, made of lithium fluoride (LiF) or calcium fluoride (CaF2), which could be selected for different bands of UV. The camera contained a cesium iodide (CsI) photocathode and used a film cartridge which was recovered and returned to Earth for processing.
The experiment was placed on the Descartes Highlands region of lunar surface where Apollo 16 astronauts John Young and Charles Duke landed in April 1972. To keep it cool and eliminate solar glare, it was placed in the shadow of the lunar module. It was manually aimed by the astronauts, who would re-point the telescope at targets throughout the lunar stay.
Experiment goals
The goals of the Far Ultraviolet Camera/Spectrograph spanned several disciplines of astronomy. Earth studies were made by studying the composition and structure of the Earth's upper atmosphere, the ionosphere, the geocorona, day and night airglow, and aurorae. Heliophysics studies were made by obtaining spectra and images of the solar wind, the solar bow cloud, and other gas clouds in the solar system. Astronomical studies were made by obtaining direct evidence of intergalactic hydrogen, as well as spectra of distant galaxy clusters and of objects within the Milky Way. Lunar studies were conducted by detecting gases in the lunar atmosphere and searching for possible volcanic gases. There were also considerations to evaluate the lunar surface as a site for future astronomical observatories.
Results
The film cartridge was removed during the third and final extravehicular activity and returned to Earth. The rest of the instrument package was left on the lunar surface. A total of 178 frames of film were obtained of 11 different targets, including the Earth's upper atmosphere and aurora, various nebulae and star clusters, and the Large Magellanic Cloud.
The film was digitally scanned and saved on tape. Files from these tapes can be requested from NASA. Most of the Apollo 16 and Skylab photos have been converted to JPEGs by a third-party enthusiast.
Designer
The principal investigator and chief engineer of the Far Ultraviolet Camera/Spectrograph was Dr. George Robert Carruthers, who was working at the US Naval Research Lab. In 1969, Dr. Carruthers was given a patent for "Image Converter for Detecting Electromagnetic Radiation Especially in Short Wave Lengths". For this and his further work, he received the 2012 National Medal of Technology and Innovation.
Second telescope
A second spare telescope was slightly modified and later flown on Skylab 4. It was given an aluminum (Al) and magnesium fluoride (MgF2) mirror rather than rhenium. It was mounted on Skylab's Apollo Telescope Mount for usage in orbit. Among the many images and spectra that it took, it was used to study ultraviolet emission from Comet Kohoutek.
See also
Apollo Program
Ultraviolet astronomy
References
Further reading
Ultraviolet telescopes
Space photography and videography | Far Ultraviolet Camera/Spectrograph | Astronomy | 739 |
14,617 | https://en.wikipedia.org/wiki/Intel | Intel Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, and incorporated in Delaware. Intel designs, manufactures, and sells computer components such as CPUs and related products for business and consumer markets. It is considered one of the world's largest semiconductor chip manufacturers by revenue and ranked in the Fortune 500 list of the largest United States corporations by revenue for nearly a decade, from 2007 to 2016 fiscal years, until it was removed from the ranking in 2018. In 2020, it was reinstated and ranked 45th, being the 7th-largest technology company in the ranking.
Intel supplies microprocessors for most manufacturers of computer systems, and is one of the developers of the x86 series of instruction sets found in most personal computers (PCs). It also manufactures chipsets, network interface controllers, flash memory, graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and other devices related to communications and computing. Intel has a strong presence in the high-performance general-purpose and gaming PC market with its Intel Core line of CPUs, whose high-end models are among the fastest consumer CPUs, as well as its Intel Arc series of GPUs. The Open Source Technology Center at Intel hosts PowerTOP and LatencyTOP, and supports other open source projects such as Wayland, Mesa, Threading Building Blocks (TBB), and Xen.
Intel was founded on July 18, 1968, by semiconductor pioneers Gordon Moore (of Moore's law) and Robert Noyce, along with investor Arthur Rock, and is associated with the executive leadership and vision of Andrew Grove. The company was a key component of the rise of Silicon Valley as a high-tech center, as well as being an early developer of SRAM and DRAM memory chips, which represented the majority of its business until 1981. Although Intel created the world's first commercial microprocessor chip—the Intel 4004—in 1971, it was not until the success of the PC in the early 1990s that this became its primary business.
During the 1990s, the partnership between Microsoft Windows and Intel, known as "Wintel", became instrumental in shaping the PC landscape and solidified Intel's position on the market. As a result, Intel invested heavily in new microprocessor designs in the mid to late 1990s, fostering the rapid growth of the computer industry. During this period, it became the dominant supplier of PC microprocessors, with a market share of 90%, and was known for aggressive and anti-competitive tactics in defense of its market position, particularly against AMD, as well as a struggle with Microsoft for control over the direction of the PC industry.
Since the 2000s and especially since the late 2010s, Intel has faced increasing competition, which has led to a reduction in Intel's dominance and market share in the PC market. Nevertheless, with a 68.4% market share as of 2023, Intel still leads the x86 market by a wide margin. In addition, Intel's ability to design and manufacture its own chips is considered a rarity in the semiconductor industry, as most chip designers do not have their own production facilities and instead rely on contract manufacturers (e.g. TSMC, Foxconn and Samsung), as AMD and Nvidia do.
Industries
Operating segments
Client Computing Group (51.8% of 2020 revenues): produces PC processors and related components.
Data Center Group (33.7% of 2020 revenues): produces hardware components used in server, network, and storage platforms.
Internet of Things Group (5.2% of 2020 revenues): offers platforms designed for retail, transportation, industrial, buildings and home use.
Programmable Solutions Group (2.4% of 2020 revenues): manufactures programmable semiconductors (primarily FPGAs).
Customers
In 2023, Dell accounted for about 19% of Intel's total revenues, Lenovo accounted for 11% of total revenues, and HP Inc. accounted for 10% of total revenues. As of May 2024, the U.S. Department of Defense is another large customer for Intel. In September 2024, Intel reportedly qualified for as much as $3.5 billion in federal grants to make semiconductors for the Defense Department.
Market share
According to IDC, while Intel enjoyed the biggest market share in both the overall worldwide PC microprocessor market (73.3%) and the mobile PC microprocessor (80.4%) in the second quarter of 2011, the numbers decreased by 1.5% and 1.9% compared to the first quarter of 2011.
Intel's market share decreased significantly in the enthusiast market as of 2019, and they have faced delays for their 10 nm products. According to former Intel CEO Bob Swan, the delay was caused by the company's overly aggressive strategy for moving to its next node.
Historical market share
In the 1980s, Intel was among the world's top ten sellers of semiconductors (10th in 1987). Along with Microsoft Windows, it was part of the "Wintel" personal computer domination in the 1990s and early 2000s. In 1992, Intel became the biggest semiconductor chip maker by revenue and held the position until 2018 when Samsung Electronics surpassed it, but Intel returned to its former position the year after. Other major semiconductor companies include TSMC, GlobalFoundries, Texas Instruments, ASML, STMicroelectronics, United Microelectronics Corporation (UMC), Micron, SK Hynix, Kioxia, and SMIC.
Major competitors
Intel's competitors in PC chipsets included AMD, VIA Technologies, Silicon Integrated Systems, and Nvidia. Intel's competitors in networking include NXP Semiconductors, Infineon, Broadcom Limited, Marvell Technology Group and Applied Micro Circuits Corporation, and competitors in flash memory included Spansion, Samsung Electronics, Qimonda, Kioxia, STMicroelectronics, Micron, SK Hynix, and IBM.
The only major competitor in the x86 processor market is AMD, with which Intel has had full cross-licensing agreements since 1976: each partner can use the other's patented technological innovations without charge after a certain time. However, the cross-licensing agreement is canceled in the event of an AMD bankruptcy or takeover.
Some smaller competitors, such as VIA Technologies, produce low-power x86 processors for small-factor computers and portable equipment. However, the advent of such mobile computing devices, in particular smartphones, has led to a decline in PC sales. Since over 95% of the world's smartphones currently use processor cores designed by Arm, using the Arm instruction set, Arm has become a major competitor for Intel's processor market. Arm is also attempting to gain a foothold in the PC and server market, with Ampere and IBM each designing CPUs for servers and supercomputers. The only other major competitor in processor instruction sets is RISC-V, an open-source CPU instruction set. The major Chinese phone and telecommunications manufacturer Huawei has released chips based on the RISC-V instruction set due to US sanctions against China.
Intel has been involved in several disputes regarding the violation of antitrust laws, which are noted below.
Carbon footprint
Intel reported total CO2e emissions (direct + indirect) for the twelve months ending December 31, 2020, at 2,882 Kt (+94/+3.4% y-o-y). Intel plans to reduce carbon emissions 10% by 2030 from a 2020 base year.
Manufacturing locations
Intel has self-reported that they have Wafer fabrication plants in the United States, Ireland, and Israel. They have also self-reported that they have assembly and testing sites mostly in China, Costa Rica, Malaysia, and Vietnam, and one site in the United States.
Corporate history
Origins
Intel was incorporated in Mountain View, California, on July 18, 1968, by Gordon E. Moore (known for "Moore's law"), a chemist; Robert Noyce, a physicist and co-inventor of the integrated circuit; and Arthur Rock, an investor and venture capitalist. Moore and Noyce had left Fairchild Semiconductor, where they were part of the "traitorous eight" who founded it. There were originally 500,000 shares outstanding of which Dr. Noyce bought 245,000 shares, Dr. Moore 245,000 shares, and Mr. Rock 10,000 shares; all at $1 per share. Rock offered $2,500,000 of convertible debentures to a limited group of private investors (equivalent to $21 million in 2022), convertible at $5 per share. Just 2 years later, Intel became a public company via an initial public offering (IPO), raising $6.8 million ($23.50 per share). Intel was one of the very first companies to be listed on the then-newly established National Association of Securities Dealers Automated Quotations (NASDAQ) stock exchange. Intel's third employee was Andy Grove, a chemical engineer, who later ran the company through much of the 1980s and the high-growth 1990s.
In deciding on a name, Moore and Noyce quickly rejected "Moore Noyce", a near homophone of "more noise" – an ill-suited name for an electronics company, since noise in electronics is usually undesirable and typically associated with bad interference. Instead, they founded the company as NM Electronics on July 18, 1968, but by the end of the month had changed the name to Intel, which stood for Integrated Electronics. Since "Intel" was already trademarked by the hotel chain Intelco, they had to buy the rights to the name.
Early history
At its founding, Intel was distinguished by its ability to make logic circuits using semiconductor devices. The founders' goal was the semiconductor memory market, widely predicted to replace magnetic-core memory. Its first product, a quick entry into the small, high-speed memory market in 1969, was the 3101 Schottky TTL bipolar 64-bit static random-access memory (SRAM), which was nearly twice as fast as earlier Schottky diode implementations by Fairchild and the Electrotechnical Laboratory in Tsukuba, Japan. In the same year, Intel also produced the 3301 Schottky bipolar 1024-bit read-only memory (ROM) and the first commercial metal–oxide–semiconductor field-effect transistor (MOSFET) silicon gate SRAM chip, the 256-bit 1101.
While the 1101 was a significant advance, its complex static cell structure made it too slow and costly for mainframe memories. The three-transistor cell implemented in the first commercially available dynamic random-access memory (DRAM), the 1103 released in 1970, solved these issues. The 1103 was the bestselling semiconductor memory chip in the world by 1972, as it replaced core memory in many applications. Intel's business grew during the 1970s as it expanded and improved its manufacturing processes and produced a wider range of products, still dominated by various memory devices.
Intel created the first commercially available microprocessor, the Intel 4004, in 1971. The microprocessor represented a notable advance in the technology of integrated circuitry, as it miniaturized the central processing unit of a computer, which then made it possible for small machines to perform calculations that in the past only very large machines could do. Considerable technological innovation was needed before the microprocessor could actually become the basis of what was first known as a "mini computer" and then known as a "personal computer". Intel also created one of the first microcomputers in 1973.
Intel opened its first international manufacturing facility in 1972, in Malaysia, which would host multiple Intel operations, before opening assembly facilities and semiconductor plants in Singapore and Jerusalem in the early 1980s, and manufacturing and development centers in China, India, and Costa Rica in the 1990s. By the early 1980s, its business was dominated by DRAM chips. However, increased competition from Japanese semiconductor manufacturers had, by 1983, dramatically reduced the profitability of this market. The growing success of the IBM personal computer, based on an Intel microprocessor, was among factors that convinced Gordon Moore (CEO since 1975) to shift the company's focus to microprocessors and to change fundamental aspects of that business model. Moore's decision to sole-source Intel's 386 chip played into the company's continuing success.
By the end of the 1980s, buoyed by its fortuitous position as microprocessor supplier to IBM and IBM's competitors within the rapidly growing personal computer market, Intel embarked on a 10-year period of unprecedented growth as the primary and most profitable hardware supplier to the PC industry, part of the winning 'Wintel' combination. Moore handed over his position as CEO to Andy Grove in 1987. By launching its Intel Inside marketing campaign in 1991, Intel was able to associate brand loyalty with consumer selection, so that by the end of the 1990s, its line of Pentium processors had become a household name.
Challenges to dominance (2000s)
After 2000, growth in demand for high-end microprocessors slowed. Competitors, most notably AMD (Intel's largest competitor in its primary x86 architecture market), garnered significant market share, initially in low-end and mid-range processors but ultimately across the product range, and Intel's dominant position in its core market was greatly reduced, mostly due to the controversial NetBurst microarchitecture. In the early 2000s, then-CEO Craig Barrett attempted to diversify the company's business beyond semiconductors, but few of these activities were ultimately successful.
Litigation
Intel had also for a number of years been embroiled in litigation. U.S. law did not initially recognize intellectual property rights related to microprocessor topology (circuit layouts), until the Semiconductor Chip Protection Act of 1984, a law sought by Intel and the Semiconductor Industry Association (SIA). During the late 1980s and 1990s (after this law was passed), Intel also sued companies that tried to develop competitor chips to the 80386 CPU. The lawsuits were noted to significantly burden the competition with legal bills, even if Intel lost the suits. Antitrust allegations had been simmering since the early 1990s and had been the cause of one lawsuit against Intel in 1991. In 2004 and 2005, AMD brought further claims against Intel related to unfair competition.
Reorganization and success with Intel Core (2005–2015)
In 2005, CEO Paul Otellini reorganized the company to refocus its core processor and chipset business on platforms (enterprise, digital home, digital health, and mobility).
On June 6, 2005, Steve Jobs, then CEO of Apple, announced that Apple would be using Intel's x86 processors for its Macintosh computers, switching from the PowerPC architecture developed by the AIM alliance. This was seen as a win for Intel, although an analyst called the move "risky" and "foolish", as Intel's offerings at the time were considered to be behind those of AMD and IBM.
In 2006, Intel unveiled its Core microarchitecture to widespread critical acclaim; the product range was perceived as an exceptional leap in processor performance that at a stroke regained much of its leadership of the field. In 2008, Intel had another "tick" when it introduced the Penryn microarchitecture, fabricated using the 45 nm process node. Later that year, Intel released a processor with the Nehalem architecture to positive reception.
On June 27, 2006, the sale of Intel's XScale assets was announced. Intel agreed to sell the XScale processor business to Marvell Technology Group for an estimated $600 million and the assumption of unspecified liabilities. The move was intended to permit Intel to focus its resources on its core x86 and server businesses, and the acquisition was completed on November 9, 2006.
In 2008, Intel spun off key assets of a solar startup business effort to form an independent company, SpectraWatt Inc. In 2011, SpectraWatt filed for bankruptcy.
In February 2011, Intel began to build a new microprocessor manufacturing facility in Chandler, Arizona, completed in 2013 at a cost of $5 billion. The building is now the 10 nm-certified Fab 42 and is connected to the other Fabs (12, 22, 32) on Ocotillo Campus via an enclosed bridge known as the Link. The company produces three-quarters of its products in the United States, although three-quarters of its revenue comes from overseas.
The Alliance for Affordable Internet (A4AI) was launched in October 2013 and Intel is part of the coalition of public and private organizations that also includes Facebook, Google, and Microsoft. Led by Sir Tim Berners-Lee, the A4AI seeks to make Internet access more affordable so that access is broadened in the developing world, where only 31% of people are online. Intel will help to decrease Internet access prices so that they fall below the UN Broadband Commission's worldwide target of 5% of monthly income.
Attempts at entering the smartphone market
In April 2011, Intel began a pilot project with ZTE Corporation to produce smartphones using the Intel Atom processor for China's domestic market. In December 2011, Intel announced that it reorganized several of its business units into a new mobile and communications group that would be responsible for the company's smartphone, tablet, and wireless efforts. Intel planned to introduce Medfield – a processor for tablets and smartphones – to the market in 2012, as an effort to compete with Arm. As a 32-nanometer processor, Medfield is designed to be energy-efficient, which is one of the core features in Arm's chips.
At the Intel Developers Forum (IDF) 2011 in San Francisco, Intel's partnership with Google was announced. In January 2012, Google announced Android 2.3, supporting Intel's Atom microprocessor. In 2013, Intel's Kirk Skaugen said that Intel's exclusive focus on Microsoft platforms was a thing of the past and that they would now support all "tier-one operating systems" such as Linux, Android, iOS, and Chrome.
In 2014, Intel cut thousands of employees in response to "evolving market trends", and offered to subsidize manufacturers for the extra costs involved in using Intel chips in their tablets. In April 2016, Intel cancelled the SoFIA platform and the Broxton Atom SoC for smartphones, effectively leaving the smartphone market.
Intel custom foundry
Finding itself with excess fab capacity after the failure of the Ultrabook to gain market traction and with PC sales declining, in 2013 Intel reached a foundry agreement to produce chips for Altera using a 14 nm process. General Manager of Intel's custom foundry division Sunit Rikhi indicated that Intel would pursue further such deals in the future. This was after poor sales of Windows 8 hardware caused a major retrenchment for most of the major semiconductor manufacturers, except for Qualcomm, which continued to see healthy purchases from its largest customer, Apple.
As of July 2013, five companies were using Intel's fabs via the Intel Custom Foundry division: Achronix, Tabula, Netronome, Microsemi, and Panasonic. Most are field-programmable gate array (FPGA) makers, but Netronome designs network processors. Only Achronix had begun shipping chips made by Intel using the 22 nm Tri-Gate process. Several other customers also existed but had not been announced at the time.
The foundry business was closed in 2018 due to Intel's issues with its manufacturing.
Security and manufacturing challenges (2016–2021)
Intel continued its tick-tock model of a microarchitecture change followed by a die shrink until the 6th-generation Core family based on the Skylake microarchitecture. This model was deprecated in 2016, with the release of the 7th-generation Core family (codenamed Kaby Lake), ushering in the process–architecture–optimization model. As Intel struggled to shrink their process node from 14 nm to 10 nm, processor development slowed down and the company continued to use the Skylake microarchitecture until 2020, albeit with optimizations.
10 nm process node issues
While Intel originally planned to introduce 10 nm products in 2016, it later became apparent that there were manufacturing issues with the node. The company first delayed mass production of its 10 nm products to 2017, then to 2018, and then to 2019; the first microprocessor on that node, Cannon Lake (marketed as 8th-generation Core), was released only in small quantities in 2018. Despite rumors of the process being cancelled, Intel finally introduced mass-produced 10 nm 10th-generation Intel Core mobile processors (codenamed "Ice Lake") in September 2019.
Intel later acknowledged that their strategy to shrink to 10 nm was too aggressive. While other foundries used up to four steps in 10 nm or 7 nm processes, the company's 10 nm process required up to five or six multi-pattern steps. In addition, Intel's 10 nm process is denser than its counterpart processes from other foundries. Since Intel's microarchitecture and process node development were coupled, processor development stagnated.
Security flaws
In early January 2018, it was reported that all Intel processors made since 1995 (besides Intel Itanium and pre-2013 Intel Atom) had been subject to two security flaws dubbed Meltdown and Spectre.
Renewed competition and other developments (2018–present)
Due to Intel's issues with its 10 nm process node and the company's slow processor development, the company now found itself in a market with intense competition. The company's main competitor, AMD, introduced the Zen microarchitecture and a new chiplet-based design to critical acclaim. Since its introduction, AMD, once unable to compete with Intel in the high-end CPU market, has undergone a resurgence, and Intel's dominance and market share have considerably decreased. In addition, Apple began to transition away from the x86 architecture and Intel processors to their own Apple silicon for their Macintosh computers in 2020. The transition is expected to affect Intel minimally; however, it might prompt other PC manufacturers to reevaluate their reliance on Intel and the x86 architecture.
'IDM 2.0' strategy
On March 23, 2021, CEO Pat Gelsinger laid out new plans for the company. These include a new strategy, called IDM 2.0, that includes investments in manufacturing facilities, use of both internal and external foundries, and a new foundry business called Intel Foundry Services (IFS), a standalone business unit. Unlike Intel Custom Foundry, IFS will offer a combination of packaging and process technology, and Intel's IP portfolio including x86 cores. Other plans for the company include a partnership with IBM and a new event for developers and engineers, called "Intel ON". Gelsinger also confirmed that Intel's 7 nm process is on track, and that the first products using their 7 nm process (also known as Intel 4) are Ponte Vecchio and Meteor Lake.
In January 2022, Intel reportedly selected New Albany, Ohio, near Columbus, as the site for a major new manufacturing facility. The facility will cost at least $20 billion, and the company expects it to begin producing chips by 2025. The same year, Intel also chose Magdeburg, Germany, as the site for two new chip mega-factories, a €17 billion investment (topping Tesla's investment in Brandenburg). Construction was initially planned to start in 2023 but has been postponed to late 2024, with the start of production planned for 2027. Including subcontractors, the project would create 10,000 new jobs.
In August 2022, Intel signed a $30 billion partnership with Brookfield Asset Management to fund its recent factory expansions. As part of the deal, Intel would have a controlling stake by funding 51% of the cost of building new chip-making facilities in Chandler, with Brookfield owning the remaining 49% stake, allowing the companies to split the revenue from those facilities.
On January 31, 2023, as part of $3 billion in cost reductions, Intel announced pay cuts affecting employees above midlevel, ranging from 5% upwards. It also suspended bonuses and merit pay increases, while reducing retirement plan matching. These cost reductions followed layoffs announced in the fall of 2022.
In October 2023, Intel confirmed it would be the first commercial user of a high-NA EUV lithography tool, as part of its plan to regain process leadership from TSMC.
In December 2023, Intel unveiled Gaudi3, an artificial intelligence (AI) chip for generative AI software, slated to launch in 2024 and compete with rival chips from Nvidia and AMD. On June 4, 2024, Intel announced AI chips for data centers, the Xeon 6 processor, aiming for better performance and power efficiency compared to its predecessor. Intel's Gaudi 2 and Gaudi 3 AI accelerators were revealed to be more cost-effective than competitors' offerings. Additionally, Intel disclosed architecture details for its Lunar Lake processors for AI PCs, which were released on September 24, 2024.
In August 2024, after posting a $1.6 billion loss for Q2 and below-expectations earnings, Intel announced "significant actions to reduce our costs. We plan to deliver $10 billion in cost savings in 2025, and this includes reducing our head count by roughly 15,000 roles, or 15% of our workforce." To reach this goal, the company will offer early retirement and voluntary departure options.
On November 1, 2024, it was announced that Intel will drop out of the Dow Jones Industrial Average on November 8 prior to the stock market open, with Nvidia taking its place.
In December 2024, Intel's CEO Pat Gelsinger was ousted amid ongoing struggles to revitalize the company, which had seen a significant decline in stock value during his tenure. Gelsinger's resignation, effective December 1, followed a board meeting where directors expressed dissatisfaction with the slow progress of his ambitious turnaround strategy. Despite efforts to enhance Intel's manufacturing capabilities and compete with rivals like AMD and Nvidia, the company faced mounting challenges, including a $16.6 billion loss and a 60% drop in share prices since Gelsinger's appointment in 2021. Following his departure, Intel appointed David Zinsner and Michelle Johnston Holthaus as interim co-CEOs while searching for a permanent successor. Gelsinger's exit underscored the turmoil at Intel as it grappled with its identity crisis and sought to regain its position in the semiconductor industry.
Product and market history
SRAMs, DRAMs, and the microprocessor
Intel's first products were shift register memory and random-access memory integrated circuits, and Intel grew to be a leader in the fiercely competitive DRAM, SRAM, and ROM markets throughout the 1970s. Concurrently, Intel engineers Marcian Hoff, Federico Faggin, Stanley Mazor, and Masatoshi Shima invented Intel's first microprocessor. Originally developed for the Japanese company Busicom to replace a number of ASICs in a calculator already produced by Busicom, the Intel 4004 was introduced to the mass market on November 15, 1971, though the microprocessor did not become the core of Intel's business until the mid-1980s. (Note: Intel is usually given credit with Texas Instruments for the almost-simultaneous invention of the microprocessor.)
In 1983, at the dawn of the personal computer era, Intel's profits came under increased pressure from Japanese memory-chip manufacturers, and then-president Andy Grove focused the company on microprocessors. Grove described this transition in the book Only the Paranoid Survive. A key element of his plan was the notion, then considered radical, of becoming the single source for successors to the popular 8086 microprocessor.
Until then, the manufacture of complex integrated circuits was not reliable enough for customers to depend on a single supplier, but Grove began producing processors in three geographically distinct factories, and ceased licensing the chip designs to competitors such as AMD. When the PC industry boomed in the late 1980s and 1990s, Intel was one of the primary beneficiaries.
Early x86 processors and the IBM PC
Despite the ultimate importance of the microprocessor, the 4004 and its successors the 8008 and the 8080 were never major revenue contributors at Intel. When the next processor, the 8086 (and its variant the 8088), was completed in 1978, Intel embarked on a major marketing and sales campaign for that chip, nicknamed "Operation Crush", intended to win as many customers for the processor as possible. One design win was the newly created IBM PC division, though the importance of this was not fully realized at the time.
IBM introduced its personal computer in 1981, and it was rapidly successful. In 1982, Intel created the 80286 microprocessor, which, two years later, was used in the IBM PC/AT. Compaq, the first IBM PC "clone" manufacturer, produced a desktop system based on the faster 80286 processor in 1985 and in 1986 quickly followed with the first 80386-based system, beating IBM and establishing a competitive market for PC-compatible systems and setting up Intel as a key component supplier.
In 1975, the company had started a project to develop a highly advanced 32-bit microprocessor, finally released in 1981 as the Intel iAPX 432. The project was too ambitious and the processor was never able to meet its performance objectives, and it failed in the marketplace. Intel extended the x86 architecture to 32 bits instead.
386 microprocessor
During this period Andrew Grove dramatically redirected the company, closing much of its DRAM business and directing resources to the microprocessor business. Of perhaps greater importance was his decision to "single-source" the 386 microprocessor. Prior to this, microprocessor manufacturing was in its infancy, and manufacturing problems frequently reduced or stopped production, interrupting supplies to customers. To mitigate this risk, these customers typically insisted that multiple manufacturers produce chips they could use to ensure a consistent supply. The 8080 and 8086-series microprocessors were produced by several companies, notably AMD, with which Intel had a technology-sharing contract.
Grove made the decision not to license the 386 design to other manufacturers, instead, producing it in three geographically distinct factories: Santa Clara, California; Hillsboro, Oregon; and Chandler, a suburb of Phoenix, Arizona. He convinced customers that this would ensure consistent delivery. In doing this, Intel breached its contract with AMD, which sued and was paid millions of dollars in damages but could not manufacture new Intel CPU designs any longer. (Instead, AMD started to develop and manufacture its own competing x86 designs.)
As the success of Compaq's Deskpro 386 established the 386 as the dominant CPU choice, Intel achieved a position of near-exclusive dominance as its supplier. Profits from this funded rapid development of both higher-performance chip designs and higher-performance manufacturing capabilities, propelling Intel to a position of unquestioned leadership by the early 1990s.
486, Pentium, and Itanium
Intel introduced the 486 microprocessor in 1989, and in 1990 established a second design team, designing the processors code-named "P5" and "P6" in parallel and committing to a major new processor every two years, versus the four or more years such designs had previously taken. The P5 project was earlier known as "Operation Bicycle", referring to the cycles of the processor through two parallel execution pipelines. The P5 was introduced in 1993 as the Intel Pentium, substituting a registered trademark name for the former part number. (Numbers, such as 486, cannot be legally registered as trademarks in the United States.) The P6 followed in 1995 as the Pentium Pro and improved into the Pentium II in 1997. New architectures were developed alternately in Santa Clara, California and Hillsboro, Oregon.
The Santa Clara design team embarked in 1993 on a successor to the x86 architecture, codenamed "P7". The first attempt was dropped a year later but quickly revived in a cooperative program with Hewlett-Packard engineers, though Intel soon took over primary design responsibility. The resulting implementation of the IA-64 64-bit architecture was the Itanium, finally introduced in June 2001. The Itanium's performance running legacy x86 code did not meet expectations, and it failed to compete effectively with x86-64, which was AMD's 64-bit extension of the 32-bit x86 architecture (Intel uses the name Intel 64, previously EM64T). In 2017, Intel announced that the Itanium 9700 series (Kittson) would be the last Itanium chips produced.
The Hillsboro team designed the Willamette processors (initially code-named P68), which were marketed as the Pentium 4.
During this period, Intel undertook two major supporting advertising campaigns. The first campaign, the 1991 "Intel Inside" marketing and branding campaign, is widely known and has become synonymous with Intel itself. The idea of "ingredient branding" was new at the time, with only NutraSweet and a few others making attempts to do so. One of the key architects of the marketing team was the head of the microprocessor division, David House. He coined the slogan "Intel Inside". This campaign established Intel, which had been a component supplier little-known outside the PC industry, as a household name.
The second campaign, run by Intel's Systems Group beginning in the early 1990s, showcased manufacturing of PC motherboards, the main board component of a personal computer and the one into which the processor (CPU) and memory (RAM) chips are plugged. The Systems Group campaign was less well known than the Intel Inside campaign.
Shortly after, Intel began manufacturing fully configured "white box" systems for the dozens of PC clone companies that rapidly sprang up. At its peak in the mid-1990s, Intel manufactured over 15% of all PCs, making it the third-largest supplier at the time.
During the 1990s, Intel Architecture Labs (IAL) was responsible for many of the hardware innovations for the PC, including the PCI Bus, the PCI Express (PCIe) bus, and Universal Serial Bus (USB). IAL's software efforts met with a more mixed fate; its video and graphics software was important in the development of software digital video, but later its efforts were largely overshadowed by competition from Microsoft. The competition between Intel and Microsoft was revealed in testimony by then IAL Vice-president Steven McGeady at the Microsoft antitrust trial (United States v. Microsoft Corp.).
Pentium flaw
In June 1994, Intel engineers discovered a flaw in the floating-point math subsection of the P5 Pentium microprocessor. Under certain data-dependent conditions, the low-order bits of the result of a floating-point division would be incorrect. The error could compound in subsequent calculations. Intel corrected the error in a future chip revision, and under public pressure it issued a total recall and replaced the defective Pentium CPUs (which were limited to some 60, 66, 75, 90, and 100 MHz models) on customer request.
The bug was discovered independently in October 1994 by Thomas Nicely, Professor of Mathematics at Lynchburg College. He contacted Intel but received no response. On October 30, he posted a message about his finding on the Internet. Word of the bug spread quickly and reached the industry press. The bug was easy to replicate; a user could enter specific numbers into the calculator on the operating system. Consequently, many users did not accept Intel's statements that the error was minor and "not even an erratum". During Thanksgiving 1994, The New York Times ran a piece by journalist John Markoff spotlighting the error. Intel changed its position and offered to replace every chip, quickly putting in place a large end-user support organization. This resulted in a $475 million charge against Intel's 1994 revenue. Dr. Nicely later learned that Intel had discovered the FDIV bug in its own testing a few months before him (but had decided not to inform customers).
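The flaw could be demonstrated with a single division. The following is a minimal sketch of the widely publicized test case from the time, shown here in Python for illustration; the original checks were run in spreadsheets and calculators on Pentium hardware:

```python
# Widely publicized FDIV test operands: on a correct FPU the residual
# x - (x / y) * y is 0.0 (or a negligible rounding residue). A flawed
# Pentium computed 4195835 / 3145727 as 1.33373906... instead of
# 1.33382044..., leaving a residual of about 256.
x, y = 4195835.0, 3145727.0
print(x / y)            # correct hardware: 1.3338204491362410...
print(x - (x / y) * y)  # correct hardware: 0.0; flawed Pentium: ~256.0
```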
The "Pentium flaw" incident, Intel's response to it, and the surrounding media coverage propelled Intel from being a technology supplier generally unknown to most computer users to a household name. Dovetailing with an uptick in the "Intel Inside" campaign, the episode is considered to have been a positive event for Intel, changing some of its business practices to be more end-user focused and generating substantial public awareness, while avoiding a lasting negative impression.
Intel Core
The Intel Core line originated from the original Core brand, with the release of the 32-bit Yonah CPU, Intel's first dual-core mobile (low-power) processor. Derived from the Pentium M, the processor family used an enhanced version of the P6 microarchitecture. Its successor, the Core 2 family, was released on July 27, 2006. This was based on the Intel Core microarchitecture, and was a 64-bit design. Instead of focusing on higher clock rates, the Core microarchitecture emphasized power efficiency and a return to lower clock speeds. It also provided more efficient decoding stages, execution units, caches, and buses, reducing the power consumption of Core 2-branded CPUs while increasing their processing capacity.
In November 2008, Intel released the 1st-generation Core processors based on the Nehalem microarchitecture. Intel also introduced a new naming scheme, with the three variants named Core i3, i5, and i7 (joined by i9 from the 7th generation onwards). Unlike the previous naming scheme, these names no longer correspond to specific technical features. Nehalem was succeeded by the Westmere microarchitecture in 2010, which brought a die shrink to 32 nm and included Intel HD Graphics.
In 2011, Intel released the Sandy Bridge-based 2nd-generation Core processor family. This generation featured an 11% performance increase over Nehalem. It was succeeded by Ivy Bridge-based 3rd-generation Core, introduced at the 2012 Intel Developer Forum. Ivy Bridge featured a die shrink to 22 nm, and supported both DDR3 memory and DDR3L chips.
Intel continued its tick-tock model of a microarchitecture change followed by a die shrink until the 6th-generation Core family based on the Skylake microarchitecture. This model was deprecated in 2016 with the release of the 7th-generation Core family based on Kaby Lake, ushering in the process–architecture–optimization model. From 2016 until 2021, Intel released further optimizations of the Skylake microarchitecture with Kaby Lake R, Amber Lake, Whiskey Lake, Coffee Lake, Coffee Lake R, and Comet Lake. Intel struggled to shrink its process node from 14 nm to 10 nm, with the first microarchitecture on that node, Cannon Lake (marketed as 8th-generation Core), released only in small quantities in 2018.
In 2019, Intel released the 10th-generation of Core processors, codenamed "Amber Lake", "Comet Lake", and "Ice Lake". Ice Lake, based on the Sunny Cove microarchitecture, was produced on the 10 nm process and was limited to low-power mobile processors. Both Amber Lake and Comet Lake were based on a refined 14 nm node, with the latter being used for desktop and high-performance mobile products and the former used for low-power mobile products.
In September 2020, 11th-generation Core mobile processors, codenamed Tiger Lake, were launched. Tiger Lake is based on the Willow Cove microarchitecture and a refined 10 nm node. Intel later released 11th-generation Core desktop processors (codenamed "Rocket Lake"), fabricated using Intel's 14 nm process and based on the Cypress Cove microarchitecture, on March 30, 2021. It replaced Comet Lake desktop processors. All 11th-generation Core processors feature new integrated graphics based on the Intel Xe microarchitecture.
Both desktop and mobile products were unified under a single process node with the release of 12th-generation Intel Core processors (codenamed "Alder Lake") in late 2021. This generation is fabricated using Intel's 10 nm process, called Intel 7, for both desktop and mobile processors, and is based on a hybrid architecture utilizing high-performance Golden Cove cores and high-efficiency Gracemont (Atom) cores.
Use of Intel products by Apple Inc. (2005–2019)
On June 6, 2005, Steve Jobs, then CEO of Apple, announced that Apple would be transitioning the Macintosh from its long favored PowerPC architecture to the Intel x86 architecture because the future PowerPC road map was unable to satisfy Apple's needs. This was seen as a win for Intel, although an analyst called the move "risky" and "foolish", as Intel's current offerings at the time were considered to be behind those of AMD and IBM. The first Mac computers containing Intel CPUs were announced on January 10, 2006, and Apple had its entire line of consumer Macs running on Intel processors by early August 2006. The Apple Xserve server was updated to Intel Xeon processors from November 2006 and was offered in a configuration similar to Apple's Mac Pro.
Despite Apple's use of Intel products, relations between the two companies were strained at times. Rumors of Apple switching from Intel processors to their own designs began circulating as early as 2011. On June 22, 2020, during Apple's annual WWDC, Tim Cook, Apple's CEO, announced that it would be transitioning the company's entire Mac line from Intel CPUs to custom Apple-designed processors based on the Arm architecture over the course of the next two years. In the short term, this transition was estimated to have minimal effects on Intel, as Apple only accounted for 2% to 4% of its revenue. However, at the time it was believed that Apple's shift to its own chips might prompt other PC manufacturers to reassess their reliance on Intel and the x86 architecture. By November 2020, Apple unveiled the M1, its processor custom-designed for the Mac.
Solid-state drives (SSDs)
In 2008, Intel began shipping mainstream solid-state drives (SSDs) with up to 160 GB storage capacities. As with their CPUs, Intel develops SSD chips using ever-smaller nanometer processes. These SSDs make use of industry standards such as NAND flash, mSATA, PCIe, and NVMe. In 2017, Intel introduced SSDs based on 3D XPoint technology under the Optane brand name.
In 2021, SK Hynix acquired most of Intel's NAND memory business for $7 billion, with a remaining transaction worth $2 billion expected in 2025. Intel also discontinued its consumer Optane products in 2021. In July 2022, Intel disclosed in its Q2 earnings report that it would cease future product development within its Optane business, which in turn effectively discontinued the development of 3D XPoint as a whole.
Supercomputers
The Intel Scientific Computers division was founded in 1984 by Justin Rattner, to design and produce parallel computers based on Intel microprocessors connected in hypercube internetwork topology. In 1992, the name was changed to the Intel Supercomputing Systems Division, and development of the iWarp architecture was also subsumed. The division designed several supercomputer systems, including the Intel iPSC/1, iPSC/2, iPSC/860, Paragon and ASCI Red. In November 2014, Intel stated that it was planning to use optical fibers to improve networking within supercomputers.
Fog computing
On November 19, 2015, Intel, alongside Arm, Dell, Cisco Systems, Microsoft, and Princeton University, founded the OpenFog Consortium, to promote interests and development in fog computing. Intel's Chief Strategist for the IoT Strategy and Technology Office, Jeff Fedders, became the consortium's first president.
Self-driving cars
Intel is one of the biggest stakeholders in the self-driving car industry, having joined the race in mid-2017 after joining forces with Mobileye. The company is also one of the first in the sector to research consumer acceptance, after an AAA report quoted a 78% nonacceptance rate for the technology in the U.S.
Safety levels of autonomous driving technology, the thought of abandoning control to a machine, and the psychological comfort of passengers in such situations were the major discussion topics initially. The commuters also stated that they did not want to see everything the car was doing, referring primarily to the steering wheel turning itself with no one sitting in the driver's seat. Intel also learned that voice control is vital, and that a well-designed interface between human and machine eases discomfort and brings back some sense of control. Notably, Intel included only 10 people in this study, which limits its credibility; in a video posted on YouTube, Intel acknowledged this and called for further testing.
Programmable devices
Intel formed a new business unit called the Programmable Solutions Group (PSG) on completion of its Altera acquisition. Intel has since sold Stratix, Arria, and Cyclone FPGAs. In 2019, Intel released Agilex FPGAs: chips aimed at data centers, 5G applications, and other uses.
In October 2023, Intel announced it would be spinning off PSG into a separate company at the start of 2024, while maintaining majority ownership.
Competition, antitrust, and espionage
By the end of the 1990s, microprocessor performance had outstripped software demand for that CPU power. Aside from high-end server systems and software, whose demand dropped with the end of the "dot-com bubble", consumer needs were effectively met by increasingly low-cost systems after 2000.
Intel's strategy had been to release processors with significantly better performance in quick succession, as seen with the Pentium II in May 1997, the Pentium III in February 1999, and the Pentium 4 in the fall of 2000. Once consumers no longer saw these performance gains as essential, the strategy became ineffective, leaving an opportunity for rapid gains by competitors, notably AMD. This, in turn, lowered the profitability of the processor line and ended an era of unprecedented dominance of the PC hardware by Intel.
Intel's dominance in the x86 microprocessor market led to numerous charges of antitrust violations over the years, including FTC investigations in both the late 1980s and in 1999, and civil actions such as the 1997 suit by Digital Equipment Corporation (DEC) and a patent suit by Intergraph. Intel's market dominance (at one time it controlled over 85% of the market for 32-bit x86 microprocessors), combined with its own hardball legal tactics (such as its infamous 338 patent suit against PC manufacturers), made it an attractive target for litigation, culminating in Intel agreeing in 2009 to pay AMD $1.25 billion and grant it a perpetual patent cross-license, as well as in several antitrust judgements against Intel in Europe, Korea, and Japan.
A case of industrial espionage arose in 1995 that involved both Intel and AMD. Bill Gaede, an Argentine formerly employed both at AMD and at Intel's Arizona plant, was arrested for attempting in 1993 to sell the i486 and P5 Pentium designs to AMD and to certain foreign powers. Gaede videotaped data from his computer screen at Intel and mailed it to AMD, which immediately alerted Intel and authorities, resulting in Gaede's arrest. Gaede was convicted and sentenced to 33 months in prison in June 1996.
Corporate affairs
Leadership and corporate structure
Robert Noyce was Intel's CEO at its founding in 1968, followed by co-founder Gordon Moore in 1975. Andy Grove became the company's president in 1979 and added the CEO title in 1987 when Moore became chairman. In 1998, Grove succeeded Moore as chairman, and Craig Barrett, already company president, took over as CEO. On May 18, 2005, Barrett handed the reins of the company over to Paul Otellini, who had been the company president and COO and who was responsible for Intel's design win in the original IBM PC. The board of directors elected Otellini as president and CEO, and Barrett replaced Grove as chairman of the board. Grove stepped down as chairman but was retained as a special adviser. In May 2009, Barrett stepped down as chairman of the board and was succeeded by Jane Shaw. In May 2012, Intel vice chairman Andy Bryant, who had held the posts of CFO (1994) and Chief Administrative Officer (2007) at Intel, succeeded Shaw as executive chairman.
In November 2012, president and CEO Paul Otellini announced that he would step down in May 2013 at the age of 62, three years before the company's mandatory retirement age. During a six-month transition period, Intel's board of directors commenced a search process for the next CEO, in which it considered both internal managers and external candidates such as Sanjay Jha and Patrick Gelsinger. Financial results revealed that, under Otellini, Intel's revenue increased by 55.8% (US$34.2 to 53.3 billion), while its net income increased by 46.7% (US$7.5 billion to 11 billion).
On May 2, 2013, Executive Vice President and COO Brian Krzanich was elected as Intel's sixth CEO, a selection that became effective on May 16, 2013, at the company's annual meeting. Reportedly, the board concluded that an insider could step into the role and have an impact more quickly, without needing to learn Intel's processes, and Krzanich was selected on that basis. Intel's software head Renée James was selected as president of the company, a role second to the CEO position.
As of May 2013, Intel's board of directors consisted of Andy Bryant, John Donahoe, Frank Yeary, Ambassador Charlene Barshefsky, Susan Decker, Reed Hundt, Paul Otellini, James Plummer, David Pottruck, David Yoffie, and creative director will.i.am. The board was described by former Financial Times journalist Tom Foremski as "an exemplary example of corporate governance of the highest order", and received a rating of ten from GovernanceMetrics International, a form of recognition that has only been awarded to twenty-one other corporate boards worldwide.
On June 21, 2018, Intel announced the resignation of Brian Krzanich as CEO following the exposure of a relationship he had had with an employee. Bob Swan was named interim CEO as the board began a search for a permanent CEO.
On January 31, 2019, Swan transitioned from his role as CFO and interim CEO and was named by the Board as the seventh CEO to lead the company.
On January 13, 2021, Intel announced that Swan would be replaced as CEO by Pat Gelsinger, effective February 15. Gelsinger is a former Intel chief technology officer who had previously been head of VMware.
In March 2021, Intel removed the mandatory retirement age for its corporate officers.
In October 2023, Intel announced it would be spinning off its Programmable Solutions Group business unit into a separate company at the start of 2024, while maintaining majority ownership and intending to seek an IPO within three years to raise funds.
On December 1, 2024, Pat Gelsinger retired from the position of Intel CEO and stepped down from the company's board of directors. David Zinsner and Michelle Johnston Holthaus were named interim co-CEOs.
Ownership
The 10 largest shareholders of Intel as of December 2023 were:
Vanguard Group (9.12% of shares)
BlackRock (8.04%)
State Street (4.45%)
Capital International (2.29%)
Geode Capital Management (2.01%)
Primecap (1.78%)
Capital Research Global Investors (1.63%)
Morgan Stanley (1.18%)
Norges Bank (1.14%)
Northern Trust (1.05%)
Board of directors
Members of Intel's board of directors:
Frank D. Yeary (chairman), managing member of Darwin Capital
James Goetz, managing director of Sequoia Capital
Andrea Goldsmith, dean of engineering and applied science at Princeton University
Alyssa Henry, Square, Inc. executive
Omar Ishrak, chairman and former CEO of Medtronic
Risa Lavizzo-Mourey, former president and CEO of the Robert Wood Johnson Foundation
Tsu-Jae King Liu, professor at the UC Berkeley College of Engineering
Barbara G. Novick, co-founder of BlackRock
Gregory Smith, CFO of Boeing
Dion Weisler, former president and CEO of HP Inc.
Lip-Bu Tan, executive chairman of Cadence Design Systems
Employment
Prior to March 2021, Intel had a mandatory retirement policy for its CEOs when they reached age 65. Andy Grove retired at 62, while both Robert Noyce and Gordon Moore retired at 58. Grove stepped down as chairman and as a member of the board of directors in 2005 at age 68.
Intel's headquarters are located in Santa Clara, California, and the company has operations around the world. Its largest workforce concentration anywhere is in Washington County, Oregon (in the Portland metropolitan area's "Silicon Forest"), with 18,600 employees at several facilities. Outside the United States, the company has facilities in China, Costa Rica, Malaysia, Israel, Ireland, India, Russia, Argentina, and Vietnam, operating in 63 countries and regions internationally. In March 2022, Intel stopped supplying the Russian market because of international sanctions during the Russo-Ukrainian War. In the U.S., Intel employs significant numbers of people in California, Colorado, Massachusetts, Arizona, New Mexico, Oregon, Texas, Washington, and Utah. In Oregon, Intel is the state's largest private employer. The company is the largest industrial employer in New Mexico, while in Arizona the company had 12,000 employees as of January 2020.
Intel invests heavily in research in China, and about 100 researchers, or 10% of the total number of researchers at Intel, are located in Beijing.
In 2011, the Israeli government offered Intel $290 million to expand in the country. As a condition, Intel would employ 1,500 more workers in Kiryat Gat and between 600 and 1000 workers in the north.
In January 2014, it was reported that Intel would cut about 5,000 jobs from its workforce of 107,000. The announcement was made a day after it reported earnings that missed analyst targets.
In March 2014, it was reported that Intel would embark upon a $6 billion plan to expand its activities in Israel. The plan calls for continued investment in existing and new Intel plants until 2030. Intel employs 10,000 workers at four development centers and two production plants in Israel.
Due to declining PC sales, in 2016 Intel cut 12,000 jobs. In 2021, Intel reversed course under new CEO Pat Gelsinger and started hiring thousands of engineers.
Diversity
Intel has a Diversity Initiative, including employee diversity groups, as well as a supplier diversity program. Like many companies with employee diversity groups, they include groups based on race and nationality as well as sexual identity and religion. In 1994, Intel sanctioned one of the earliest corporate Gay, Lesbian, Bisexual, and Transgender employee groups, and supports a Muslim employees group, a Jewish employees group, and a Bible-based Christian group.
Intel has received a 100% rating on numerous Corporate Equality Indices released by the Human Rights Campaign including the first one released in 2002. In addition, the company is frequently named one of the 100 Best Companies for Working Mothers by Working Mother magazine.
In January 2015, Intel announced the investment of $300 million over the next five years to enhance gender and racial diversity in their own company as well as the technology industry as a whole.
In February 2016, Intel released its Global Diversity & Inclusion 2015 Annual Report. The male-female mix of US employees was reported as 75.2% men and 24.8% women. For US employees in technical roles, the mix was reported as 79.8% male and 20.1% female. NPR reports that Intel is facing a retention problem (particularly for African Americans), not just a pipeline problem.
Economic impact in Oregon in 2009
In 2011, ECONorthwest conducted an economic impact analysis of Intel's economic contribution to the state of Oregon. The report found that in 2009 "the total economic impacts attributed to Intel's operations, capital spending, contributions and taxes amounted to almost $14.6 billion in activity, including $4.3 billion in personal income and 59,990 jobs". Through multiplier effects, every 10 Intel jobs were found to support, on average, 31 jobs in other sectors of the economy.
Supply chain
Intel has been addressing supply base reduction as an issue since the mid-1980s, adopting an "n + 1" rule of thumb: for each component, the minimum number of suppliers required to maintain production levels is determined, and no more than one additional supplier is engaged. A sketch of the arithmetic follows this paragraph.
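The following minimal sketch illustrates the arithmetic behind such a rule; the function name, volumes, and capacities are hypothetical, not Intel's actual planning data:

```python
import math

def supplier_cap(required_volume: int, per_supplier_capacity: int) -> int:
    """Cap under an "n + 1" rule: n is the minimum number of suppliers
    whose combined capacity covers the required volume; at most one
    additional supplier is engaged as a buffer."""
    n = math.ceil(required_volume / per_supplier_capacity)
    return n + 1

# Hypothetical component: 250,000 units needed per quarter, and each
# supplier can deliver 100,000 units per quarter -> n = 3, cap = 4.
print(supplier_cap(250_000, 100_000))  # prints 4
```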
Intel Israel
Intel has been operating in the State of Israel since Dov Frohman founded the Israeli branch of the company in 1974 in a small office in Haifa. Intel Israel currently has development centers in Haifa, Jerusalem, and Petah Tikva, and a manufacturing plant in the Kiryat Gat industrial park that develops and manufactures microprocessors and communications products. Intel employed about 10,000 people in Israel in 2013. Maxine Fesberg served as CEO of Intel Israel and as a vice president of Intel Global from 2007 until announcing her resignation in December 2016; Yaniv Gerti took over as CEO in January 2017.
In June 2024, the company announced that it was stopping development on a Kiryat Gat-based factory in Israel. The site was expected to cost $25 billion, with $3.2 billion provided by the Israeli government in the form of a grant.
Key acquisitions and investments (2010–present)
In 2010, Intel purchased McAfee, a manufacturer of computer security technology, for $7.68 billion. As a condition for regulatory approval of the transaction, Intel agreed to provide rival security firms with all necessary information that would allow their products to use Intel's chips and personal computers. After the acquisition, Intel had about 90,000 employees, including about 12,000 software engineers. In September 2016, Intel sold a majority stake in its computer-security unit to TPG Capital, reversing the five-year-old McAfee acquisition.
In August 2010, Intel and Infineon Technologies announced that Intel would acquire Infineon's Wireless Solutions business. Intel planned to use Infineon's technology in laptops, smartphones, netbooks, tablets, and embedded computers in consumer products, eventually integrating its wireless modem into Intel's silicon chips.
In March 2011, Intel bought most of the assets of Cairo-based SySDSoft.
In July 2011, Intel announced that it had agreed to acquire Fulcrum Microsystems Inc., a company specializing in network switches. The company used to be included on the EE Times list of 60 Emerging Startups.
In October 2011, Intel reached a deal to acquire Telmap, an Israeli-based navigation software company. The purchase price was not disclosed, but Israeli media reported values around $300 million to $350 million.
In July 2012, Intel agreed to buy 10% of the shares of ASML Holding NV for $2.1 billion, plus another $1 billion for a further 5% of shares pending shareholder approval, to fund relevant research and development efforts as part of a €3.3 billion ($4.1 billion) deal to accelerate the development of 450-millimeter wafer technology and extreme ultraviolet lithography by as much as two years.
In July 2013, Intel confirmed the acquisition of Omek Interactive, an Israeli company that makes technology for gesture-based interfaces, without disclosing the monetary value of the deal. An official statement from Intel read: "The acquisition of Omek Interactive will help increase Intel's capabilities in the delivery of more immersive perceptual computing experiences." One report estimated the value of the acquisition between US$30 million and $50 million.
The acquisition of a Spanish natural language recognition startup, Indisys, was announced in September 2013. The terms of the deal were not disclosed, but an email from an Intel representative stated: "Intel has acquired Indisys, a privately held company based in Seville, Spain. The majority of Indisys employees joined Intel. We signed the agreement to acquire the company on May 31 and the deal has been completed." Indisys explains that its artificial intelligence (AI) technology "is a human image, which converses fluently and with common sense in multiple languages and also works in different platforms".
In December 2014, Intel bought PasswordBox.
In January 2015, Intel purchased a 30% stake in Vuzix, a smart glasses manufacturer. The deal was worth $24.8 million.
In February 2015, Intel announced its agreement to purchase German network chipmaker Lantiq, to aid in its expansion of its range of chips in devices with Internet connection capability.
In June 2015, Intel announced its agreement to purchase FPGA design company Altera for $16.7 billion, in its largest acquisition to date. The acquisition completed in December 2015.
In October 2015, Intel bought cognitive computing company Saffron Technology for an undisclosed price.
In August 2016, Intel purchased deep-learning startup Nervana Systems for over $400 million.
In December 2016, Intel acquired computer vision startup Movidius for an undisclosed price.
In March 2017, Intel announced that they had agreed to purchase Mobileye, an Israeli developer of "autonomous driving" systems for US$15.3 billion.
In June 2017, Intel Corporation announced an investment in its upcoming Research and Development (R&D) centre in Bangalore, India.
In January 2019, Intel announced an investment of over $11 billion in a new Israeli chip plant, according to the Israeli Finance Minister.
In November 2021, Intel recruited some of the employees of the Centaur Technology division from VIA Technologies in a deal worth $125 million, effectively acquiring the talent and know-how of VIA's x86 division. VIA retained the x86 licence and associated patents, and its Zhaoxin CPU joint venture continues.
In December 2021, Intel said it will invest $7.1 billion to build a new chip-packaging and testing factory in Malaysia. The new investment will expand the operations of its Malaysian subsidiary across Penang and Kulim, creating more than 4,000 new Intel jobs and more than 5,000 local construction jobs.
In December 2021, Intel announced its plan to take its Mobileye automotive unit public via an IPO of newly issued stock in 2022, while maintaining its majority ownership of the company.
In February 2022, Intel agreed to acquire Israeli chip manufacturer Tower Semiconductor for $5.4 billion. In August 2023, Intel terminated the acquisition as it failed to obtain approval from Chinese regulators within the 18-month transaction deadline.
In May 2022, Intel announced that it had acquired Finnish graphics technology firm Siru Innovations. The firm, founded by former AMD and Qualcomm mobile GPU engineers, focuses on developing software and silicon building blocks for GPUs made by other companies, and joined Intel's fledgling Accelerated Computing Systems and Graphics Group.
In May 2022, it was announced that Ericsson and Intel had teamed up to launch a tech hub in California focused on the research and development of cloud RAN technology. The hub aims to improve Ericsson Cloud RAN and Intel technology, including improving energy efficiency and network performance, reducing time to market, and monetizing new business opportunities such as enterprise applications.
Ultrabook fund (2011)
In 2011, Intel Capital announced a new fund to support startups working on technologies in line with the company's concept for next-generation notebooks, setting aside $300 million to be spent over three to four years in areas related to ultrabooks. Intel announced the ultrabook concept at Computex in 2011. The ultrabook is defined as a thin notebook (less than 0.8 inches [~2 cm] thick) that utilizes Intel processors and incorporates tablet features such as a touch screen and long battery life.
At the Intel Developers Forum in 2011, four Taiwanese ODMs showed prototype ultrabooks that used Intel's Ivy Bridge chips. Intel planned to improve the power consumption of its chips for ultrabooks; the new Ivy Bridge processors in 2013 would have a default thermal design power of only 10 W.
Intel's price goal for the Ultrabook was below $1,000; however, according to two presidents from Acer and Compaq, this goal would not be achieved if Intel did not lower the price of its chips.
Open source support
Intel has participated significantly in open source communities since 1999. For example, in 2006 Intel released MIT-licensed X.org drivers for the integrated graphics cards of its i965 family of chipsets. Intel released FreeBSD drivers for some networking cards, available under a BSD-compatible license, which were also ported to OpenBSD. Binary firmware files for non-wireless Ethernet devices were also released under a BSD licence allowing free redistribution. Intel ran the Moblin project until April 23, 2009, when it handed the project over to the Linux Foundation. Intel also ran the LessWatts.org campaign.
However, after the release of the wireless products called Intel Pro/Wireless 2100, 2200BG/2225BG/2915ABG and 3945ABG in 2005, Intel was criticized for not granting free redistribution rights for the firmware that must be included in the operating system for the wireless devices to operate. As a result of this, Intel became a target of campaigns to allow free operating systems to include binary firmware on terms acceptable to the open source community. Linspire-Linux creator Michael Robertson outlined the difficult position that Intel was in releasing to open source, as Intel did not want to upset their large customer Microsoft. Theo de Raadt of OpenBSD also claimed that Intel is being "an Open Source fraud" after an Intel employee presented a distorted view of the situation at an open source conference. In spite of the significant negative attention Intel received as a result of the wireless dealings, the binary firmware still has not gained a license compatible with free software principles.
Intel has also supported other open source projects such as Blender and Open 3D Engine.
Corporate identity
Logo
Throughout its history, Intel has had three logos.
The first Intel logo, introduced in April 1969, featured the company's name stylized in all lowercase, with the letter "e" dropped below the other letters. The second logo, introduced on January 3, 2006, was inspired by the "Intel Inside" campaign, featuring a swirl around the Intel brand name. The third logo, introduced on September 2, 2020, was inspired by the previous logos. It removes the swirl as well as the classic blue color in almost all parts of the logo, except for the dot in the "i".
Intel Inside
Intel has become one of the world's most recognizable computer brands following its long-running Intel Inside campaign. The idea for "Intel Inside" came out of a meeting between Intel and one of the major computer resellers, MicroAge.
In the late 1980s, Intel's market share was being seriously eroded by upstart competitors such as AMD, Zilog, and others who had started to sell their less expensive microprocessors to computer manufacturers. This was because, by using cheaper processors, manufacturers could make cheaper computers and gain more market share in an increasingly price-sensitive market. In 1989, Intel's Dennis Carter visited MicroAge's headquarters in Tempe, Arizona, to meet with MicroAge's VP of Marketing, Ron Mion. MicroAge had become one of the largest distributors of Compaq, IBM, HP, and others and thus was a primary, although indirect, driver of demand for microprocessors. Intel wanted MicroAge to petition its computer suppliers to favor Intel chips. However, Mion felt that the marketplace should decide which processors they wanted. Intel's counterargument was that it would be too difficult to educate PC buyers on why Intel microprocessors were worth paying more for.
Mion felt that the public did not really need to fully understand why Intel chips were better; they just needed to feel they were better. So Mion proposed a market test. Intel would pay for a MicroAge billboard somewhere saying, "If you're buying a personal computer, make sure it has Intel inside." In turn, MicroAge would put "Intel Inside" stickers on the Intel-based computers in their stores in that area. To make the test easier to monitor, Mion decided to run it in Boulder, Colorado, where MicroAge had a single store. Virtually overnight, the sales of personal computers in that store shifted dramatically to Intel-based PCs. Intel very quickly adopted "Intel Inside" as its primary branding and rolled it out worldwide. As is often the case with computer lore, other tidbits have been combined to explain how things evolved, and "Intel Inside" has not escaped that tendency; other "explanations" have been floating around.
Intel's branding campaign started with "The Computer Inside" tagline in 1990 in the U.S. and Europe. The Japan chapter of Intel proposed an "Intel in it" tagline and kicked off the Japanese campaign by hosting EKI-KON (meaning "Station Concert" in Japanese) at the Tokyo railway station dome on Christmas Day, December 25, 1990. Several months later, "The Computer Inside" incorporated the Japanese idea to become "Intel Inside", which was eventually elevated to a worldwide branding campaign in 1991 by Intel marketing manager Dennis Carter. A case study, "Inside Intel Inside", was put together by Harvard Business School. The five-note jingle was introduced in 1994 and by its tenth anniversary was being heard in 130 countries around the world. The initial branding agency for the "Intel Inside" campaign was DahlinSmithWhite Advertising of Salt Lake City. The Intel swirl logo was the work of DahlinSmithWhite art director Steve Grigg under the direction of Intel president and CEO Andy Grove.
The Intel Inside advertising campaign sought public brand loyalty and awareness of Intel processors in consumer computers. Intel paid some of the advertiser's costs for an ad that used the Intel Inside logo and xylo-marimba jingle.
In 2008, Intel planned to shift the emphasis of its Intel Inside campaign from traditional media such as television and print to newer media such as the Internet. Intel required that a minimum of 35% of the money it provided to the companies in its co-op program be used for online marketing. The Intel 2010 annual financial report indicated that $1.8 billion (6% of the gross margin and nearly 16% of the total net income) was allocated to all advertising with Intel Inside being part of that.
Intel jingle
The D–D–G–D–A xylophone/marimba jingle, known as the "Intel bong", used in Intel advertising was produced by Musikvergnuegen and written by Walter Werzowa, once a member of the Austrian 1980s sampling band Edelweiss. The Intel jingle was made in 1994 to coincide with the launch of the Pentium. It was modified in 1999 to coincide with the launch of the Pentium III, although it overlapped with the 1994 version which was phased out in 2004. Advertisements for products featuring Intel processors with prominent MMX branding featured a version of the jingle with an embellishment (shining sound) after the final note.
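As an aside for readers who want to hear the motif, the following sketch synthesizes the five notes as plain sine tones and writes them to a WAV file. The octave placement (D4, G4, A4), the note length and the percussive envelope are assumptions for illustration only, not a reproduction of Werzowa's arrangement.

```python
import math
import struct
import wave

# Note names and equal-temperament frequencies; octave choice is assumed.
NOTES = [("D4", 293.66), ("D4", 293.66), ("G4", 392.00),
         ("D4", 293.66), ("A4", 440.00)]
RATE = 44100  # samples per second

samples = []
for _name, freq in NOTES:
    for i in range(int(0.4 * RATE)):          # 0.4 s per note (assumed)
        t = i / RATE
        amp = 0.5 * math.exp(-3.0 * t)        # simple mallet-like decay
        samples.append(amp * math.sin(2.0 * math.pi * freq * t))

with wave.open("intel_bong_sketch.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                         # 16-bit PCM
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                           for s in samples))
```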
The jingle was remade a second time in 2004 to coincide with the new logo change. Again, it overlapped with the 1999 version and was not mainstreamed until the launch of the Core processors in 2006, with the melody unchanged.
Another remake of the jingle debuted with Intel's new visual identity. The company has made use of numerous variants since its rebranding in 2020 (while retaining the mainstream 2006 version).
Processor naming strategy
In 2006, Intel expanded its promotion of open specification platforms beyond Centrino, to include the Viiv media center PC and the business desktop Intel vPro.
In mid-January 2006, Intel announced that it was dropping the long-running Pentium name from its processors. The Pentium name was first used for the P5-core Intel processors, in order to comply with court rulings that prevent the trademarking of a string of numbers; competitors could therefore not simply give their processors the same name, as had happened with the prior 386 and 486 processors (both of which had copies manufactured by IBM and AMD). Intel phased out the Pentium names from mobile processors first, when the new Yonah chips, branded Core Solo and Core Duo, were released. The desktop processors changed when the Core 2 line of processors was released. By 2009, Intel was using a good–better–best strategy, with Celeron being good, Pentium better, and the Intel Core family representing the best the company had to offer.
According to spokesman Bill Calder, Intel has maintained only the Celeron brand, the Atom brand for netbooks and the vPro lineup for businesses. Since late 2009, Intel's mainstream processors have been called Celeron, Pentium, Core i3, Core i5, Core i7, and Core i9, in order of performance from lowest to highest. First-generation Core products carry a three-digit model number, such as i5-750; second-generation products carry a four-digit model number, such as i5-2500; and from the 10th generation onwards, desktop processors carry a five-digit model number, such as i9-10900K. In all cases, a 'K' suffix indicates an unlocked processor, enabling additional overclocking abilities (for instance, 2500K). vPro products carry the Intel Core i7 vPro processor or the Intel Core i5 vPro processor name. In October 2011, Intel started to sell its Core i7-2700K "Sandy Bridge" chip to customers worldwide.
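To illustrate the convention just described, here is a small sketch that parses Core-era model strings. The helper parse_core_name is hypothetical, written for this article; real Intel model strings also include other suffixes (F, T, X, etc.) that are not handled here.

```python
import re

# Matches strings like "i5-750", "i5-2500" or "i9-10900K": a tier digit,
# a 3-5 digit model number, and an optional "K" marking an unlocked part.
PATTERN = re.compile(r"^i([3579])-(\d{3,5})(K?)$")

def parse_core_name(name: str) -> dict:
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"not a Core-style model string: {name!r}")
    tier, digits, suffix = m.groups()
    # 3 digits -> 1st generation; otherwise the digits before the last
    # three encode the generation (e.g. "2500" -> 2, "10900" -> 10).
    generation = 1 if len(digits) == 3 else int(digits[:-3])
    return {
        "tier": f"Core i{tier}",
        "generation": generation,
        "unlocked": suffix == "K",
    }

print(parse_core_name("i5-750"))     # {'tier': 'Core i5', 'generation': 1, 'unlocked': False}
print(parse_core_name("i9-10900K"))  # {'tier': 'Core i9', 'generation': 10, 'unlocked': True}
```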
Since 2010, "Centrino" has been applied only to Intel's WiMAX and Wi-Fi technologies.
In 2022, Intel announced that it was dropping the Pentium and Celeron naming schemes for its desktop and laptop entry-level processors, with the "Intel Processor" branding replacing the old Pentium and Celeron names starting in 2023.
In 2023, Intel announced that it would drop the 'i' from future processor names; for example, the Core i9 became the Core 9. Higher-end processors gained an "Ultra" designation, such as Core Ultra 9.
Typography
Neo Sans Intel is a customized version of Neo Sans, based on Neo Sans and Neo Tech, designed by Sebastian Lester in 2004. It was introduced alongside Intel's rebranding in 2006. Previously, Intel had used Helvetica as its standard typeface in corporate marketing.
Intel Clear is a global font announced in 2014, designed to be used across all communications. The font family was designed by Red Peak Branding and Dalton Maag. Initially available in Latin, Greek and Cyrillic scripts, it replaced Neo Sans Intel as the company's corporate typeface. Intel Clear Hebrew and Intel Clear Arabic were later added by Dalton Maag Ltd. Neo Sans Intel remained in the logo and continued to be used to mark processor type and socket on the packaging of Intel's processors.
In 2020, as part of the new visual identity, a new typeface, Intel One, was designed. It replaced Intel Clear as the font used by the company in most of its branding, though it is used alongside the Intel Clear typeface. In the logo it replaced Neo Sans Intel, which is still used to mark processor type and socket on the packaging of Intel's processors.
Intel Brand Book
Intel Brand Book is a book produced by Red Peak Branding as part of Intel's new brand identity campaign, celebrating the company's achievements while setting the new standard for what Intel looks, feels and sounds like.
Charity
In November 2014, Intel designed a Paddington Bear statue—themed "Little Bear Blue"—one of fifty statues created by various celebrities and companies which were located around London. Created prior to the release of the film Paddington, the Intel-designed statue was located outside Framestore in Chancery Lane, London, a British visual-effects company which uses Intel technology for films including Paddington. The statues were then auctioned to raise funds for the National Society for the Prevention of Cruelty to Children (NSPCC).
Sponsorships
Intel sponsors the Intel Extreme Masters, a series of international esports tournaments. It was also a sponsor for the Formula 1 teams BMW Sauber and Scuderia Ferrari together with AMD, AT&T, Pernod Ricard, Diageo and Vodafone. In 2013, Intel became a sponsor of FC Barcelona. In 2017, Intel became a sponsor of the Olympic Games, lasting from the 2018 Winter Olympics to the 2024 Summer Olympics. In 2024, Intel and Riot Games had an annual sponsorship valued at US$5 million, and one with JD Gaming for US$3.3 million. The company also had a sponsorship with Global Esports.
Litigations and regulatory disputes
Patent infringement litigation (2006–2007)
In October 2006, Transmeta filed a lawsuit against Intel for patent infringement covering computer architecture and power-efficiency technologies. The lawsuit was settled in October 2007, with Intel agreeing to pay US$150 million initially and US$20 million per year for the next five years. Both companies agreed to drop their lawsuits against each other, and Intel was granted a perpetual non-exclusive license to use current and future patented Transmeta technologies in its chips for 10 years.
Antitrust allegations and litigation (2005–2023)
In September 2005, Intel filed a response to an AMD lawsuit, disputing AMD's claims, and claiming that Intel's business practices are fair and lawful. In a rebuttal, Intel deconstructed AMD's offensive strategy and argued that AMD struggled largely as a result of its own bad business decisions, including underinvestment in essential manufacturing capacity and excessive reliance on contracting out chip foundries. Legal analysts predicted the lawsuit would drag on for a number of years, since Intel's initial response indicated its unwillingness to settle with AMD. In 2008, a court date was finally set.
On November 4, 2009, New York's attorney general filed an antitrust lawsuit against Intel Corp, claiming the company used "illegal threats and collusion" to dominate the market for computer microprocessors.
On November 12, 2009, AMD agreed to drop the antitrust lawsuit against Intel in exchange for $1.25 billion. A joint press release published by the two chip makers stated "While the relationship between the two companies has been difficult in the past, this agreement ends the legal disputes and enables the companies to focus all of our efforts on product innovation and development."
An antitrust lawsuit and a class-action suit relating to cold-calling employees of other companies have been settled.
Allegations by Japan Fair Trade Commission (2005)
In 2005, the Japan Fair Trade Commission found that Intel had violated the Japanese Antimonopoly Act. The commission ordered Intel to eliminate discounts that had discriminated against AMD. To avoid a trial, Intel agreed to comply with the order.
Allegations by regulators in South Korea (2007)
In September 2007, South Korean regulators accused Intel of breaking antitrust law. The investigation began in February 2006, when officials raided Intel's South Korean offices. The company risked a penalty of up to 3% of its annual sales if found guilty. In June 2008, the Fair Trade Commission ordered Intel to pay a fine of US$25.5 million for taking advantage of its dominant position to offer incentives to major Korean PC manufacturers on the condition of not buying products from AMD.
Allegations by regulators in the United States (2008–2010)
New York started an investigation of Intel in January 2008 on whether the company violated antitrust laws in pricing and sales of its microprocessors. In June 2008, the Federal Trade Commission also began an antitrust investigation of the case. In December 2009, the FTC announced it would initiate an administrative proceeding against Intel in September 2010.
In November 2009, following a two-year investigation, New York Attorney General Andrew Cuomo sued Intel, accusing them of bribery and coercion, claiming that Intel bribed computer makers to buy more of their chips than those of their rivals and threatened to withdraw these payments if the computer makers were perceived as working too closely with its competitors. Intel has denied these claims.
On July 22, 2010, Dell agreed to a settlement with the U.S. Securities and Exchange Commission (SEC) to pay $100 million in penalties resulting from charges that Dell did not accurately disclose accounting information to investors. In particular, the SEC charged that from 2002 to 2006, Dell had an agreement with Intel to receive rebates in exchange for not using chips manufactured by AMD. These substantial rebates were not disclosed to investors, but were used to help meet investor expectations regarding the company's financial performance; "These exclusivity payments grew from 10% of Dell's operating income in FY 2003 to 38% in FY 2006, and peaked at 76% in the first quarter of FY 2007." Dell eventually did adopt AMD as a secondary supplier in 2006, and Intel subsequently stopped their rebates, causing Dell's financial performance to fall.
Allegations by the European Union (2007–2023)
In July 2007, the European Commission accused Intel of anti-competitive practices, mostly against AMD. The allegations, going back to 2003, include giving preferential prices to computer makers buying most or all of their chips from Intel, paying computer makers to delay or cancel the launch of products using AMD chips, and providing chips at below standard cost to governments and educational institutions. Intel responded that the allegations were unfounded and instead qualified its market behavior as consumer-friendly. General counsel Bruce Sewell responded that the commission had misunderstood some factual assumptions regarding pricing and manufacturing costs.
In February 2008, Intel announced that its office in Munich had been raided by European Union regulators. Intel reported that it was cooperating with investigators. Intel faced a fine of up to 10% of its annual revenue if found guilty of stifling competition. AMD subsequently launched a website promoting these allegations. In June 2008, the EU filed new charges against Intel. In May 2009, the EU found that Intel had engaged in anti-competitive practices and subsequently fined Intel €1.06 billion (US$1.44 billion), a record amount. Intel was found to have paid companies, including Acer, Dell, HP, Lenovo and NEC, to exclusively use Intel chips in their products, and had therefore harmed other, less successful companies, including AMD. The European Commission said that Intel had deliberately acted to keep competitors out of the computer chip market, and in doing so had committed a "serious and sustained violation of the EU's antitrust rules". In addition to the fine, Intel was ordered by the commission to immediately cease all illegal practices. Intel said that it would appeal the commission's verdict. In June 2014, the General Court, which sits below the European Court of Justice, rejected the appeal.
In 2022, the €1.06 billion fine was annulled, but it was subsequently re-imposed in September 2023 as a €376.36 million fine.
Corporate responsibility record
Intel has been accused by some residents of Rio Rancho, New Mexico of allowing volatile organic compounds (VOCs) to be released in excess of their pollution permit. One resident claimed that a release of 1.4 tons of carbon tetrachloride was measured from one acid scrubber during the fourth quarter of 2003 but an emission factor allowed Intel to report no carbon tetrachloride emissions for all of 2003.
Another resident alleged that Intel was responsible for the release of other VOCs from their Rio Rancho site and that a necropsy of lung tissue from two deceased dogs in the area indicated trace amounts of toluene, hexane, ethylbenzene, and xylene isomers, all of which are solvents used in industrial settings but also commonly found in gasoline, retail paint thinners and retail solvents. During a sub-committee meeting of the New Mexico Environment Improvement Board, a resident claimed that Intel's own reports documented that more than of VOCs were released in June and July 2006.
Intel's environmental performance is published annually in their corporate responsibility report.
Conflict-free production
In 2009, Intel announced that it planned to undertake an effort to remove conflict resources—materials sourced from mines whose profits are used to fund armed militant groups, particularly within the Democratic Republic of the Congo—from its supply chain. Intel sought conflict-free sources of the precious metals common to electronics from within the country, using a system of first- and third-party audits, as well as input from the Enough Project and other organizations. During a keynote address at Consumer Electronics Show 2014, Intel CEO at the time, Brian Krzanich, announced that the company's microprocessors would henceforth be conflict free. In 2016, Intel stated that it had expected its entire supply chain to be conflict-free by the end of the year.
In its 2012 rankings on the progress of consumer electronics companies relating to conflict minerals, the Enough Project rated Intel the best of 24 companies, calling it a "Pioneer of progress". In 2014, chief executive Brian Krzanich urged the rest of the industry to follow Intel's lead by also shunning conflict minerals.
Age discrimination complaints
Intel has faced complaints of age discrimination in firing and layoffs. Intel was sued in 1993 by nine former employees, over allegations that they were laid off because they were over the age of 40.
A group called FACE Intel (Former and Current Employees of Intel) claims that Intel weeds out older employees. FACE Intel claims that more than 90% of people who have been laid off or fired from Intel are over the age of 40. Upside magazine requested data from Intel breaking out its hiring and firing by age, but the company declined to provide any. Intel has denied that age plays any role in Intel's employment practices. FACE Intel was founded by Ken Hamidi, who was fired from Intel in 1995 at the age of 47. Hamidi was blocked by a 1999 court decision from using Intel's email system to distribute criticism of the company to employees; the decision was overturned in 2003 in Intel Corp. v. Hamidi.
Tax dispute in India
In August 2016, Indian officials of the Bruhat Bengaluru Mahanagara Palike (BBMP) parked garbage trucks on Intel's campus and threatened to dump them for evading payment of property taxes between 2007 and 2008, to the tune of . Intel had reportedly been paying taxes as a non-air-conditioned office, when the campus in fact had central air conditioning. Other factors, such as land acquisition and construction improvements, added to the tax burden. Previously, Intel had appealed the demand in the Karnataka high court in July, during which the court ordered Intel to pay BBMP half the owed amount of plus arrears by August 28 of that year.
Product issues
Recalls
Pentium FDIV bug
Security vulnerabilities
Transient execution CPU vulnerability
Instability issues
Raptor Lake
See also
5 nm process
ASCI Red
Bumpless Build-up Layer
Comparison of ATI graphics processing units
Comparison of Intel processors
Comparison of Nvidia graphics processing units
Cyrix
Engineering sample (CPU)
Graphics processing unit (GPU)
Intel Developer Zone (Intel DZ)
Intel Driver Update Utility
Intel GMA (Graphics Media Accelerator)
Intel HD and Iris Graphics
Intel Level Up
Intel Loihi
Intel Museum
Intel Science Talent Search
List of Intel chipsets
List of Intel CPU microarchitectures
List of Intel manufacturing sites
List of mergers and acquisitions by Intel
List of semiconductor fabrication plants
Intel Management Engine
Intel-related biographical articles on Wikipedia
Bill Gaede
Bob Colwell
Justin Rattner
Sean Maloney
Notes
References
External links
1968 establishments in California
1970s initial public offerings
American companies established in 1968
Companies based in Santa Clara, California
Companies in the Dow Jones Global Titans 50
Companies listed on the Nasdaq
Computer companies established in 1968
Computer companies of the United States
Computer hardware companies
Computer memory companies
Computer storage companies
Computer systems companies
Former components of the Dow Jones Industrial Average
Foundry semiconductor companies
Graphics hardware companies
Linux companies
Manufacturing companies based in the San Francisco Bay Area
Manufacturing companies established in 1968
Mobile phone manufacturers
Motherboard companies
Multinational companies headquartered in the United States
Semiconductor companies of the United States
Software companies based in the San Francisco Bay Area
Software companies established in 1968
Software companies of the United States
Superfund sites in California
Technology companies of the United States
Technology companies based in the San Francisco Bay Area
Technology companies established in 1968 | Intel | Technology | 18,516 |
40,966,498 | https://en.wikipedia.org/wiki/Kevin%20C.%20Dittman | Kevin C. Dittman (born ca. 1960) is an American computer scientist, IT consultant and Professor of Information Technology at Purdue University, especially known for his textbook Systems analysis and design methods, written with Lonnie D. Bentley and Jeffrey L. Whitten, which is in its 7th edition.
Dittman received his BS in Computer Science from Purdue University in 1981 and his MA in Management Information Systems from the Florida Institute of Technology. He started his career in industry as a programmer and analyst at an engineering company in 1981. From 1982 to 1985 he was a systems analyst at a machinery company. In 1985 he started at Lockheed Martin, where from 1985 to 1995 he was a systems engineer, and from 1995 to 2011 a consultant in the fields of Information Technology, Systems Engineering, Quality Management, Process Management, and Project Management. In 1995 Dittman was appointed Professor of Information Technology at Purdue University.
Selected publications
Books, a selection:
Bentley, Lonnie D., Kevin C. Dittman, and Jeffrey L. Whitten. Systems analysis and design methods. (1986, 1997, 2004).
Whitten, Jeffery L., Lonnie D. Bentley, and Kevin C. Dittman. Fundamentals of systems analysis and design methods. (2004).
Brewer, Jeffrey L., and Kevin C. Dittman. Methods of IT project management. Purdue University Press, 2013.
References
External links
Kevin C. Dittman at Purdue University Press
Year of birth missing (living people)
Living people
American computer scientists
Information systems researchers
Systems engineers
Purdue University alumni
Purdue University faculty
Place of birth missing (living people)
Florida Institute of Technology alumni | Kevin C. Dittman | Technology | 341 |
6,548,283 | https://en.wikipedia.org/wiki/Antigen%20presentation | Antigen presentation is a vital immune process that is essential for triggering T cell immune responses. Because T cells recognize only fragmented antigens displayed on cell surfaces, antigen processing must occur before the antigen fragment can be recognized by a T-cell receptor. Specifically, the fragment, bound to the major histocompatibility complex (MHC), is transported to the surface of the antigen-presenting cell, a process known as presentation. If there has been an infection with viruses or bacteria, the antigen-presenting cell will present an endogenous or exogenous peptide fragment derived from the antigen by MHC molecules. There are two types of MHC molecules, which differ in the origin of the antigens they present: MHC class I molecules (MHC-I) bind peptides from the cell cytosol, while peptides generated in endocytic vesicles after internalisation are bound to MHC class II (MHC-II). Cellular membranes separate these two cellular environments: intracellular and extracellular. Each T cell can recognize only tens to hundreds of copies of a unique sequence of a single peptide among thousands of other peptides presented on the same cell, because an MHC molecule in one cell can bind quite a large range of peptides. Predicting which (fragments of) antigens will be presented to the immune system by a certain MHC/HLA type is difficult, but the technology involved is improving.
Presentation of intracellular antigens: Class I
Cytotoxic T cells (also known as Tc, killer T cell, or cytotoxic T-lymphocyte (CTL)) express CD8 co-receptors and are a population of T cells that are specialized for inducing programmed cell death of other cells. Cytotoxic T cells regularly patrol all body cells to maintain the organismal homeostasis. Whenever they encounter signs of disease, caused for example by the presence of viruses or intracellular bacteria or a transformed tumor cell, they initiate processes to destroy the potentially harmful cell. All nucleated cells in the body (along with platelets) display class I major histocompatibility complex (MHC-I molecules). Antigens generated endogenously within these cells are bound to MHC-I molecules and presented on the cell surface. This antigen presentation pathway enables the immune system to detect transformed or infected cells displaying peptides from modified-self (mutated) or foreign proteins.
In the presentation process, these proteins are mainly degraded into small peptides by cytosolic proteases in the proteasome, but there are also other cytoplasmic proteolytic pathways. The peptides are then delivered to the endoplasmic reticulum (ER) via the action of heat shock proteins and the transporter associated with antigen processing (TAP), which translocates cytosolic peptides into the ER lumen in an ATP-dependent transport mechanism. Several ER chaperones are involved in MHC-I assembly, such as calnexin, calreticulin, ERp57, protein disulfide isomerase (PDI), and tapasin. Specifically, the complex of TAP, tapasin, MHC class I, ERp57, and calreticulin is called the peptide-loading complex (PLC). Peptides are loaded into the MHC-I peptide-binding groove between two alpha helices at the bottom of the α1 and α2 domains of the MHC class I molecule. After release from tapasin, peptide-MHC-I complexes (pMHC-I) exit the ER and are transported to the cell surface by exocytic vesicles.
Naïve anti-viral T cells (CD8+) cannot directly eliminate transformed or infected cells. They have to be activated by the pMHC-I complexes of antigen-presenting cells (APCs). Here, antigen can be presented directly (as described above) or indirectly (cross-presentation) from virus-infected and non-infected cells. After the interaction between pMHC-I and TCR, in presence of co-stimulatory signals and/or cytokines, T cells are activated, migrate to the peripheral tissues and kill the target cells (infected or damaged cells) by inducing cytotoxicity.
Cross-presentation is a special case in which MHC-I molecules are able to present extracellular antigens, usually displayed only by MHC-II molecules. This ability appears in several APCs, mainly plasmacytoid dendritic cells in tissues that stimulate CD8+ T cells directly. This process is essential when APCs are not directly infected, triggering local antiviral and anti-tumor immune responses immediately without trafficking the APCs in the local lymph nodes.
Presentation of extracellular antigens: Class II
Antigens from the extracellular space, and sometimes also endogenous ones, are enclosed in endocytic vesicles and presented on the cell surface by MHC-II molecules to helper T cells expressing the CD4 molecule. Only APCs such as dendritic cells, B cells or macrophages express MHC-II molecules on their surface in substantial quantity, so expression of MHC-II molecules is more cell-specific than that of MHC-I.
APCs usually internalise exogenous antigens by endocytosis, but also by pinocytosis, macroautophagy, endosomal microautophagy or chaperone-mediated autophagy. In the first case, after internalisation, the antigens are enclosed in vesicles called endosomes. There are three compartments involved in this antigen presentation pathway: early endosomes, late endosomes or endolysosomes, and lysosomes, where antigens are hydrolysed by lysosome-associated enzymes (acid-dependent hydrolases, glycosidases, proteases, lipases). This process is favored by the gradual reduction of pH. The main proteases in endosomes are cathepsins, and the result is the degradation of the antigens into oligopeptides.
MHC-II molecules are transported from the ER to the MHC class II loading compartment together with the invariant chain protein (Ii, CD74). Non-classical MHC-II molecules (HLA-DO and HLA-DM) catalyse the exchange of part of CD74 (the CLIP peptide) for the peptide antigen. Peptide-MHC-II complexes (pMHC-II) are then transported to the plasma membrane, and the processed antigen is presented to helper T cells in the lymph nodes.
APCs undergo a process of maturation while migrating, via chemotactic signals, to lymphoid tissues, during which they lose their phagocytic capacity and develop an increased ability to communicate with T cells through antigen presentation. As with CD8+ cytotoxic T cells, APCs need pMHC-II and additional costimulatory signals to fully activate naive helper T cells.
An alternative pathway of endogenous antigen processing and presentation on MHC-II molecules exists in medullary thymic epithelial cells (mTECs) via autophagy. It is important for the central tolerance of T cells, in particular the negative selection of autoreactive clones. Expression of genes from across the whole genome is achieved via the action of AIRE, with self-digestion of the expressed molecules and presentation on both MHC-I and MHC-II molecules.
Presentation of native intact antigens to B cells
B-cell receptors on the surface of B cells bind to intact native and undigested antigens of a structural nature, rather than to a linear sequence of a peptide which has been digested into small fragments and presented by MHC molecules. Large complexes of intact antigen are presented in lymph nodes to B cells by follicular dendritic cells in the form of immune complexes. Some APCs expressing comparatively lower levels of lysosomal enzymes are thus less likely to digest the antigen they have captured before presenting it to B cells.
See also
Immune system
Immunology
Immunological synapse
Trogocytosis
References
External links
ImmPort - Gene summaries, ontologies, pathways, protein/protein interactions and more for genes involved in antigen processing and presentation
Immune system
HIV/AIDS | Antigen presentation | Biology | 1,742 |
39,067,533 | https://en.wikipedia.org/wiki/Autowave%20reverberator | In the theory of autowave phenomena, an autowave reverberator is an autowave vortex in a two-dimensional active medium.
A reverberator appears as a result of a rupture in the front of a plane autowave. Such a rupture may occur, for example, when the front collides with a non-excitable obstacle. In this case, depending on the conditions, either of two phenomena may arise: a spiral wave, which rotates around the obstacle, or an autowave reverberator, which rotates with its tip free.
Introduction
The reverberator was one of the first autowave solutions that researchers found and, because of this historical context, it remains to this day the most studied autowave object.
Up until the late 20th century, the term "autowave reverberator" was used very actively and widely in the scientific literature written by Soviet authors, because these investigations were actively developed in the USSR (for more details, see "A brief history of autowave researches" in Autowave). And since Soviet scientific literature was very often republished in English translation, the term "autowave reverberator" became known in English-speaking countries as well.
The reverberator is often confused with another, similar state of the active medium: the spiral wave. Indeed, at a superficial glance these two autowave solutions look almost identical. The situation is further complicated by the fact that, under certain circumstances, a spiral wave may become a reverberator, and a reverberator may, on the contrary, become a spiral wave.
However, it should be remembered that many features of rotating autowaves were studied quite thoroughly as long ago as the 1970s, and already at that time some significant differences between the properties of a spiral wave and a reverberator were revealed. Unfortunately, this detailed knowledge remains scattered across publications from the 1970s to the 1990s that are now little known even to new generations of researchers, not to mention people far from this research topic. Perhaps the only book that gathered together, in the form of abstracts, the basic information about autowaves known at the time of its publication remains the proceedings "Autowave processes in systems with diffusion", published in 1981 and now a rare bibliographic edition; its content was partially reiterated in another book in 2009.
The differences between a reverberator and a spiral wave are considered below in detail. But to begin with, it is useful to illustrate these differences with a simple analogy. Everyone knows the seasons of the year: under some conditions winter turns into summer, and summer in turn into winter, and these transformations occur quite regularly. However, although winter and summer are similar, for example, in the regular alternation of day and night, no one would say that winter and summer are the same thing. Much the same holds for the reverberator and the spiral wave, and therefore they should not be confused.
It is also useful to keep in mind that, in addition to rotating waves, quite a number of other autowave solutions are now known, and every year their number grows with increasing speed. Because of this, it was found during the 21st century that many conclusions about the properties of autowaves, which were widely known among readers of the early papers on the subject and widely discussed in the press of that time, unfortunately proved to be hasty generalizations.
Basic information
"Historical" definition
On the question of terminology
Types of reverberator behaviour
The "classical" regimes
Various autowave regimes, such as plane waves or spiral waves, can exist in an active medium, but only under certain conditions on the medium properties. Using the FitzHugh-Nagumo model for a generic active medium, Winfree constructed a diagram depicting the regions of parameter space in which the principal phenomena may be observed. Such diagrams are a common way of presenting the different dynamical regimes observed in both experimental and theoretical settings. They are sometimes called flower gardens, since the paths traced by autowave tips may often resemble the petals of a flower. A flower garden for the FitzHugh-Nagumo model is shown to the right. It contains: the line ∂P, which bounds the range of model parameters under which impulses can propagate through a one-dimensional medium and plane autowaves can spread in a two-dimensional medium; the "rotor boundary" ∂R, which bounds the range of parameters under which reverberators can rotate around fixed cores (i.e. perform uniform circular rotation); and the meander boundary ∂M and the hyper-meander boundary ∂C, which bound the areas where two-period and more complex (possibly chaotic) regimes can exist. Rotating autowaves with large cores exist only in the areas with parameters close to the boundary ∂R.
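For reference, one common reaction–diffusion form of the FitzHugh-Nagumo equations is sketched below; parameter names and details vary between papers, so this is a typical formulation rather than necessarily the exact one used by Winfree:

```latex
\begin{aligned}
\frac{\partial u}{\partial t} &= u - \frac{u^{3}}{3} - v + D\,\nabla^{2}u ,\\
\frac{\partial v}{\partial t} &= \varepsilon \left( u + \beta - \gamma v \right) ,
\end{aligned}
```

where u is the fast (excitation) variable, v the slow (recovery) variable, D the diffusion coefficient, and ε ≪ 1 sets the separation of time scales; the boundaries ∂P, ∂R, ∂M and ∂C are traced out as parameters such as ε, β and γ are varied.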
Similar autowave regimes were also obtained for the other models — Beeler-Reuter model, Barkley model, Aliev-Panfilov model, Fenton-Karma model etc.
It was also shown that these simple autowave regimes should be common to all active media, because a system of differential equations of any complexity describing a particular active medium can always be reduced to a system of two equations.
In the simplest case without drift (i.e., the regime of uniform circular rotation), the tip of a reverberator rotates around a fixed point along a circumference of a certain radius (circular motion of the reverberator tip). The autowave cannot penetrate into the circle bounded by this circumference. As the centre of reverberator rotation is approached, the amplitude of the excitation pulse decreases, and, at relatively low excitability of the medium, there is a region of finite size at the centre of the reverberator where the amplitude of the excitation pulse is zero (recall that we are speaking of a homogeneous medium, whose properties are the same at every point). This low-amplitude area at the centre of the reverberator is usually called the core of the reverberator. The existence of such a region at the centre of the reverberator seems, at first glance, quite incomprehensible, as it borders excited sites all the time. Detailed investigation of this phenomenon showed that the resting area at the centre of the reverberator retains its normal excitability, and that the existence of a quiescent region at the centre is related to the phenomenon of critical curvature. In the case of an "infinite" homogeneous medium, the core radius and the speed of rotor rotation are determined only by the properties of the medium itself, rather than by the initial conditions. Away from the centre of rotation, the shape of the front of the rotating spiral wave is close to the involute of the circle bounding its core. The finite size of the reverberator core arises because the excitation wave circulating along a closed path must fit entirely within that path without running into its own refractory tail.
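For concreteness, the involute of the core circle of radius r_0 mentioned above can be written parametrically as below; this is the standard formula for the involute of a circle, and, as stated above, it describes the wave front only approximately:

```latex
x(\varphi) = r_{0}\left(\cos\varphi + \varphi \sin\varphi\right), \qquad
y(\varphi) = r_{0}\left(\sin\varphi - \varphi \cos\varphi\right), \qquad \varphi \ge 0 ,
```

so that the spacing between successive turns of this curve is exactly 2πr_0, which sets the asymptotic wavelength of the rotating wave under this approximation.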
The critical size of the reverberator is understood as the minimum size of the homogeneous medium in which the reverberator can exist indefinitely. To estimate the critical size of the reverberator, the size of its core is sometimes used, on the assumption that the region of the medium adjacent to the core should be sufficient for sustained re-entry. However, quantitative study of the dependence of reverberator behaviour on the conductivity of the fast transmembrane current (which characterizes the excitability of the medium) has shown that the critical size of the reverberator and the size of its core are different characteristics, and the critical size of the reverberator is in many cases much greater than the size of its core (i.e. the reverberator dies even when its core fits easily within the boundaries of the medium and its drift is absent).
Regimes of induced drift
In meander and hyper-meander, the displacement of the centre of autowave rotation (i.e. its drift) is driven by forces generated by the rotating autowave itself.
However, the scientific study of rotating autowaves has also identified a number of external conditions that force reverberator drift. One example is heterogeneity of the active medium in some parameter. Perhaps the works of Biktasheva currently represent the different types of reverberator drift most completely (although other authors have also studied the drift of the autowave reverberator).
In particular, Biktashev proposes distinguishing the following types of reverberator drift in an active medium:
Resonant drift.
Inhomogeneity induced drift.
Anisotropy induced drift.
Boundary induced drift.
Interaction of spirals.
High frequency induced drift.
Note that even on such a simple question as what should and should not be called autowave drift, there is still no agreement among researchers. Some researchers (mostly mathematicians) tend to count as reverberator drift only those displacements that occur under the influence of external events (a view determined by the peculiarities of the mathematical approach to studying autowaves). Other researchers find no significant difference between the spontaneous displacement of a reverberator caused by events it generates itself and its displacement as a result of external influences, and therefore consider meander and hyper-meander also to be variants of drift, namely spontaneous drift of the reverberator. There has been no debate on this terminological question in the scientific literature, but these differences in how the same phenomena are described can easily be found in different authors.
Autowave lacet
In a numerical study of the reverberator using the Aliev-Panfilov model, the phenomenon of bifurcation memory was revealed, in which the reverberator spontaneously changes its behaviour from meander to uniform circular rotation; this new regime was named the autowave lacet.
Briefly, during the autowave lacet the reverberator drift spontaneously decelerates under the forces generated by the reverberator itself, with the drift velocity gradually decreasing to zero. The meander regime thus degenerates into simple uniform circular rotation. As already mentioned, this unusual process is related to the phenomenon of bifurcation memory.
When the autowave lacet was discovered, the first question that arose was whether the meander truly exists at all, or whether a halt of the reverberator drift would be observed in every case called meander, given sufficiently long observation. Comparative quantitative analysis of the reverberator drift velocity in the meander and lacet regimes revealed a clear difference between these two types of evolution: while the drift velocity quickly approaches a stationary value during meander, during the lacet the drift velocity of the vortex decreases steadily, with clearly identifiable phases of slow and of rapid deceleration.
The discovery of the autowave lacet may be important for cardiology. Reverberators show remarkable stability of their properties; they behave "at their own discretion", and their behaviour can be significantly affected only by events that occur near the tip of the reverberator. As a consequence, when a reverberator meets a non-excitable heterogeneity (e.g. a small myocardial scar), the tip of the rotating wave "sticks" to the heterogeneity, and the reverberator begins to rotate around the stationary non-excitable obstacle. A transition from polymorphic to monomorphic tachycardia is observed on the ECG in such cases. This phenomenon is called the "anchoring" of the spiral wave.
However, simulations have shown that a spontaneous transition from polymorphic to monomorphic tachycardia can also be observed on the ECG during the autowave lacet; in other words, the lacet may be another mechanism for the transformation of polymorphic ventricular tachycardia into a monomorphic one. Thus, the autowave theory predicts the existence of a special type of ventricular arrhythmia, conditionally called "lacetic", which cardiologists do not yet distinguish in diagnostics.
The reasons for distinguishing between variants of rotating autowaves
Recall that from the 1970s to the present it has been customary to distinguish three variants of rotating autowaves:
wave in the ring,
spiral wave,
autowave reverberator.
The core of a reverberator is usually smaller than the minimal critical size of the circular path of circulation, which is associated with the phenomenon of critical curvature. In addition, the refractory period appears to be longer for waves with non-zero curvature (reverberator and spiral wave), and with decreasing excitability of the medium it begins to increase earlier than the refractory period of plane waves (in the case of circular rotation). These and other significant differences between the reverberator and the circular rotation of an excitation wave compel us to distinguish these two regimes of re-entry.
The figure shows the differences found in the behaviour of a plane autowave circulating in a ring and of a reverberator. You can see that, for the same local characteristics of the excitable medium (excitability, refractoriness, etc., given by the nonlinear term), there are significant quantitative differences between the dependencies of the reverberator characteristics and those of the regime of one-dimensional impulse circulation, although the respective dependencies match qualitatively.
Notes
References
Books
Papers
External links
Several simple classical models of autowaves (JS + WebGL), which can be run directly in your web browser; developed by Evgeny Demidov.
Biophysics
Computational science
Biomedical cybernetics
Nonlinear systems
Mathematical modeling
Parabolic partial differential equations | Autowave reverberator | Physics,Mathematics,Biology | 3,061 |
11,196,255 | https://en.wikipedia.org/wiki/Fire%20pot | A fire pot is a container, usually earthenware, for carrying fire. Fire pots have been used since prehistoric times to transport fire from one place to another, for warmth while on the move, for cooking, in religious ceremonies and even as weapons of war.
Early times
Fire pots were vital to the development of civilization. Once humans had learned to contain, control and sustain fires, they had an invaluable tool for cooking food that would have otherwise not been edible. Fire pots were also useful for sharpening spears, hollowing out canoes, baking pottery, and many other tasks, such as staying warm.
At first, humans relied on natural fires, caused by lightning strikes or other natural occurrences, to provide them with a flame to start their own fires. Since natural fires are not very common, humans learned how to make fires by igniting tinder from sparks caused by striking stones together, or by creating friction using a bow drill.
Given the time-consuming nature of early firestarting, humans eventually began to use earthenware vessels, or fire pots, in which slow-burning fires could be kept alight indefinitely by using small quantities of fuel. Nomadic people could carry these small fires with them, using them to start larger fires for their evening camps.
Archaeologists have found that fire pots were in use 10,000 or more years ago, according to finds from the 1936-37 dig at Fell's Cave, which is located in the valley of the Rio Chico, not far from the Strait of Magellan.
Semi-nomadic and sedentary people would have made or acquired more advanced types of fire pots, as opposed to the fully nomadic people who would have used more primitive types. Being more sedentary, people were able to more effectively work with clay, and would have kilns to bake the pottery in, instead of using the traditional fire-baking methods.
Warmth
Portable fire pots have long been used as a source of warmth.
Kangdi
A kangdi, also known as kanger or kangri is a traditional earthen fire pot from Kashmir, used to warm the hands or feet. In Kashmir, in winter, people usually wear a "Phiran" or long woolen gown over their normal dress. To keep the inside of the Phiran warm, they sometimes use a Manann, a fire-pot made of clay. But with no insulation on its clay handles, the Manann is inconvenient.
A Kangdi is an improved version of the Manann, a semi-spherical clay pot enclosed in willow rushes, with handles also made of willow rushes. The pot holds burning coals that stay warm throughout the day. Throughout Kashmir in winter, it is common to see people with one hand holding their Kangdi inside their Phiran, doing the daily chores with the other.
Cooking
The fire pot was probably invented long after people discovered the value of cooking over fire.
Once fire-proof containers became available, such as iron pots, it was natural to design fire pots that both heated and supported the cooking vessel.
Over time, these developed into stoves, used both for cooking and heating.
Adogan
An Adogan is an earthenware fire-pot or indigenous stove found in West Africa, notably in Ilora and Oyo. It has a flat bottom with a carinated wall and an out-turned rim with three decorated lugs to support the cooking pot. A U-shaped hole is cut in one side to let air in and to allow fuel to be inserted.
Chinese hot pot
Hot pot or huoguo (Simplified Chinese: 火锅) is a traditional Chinese social meal. The literal Chinese translation is fire pot, as huo means fire, while guo refers to pot. The Chinese hot pot consists of a simmering metal pot of stock at the center of the dining table. While the hot pot is kept simmering, thinly sliced ingredients are placed into the pot and are cooked at the table. This type of cuisine is also referred to as "steamboat". In Western cooking, the fondue is used in a similar way, although usually with different ingredients.
Warfare
Small earthen pots filled with combustibles were used as early thermal weapons during the classical and medieval periods.
Containers made at first from clay, later from cast iron, known as 'carcasses', were launched by a siege engine, filled with pitch, Greek fire or other incendiary mixtures. These fire pots could cause great damage to besieged cities with largely wooden construction.
In the Siege of Petra (550–551), the Sasanians used fire pots containing sulfur, bitumen and naphtha, a composition called the "oil of Medea"—an early form of chemical weapon.
A description of how to make military fire pots is given in Lucar, 1588, cited by Martin 1994:207-217
"Make great and small earthen pots which must be but half baked, and like unto the picture in the mergent . . . . Fill every of those pots half with grosse gunpowder pressed down hard, and with one of the five several mixtures next following in this Chapter, fill up the other half of those pottes: This done, cover the mouth of every pot with a piece of canvass bound hard about the mouth of the pot, and well imbued in melted brimstone. Also tie round about the middle of every pot a packthreed, and then hang upon the same packthreed round about the pot so many Gunmatches of a finger length as you will, & when you wil throe any of these pottes among enemies, light the same gun-matches that they may so soone as the pot is broken with his fall uppon the ground, fire the mixture of the potte. Or rather put fire to the mixture at the mouth of the pot, & by so doing make the same to burn before you doe throe the potte from you, because it is a better and more surer way than the other: I meane than to fire the said mixture after the pot is broken with burning gunmatches. Moreover this is to be noted, that the small pottes do serve for to be throne out of one ship into an other in fight upon the sea, and that the great pots are to be used in service upon the land for the defence of towns, fortes, walls, and gates, and to burn such things as the enemies shall throe into ditches for to fill up the same ditches, and also to destroy enemies in their trenches and campes"
By the mid-17th century, fire pots had largely been replaced by shells filled with explosives, which may be seen as the direct descendants of military fire pots.
Religion and the arts
There is an element of mystery in fire, which at times has led to fire worship.
Fire pots have been used in religious ceremonies for thousands of years. It would be inappropriate, and probably impossible, to cover all religious uses of fire pots in this article, but a few examples are relevant.
Censers
A Censer is any type of vessel made for burning incense. They range from simple earthenware bowls to intricately carved silver or gold vessels, small table top objects a few centimetres tall to as many as several metres high. In many cultures, burning incense has spiritual and religious connotations, and this influences the design and decoration of the censer.
Before a Buddhist tantric ritual, an assisting monk may swing a censer or thurible as he passes to 'purify' the room. This is a container, usually made of metal, that hangs from three chains. Inside it, powdered incense that has been put on a smoldering bit of charcoal burns slowly, and the smoke escapes through pierced openings in the closed lid. One tradition says that during one of the Buddha's sermons a monk heedlessly swatted a mosquito. The Tathagata is said to have ordered that, in the future, incense ought to be lit in order to keep the flies away, so that people could more easily concentrate on Dharma teachings, but also to prevent the needless taking of lives.
Censers are used in the Roman Catholic, Anglo-Catholic, Old Catholic and Eastern Orthodox sects of the Christian religion during important rituals such as benedictions, processions and important masses.
Early Jewish symbol of God
In Genesis 15, a chapter of the Bible, God instructs Abraham to cut a heifer, a she-goat, a ram, a turtledove, and a young pigeon into halves.
Texts from Mari in northern Mesopotamia from about the same period say that parties entering into a covenant would seal the agreement by cutting a donkey in half and then walking between the severed pieces. One interpretation of the ceremony described in Genesis 15 is that God made an unconditional covenant when God alone (symbolized by the fire pot, or the fire in it) passed between the two halves of the slaughtered animals.
Japanese Kodo ceremony
Kōdō (香道, "Way of Fragrance") is the Japanese art of appreciating incense, and involves using incense within a structure of codified conduct. Participants sit near one another and take turns smelling incense from a censer as they pass it around the group. Participants comment on and make observations about the incense, and play games to guess the incense material.
Sakthi Karagam
Sakthi Karagam is a dance performed in Tamil Nadu with a fire pot on the head in Mariamman or Durga temple rituals. Today it is danced with a pot decorated with flowers on the head and is known as 'Aatta Karagam', symbolising joy and merriment. In earlier times, the clay pot, or Karagam, was considered the residence of the local deity during the festival, which played a crucial role in community bonding. It is not clear whether the pot ever contained fire, or was so named because it was carried over fire by fire walkers.
Descendants of the Fire Pot
Although the fire pot and its ancestor the fire pit are still in use in their original forms, successive technical refinements have led to many modern descendants whose origin in the simple clay container might be hard to guess.
Some have been driven by the need to adapt to new fuels, such as charcoal, oil, coal, coke, kerosene, propane, electricity and microwaves. Others have been made possible through discovery of new materials such as iron, bronze, ceramics and asbestos. Always the motive would have been to improve the design, to make a device for managing fire that was cheaper, more robust, more convenient, more capable of meeting new demands. Often improvements made for industrial purposes found their way into improved cooking devices, and vice versa.
An incomplete list of fire pot descendants includes:
Brazier: A standing or hanging metal bowl or box containing the fire, with perforations for ventilation. A Hibachi is a type of brazier.
Stove: An enclosed space containing the fire, with dampers and regulators to adjust the draft and thus control the heat. A stove allows for cleaner, hotter and more efficient use of fuel than a fire pot or brazier.
Oven: An enclosed compartment of a stove, separate from the fire, used for heating, baking or drying. Ovens may have their origin in the practice of enclosing food in clay or leaves before placing it in the fire, still used in kalua, a traditional Hawaiian cooking method. Ovens make it practical to cook slowly, heating the food throughout, and are the basis of many types of cuisine. Ovens made pottery possible and today are used in many industrial processes.
Boiler: A closed vessel in which water is heated. The discovery that boilers could build up explosive pressure if too well sealed led to the invention of the steam engine, a pivotal technology in the Industrial Revolution.
Barbecue: A device for cooking on a grill over a box containing burning wood, charcoal or, more recently, propane or natural gas.
References
External links
http://www.omda.bg/uploaded_files/files/articles/THE_DOMESTICATION_OF_FIRE__1408526235.pdf
Cooking appliances
Incendiary weapons
Religious objects
Incense equipment
Fireplaces | Fire pot | Physics | 2,559 |
20,444,608 | https://en.wikipedia.org/wiki/Storage%20area%20network | A storage area network (SAN) or storage network is a computer network which provides access to consolidated, block-level data storage. SANs are primarily used to access data storage devices, such as disk arrays and tape libraries from servers so that the devices appear to the operating system as direct-attached storage. A SAN typically is a dedicated network of storage devices not accessible through the local area network (LAN).
Although a SAN provides only block-level access, file systems built on top of SANs do provide file-level access and are known as shared-disk file systems.
Newer SAN configurations enable hybrid SAN and allow traditional block storage that appears as local storage but also object storage for web services through APIs.
Storage architectures
Storage area networks (SANs) are sometimes referred to as the "network behind the servers" and historically developed out of a centralized data storage model, but with their own data network. A SAN is, at its simplest, a dedicated network for data storage. In addition to storing data, SANs allow for the automatic backup of data, and the monitoring of the storage as well as the backup process. A SAN is a combination of hardware and software. It grew out of data-centric mainframe architectures, where clients in a network can connect to several servers that store different types of data. To scale storage capacities as the volumes of data grew, direct-attached storage (DAS) was developed, where disk arrays or just a bunch of disks (JBODs) were attached to servers. In this architecture, storage devices can be added to increase storage capacity. However, the server through which the storage devices are accessed is a single point of failure, and a large part of the LAN network bandwidth is used for accessing, storing and backing up data. To solve the single point of failure issue, a direct-attached shared storage architecture was implemented, where several servers could access the same storage device.
DAS was the first network storage system and is still widely used where data storage requirements are not very high. Out of it developed the network-attached storage (NAS) architecture, where one or more dedicated file servers or storage devices are made available in a LAN. The transfer of data, particularly for backup, therefore still takes place over the existing LAN. If more than a terabyte of data was stored at any one time, LAN bandwidth became a bottleneck. Therefore, SANs were developed, where a dedicated storage network is attached to the LAN, and terabytes of data are transferred over a dedicated high-speed, high-bandwidth network. Within the SAN, storage devices are interconnected. Transfer of data between storage devices, such as for backup, happens behind the servers and is meant to be transparent. In a NAS architecture data is transferred using the TCP and IP protocols over Ethernet. Distinct protocols were developed for SANs, such as Fibre Channel, iSCSI and InfiniBand. Therefore, SANs often have their own network and storage devices, which have to be bought, installed, and configured. This makes SANs inherently more expensive than NAS architectures.
Components
SANs have their own networking devices, such as SAN switches. To access the SAN, so-called SAN servers are used, which in turn connect to SAN host adapters. Within the SAN, a range of data storage devices may be interconnected, such as SAN-capable disk arrays, JBODs and tape libraries.
Host layer
Servers that allow access to the SAN and its storage devices are said to form the host layer of the SAN. Such servers have host adapters, which are cards that attach to slots on the server motherboard (usually PCI slots) and run with a corresponding firmware and device driver. Through the host adapters the operating system of the server can communicate with the storage devices in the SAN.
In Fibre Channel deployments, a cable connects to the host adapter through the gigabit interface converter (GBIC). GBICs are also used on switches and storage devices within the SAN, and they convert digital bits into light impulses that can then be transmitted over the Fibre Channel cables. Conversely, the GBIC converts incoming light impulses back into digital bits. The predecessor of the GBIC was called the gigabit link module (GLM).
Fabric layer
The fabric layer consists of SAN networking devices that include SAN switches, routers, protocol bridges, gateway devices, and cables. SAN network devices move data within the SAN, or between an initiator, such as an HBA port of a server, and a target, such as the port of a storage device.
When SANs were first built, hubs were the only devices that were Fibre Channel capable, but Fibre Channel switches were developed and hubs are now rarely found in SANs. Switches have the advantage over hubs that they allow all attached devices to communicate simultaneously, as a switch provides a dedicated link to connect all its ports with one another. When SANs were first built, Fibre Channel had to be implemented over copper cables; these days, multimode optical fibre cables are used in SANs.
SANs are usually built with redundancy, so SAN switches are connected with redundant links. SAN switches connect the servers with the storage devices and are typically non-blocking, allowing transmission of data across all attached wires at the same time. For redundancy purposes, SAN switches are set up in a meshed topology. A single SAN switch can have as few as 8 ports and up to 32 ports with modular extensions. So-called director-class switches can have as many as 128 ports.
In switched SANs, the Fibre Channel switched fabric protocol FC-SW-6 is used, under which every device in the SAN has a hardcoded World Wide Name (WWN) address in its host bus adapter (HBA). If a device is connected to the SAN, its WWN is registered in the SAN switch name server. In place of a WWN, or worldwide port name (WWPN), SAN Fibre Channel storage device vendors may also hardcode a worldwide node name (WWNN). The ports of storage devices often have a WWN starting with 5, while the bus adapters of servers start with 10 or 21.
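As a small illustration, the Python sketch below classifies a 64-bit WWN using the leading-digit rule of thumb just mentioned. The example WWNs are hypothetical, and the prefix rule is a heuristic taken from the sentence above, not a guarantee of the Fibre Channel standard.

def classify_wwn(wwn: str) -> str:
    """Heuristically classify a WWN by its leading hex digits."""
    digits = wwn.replace(":", "").lower()
    if len(digits) != 16 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError(f"not a 64-bit WWN: {wwn!r}")
    if digits.startswith(("10", "21")):
        return "server HBA port (likely)"
    if digits.startswith("5"):
        return "storage device port (likely)"
    return "unknown"

print(classify_wwn("50:06:01:60:90:20:1e:7a"))  # storage device port (likely)
print(classify_wwn("21:00:00:e0:8b:05:05:04"))  # server HBA port (likely)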
Storage layer
The serialized Small Computer Systems Interface (SCSI) protocol is often used on top of the Fibre Channel switched fabric protocol in servers and SAN storage devices. The Internet Small Computer Systems Interface (iSCSI) over Ethernet and the InfiniBand protocols may also be found implemented in SANs, but are often bridged into the Fibre Channel SAN. However, native InfiniBand and iSCSI storage devices, in particular disk arrays, are also available.
The various storage devices in a SAN are said to form the storage layer. It can include a variety of hard disk and magnetic tape devices that store data. In SANs, disk arrays are joined through RAID, which makes many hard disks look and perform like one big storage device. Every storage device, or even partition on that storage device, has a logical unit number (LUN) assigned to it. This is a unique number within the SAN. Every node in the SAN, be it a server or another storage device, can access the storage by referencing the LUN. The LUNs allow for the storage capacity of a SAN to be segmented and for the implementation of access controls. A particular server, or a group of servers, may, for example, be only given access to a particular part of the SAN storage layer, in the form of LUNs. When a storage device receives a request to read or write data, it will check its access list to establish whether the node, identified by its WWN, is allowed to access the storage area, identified by a LUN. LUN masking is a technique whereby the host bus adapter and the SAN software of a server restrict the LUNs for which commands are accepted. In doing so, LUNs that should never be accessed by the server are masked. Another method to restrict server access to particular SAN storage devices is fabric-based access control, or zoning, which is enforced by the SAN networking devices and servers. Under zoning, server access is restricted to storage devices that are in a particular SAN zone.
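The access-control logic behind LUN masking can be pictured as a lookup table keyed by initiator identity. A minimal Python sketch follows; the WWNs and LUN numbers are hypothetical, and in practice this check is enforced in array firmware and HBA driver software, not application code.

# Hypothetical masking table: initiator WWN -> set of LUNs it may address.
LUN_MASK = {
    "21:00:00:e0:8b:05:05:04": {0, 1, 2},  # application server
    "21:00:00:e0:8b:11:22:33": {3},        # backup server
}

def may_access(initiator_wwn: str, lun: int) -> bool:
    """Return True if the initiator is allowed to address the given LUN."""
    return lun in LUN_MASK.get(initiator_wwn, set())

print(may_access("21:00:00:e0:8b:05:05:04", 2))  # True
print(may_access("21:00:00:e0:8b:11:22:33", 2))  # False: LUN 2 is masked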
Network protocols
A mapping layer to other protocols is used to form a network:
ATA over Ethernet (AoE), mapping of AT Attachment (ATA) over Ethernet
Fibre Channel Protocol (FCP), a mapping of SCSI over Fibre Channel
Fibre Channel over Ethernet (FCoE)
ESCON over Fibre Channel (FICON), used by mainframe computers
HyperSCSI, mapping of SCSI over Ethernet
iFCP or SANoIP mapping of FCP over IP
iSCSI, mapping of SCSI over TCP/IP
iSCSI Extensions for RDMA (iSER), mapping of iSCSI over InfiniBand
Network block device, mapping device node requests on UNIX-like systems over stream sockets like TCP/IP
SCSI RDMA Protocol (SRP), another SCSI implementation for remote direct memory access (RDMA) transports
Storage networks may also be built using Serial Attached SCSI (SAS) and Serial ATA (SATA) technologies. SAS evolved from SCSI direct-attached storage. SATA evolved from Parallel ATA direct-attached storage. SAS and SATA devices can be networked using SAS Expanders.
Software
The Storage Networking Industry Association (SNIA) defines a SAN as "a network whose primary purpose is the transfer of data between computer systems and storage elements". But a SAN does not just consist of a communication infrastructure, it also has a software management layer. This software organizes the servers, storage devices, and the network so that data can be transferred and stored. Because a SAN does not use direct attached storage (DAS), the storage devices in the SAN are not owned and managed by a server. A SAN allows a server to access a large data storage capacity and this storage capacity may also be accessible by other servers. Moreover, SAN software must ensure that data is directly moved between storage devices within the SAN, with minimal server intervention.
SAN management software is installed on one or more servers, and management clients on the storage devices. Two approaches have developed in SAN management software: in-band and out-of-band management. In-band means that management data between server and storage devices is transmitted on the same network as the storage data, while out-of-band means that management data is transmitted over dedicated links. SAN management software will collect management data from all storage devices in the storage layer. This includes information on read and write failures, storage capacity bottlenecks and failure of storage devices. SAN management software may integrate with the Simple Network Management Protocol (SNMP).
In 1999 the Common Information Model (CIM), an open standard, was introduced for managing storage devices and providing interoperability. The web-based version of CIM is called Web-Based Enterprise Management (WBEM) and defines SAN storage device objects and process transactions. Use of these protocols involves a CIM object manager (CIMOM) to manage objects and interactions, and allows for the central management of SAN storage devices. Basic device management for SANs can also be achieved through the Storage Management Interface Specification (SMI-S), where CIM objects and processes are registered in a directory. Software applications and subsystems can then draw on this directory. Management software applications are also available to configure SAN storage devices, allowing, for example, the configuration of zones and LUNs.
Ultimately SAN networking and storage devices are available from many vendors and every SAN vendor has its own management and configuration software. Common management in SANs that include devices from different vendors is only possible if vendors make the application programming interface (API) for their devices available to other vendors. In such cases, upper-level SAN management software can manage the SAN devices from other vendors.
Filesystems support
In a SAN, data is transferred, stored and accessed on a block level. As such, a SAN does not provide data file abstraction, only block-level storage and operations. Server operating systems maintain their own file systems on their own dedicated, non-shared LUNs on the SAN, as though they were local to themselves. If multiple systems were simply to attempt to share a LUN, they would interfere with each other and quickly corrupt the data. Any planned sharing of data on different computers within a LUN requires software. File systems have been developed to work with SAN software to provide file-level access. These are known as shared-disk file systems.
In media and entertainment
Video editing systems require very high data transfer rates and very low latency. SANs in media and entertainment are often referred to as serverless due to the nature of the configuration which places the video workflow (ingest, editing, playout) desktop clients directly on the SAN rather than attaching to servers. Control of data flow is managed by a distributed file system. Per-node bandwidth usage control, sometimes referred to as quality of service (QoS), is especially important in video editing as it ensures fair and prioritized bandwidth usage across the network.
Quality of service
SAN Storage QoS enables the desired storage performance to be calculated and maintained for network customers accessing the device. Some factors that affect SAN QoS are:
Bandwidth: the rate of data throughput available on the system.
Latency: the time delay for a read/write operation to execute.
Queue depth: the number of outstanding operations waiting to execute to the underlying disks (traditional or solid-state drives); the sketch below shows how these three quantities relate.
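These quantities are linked by Little's law: sustainable throughput in operations per second equals queue depth divided by per-operation latency. A minimal Python sketch, with illustrative numbers only:

def iops(queue_depth: float, latency_s: float) -> float:
    """Little's law: concurrency = rate x time in system,
    so sustained IOPS = queue depth / per-operation latency."""
    return queue_depth / latency_s

# e.g. 32 outstanding operations at 0.5 ms each sustain 64,000 IOPS
print(iops(32, 0.0005))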
Alternatively, over-provisioning can be used to provide additional capacity to compensate for peak network traffic loads. However, where network loads are not predictable, over-provisioning can eventually cause all bandwidth to be fully consumed and latency to increase significantly, resulting in SAN performance degradation.
Storage virtualization
Storage virtualization is the process of abstracting logical storage from physical storage. The physical storage resources are aggregated into storage pools, from which the logical storage is created. It presents to the user a logical space for data storage and transparently handles the process of mapping it to the physical location, a concept called location transparency. This is implemented in modern disk arrays, often using vendor-proprietary technology. However, the goal of storage virtualization is to group multiple disk arrays from different vendors, scattered over a network, into a single storage device. The single storage device can then be managed uniformly.
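The location-transparent mapping can be sketched as an extent table that translates virtual block addresses into (array, physical block) pairs. The array names, extent size and addresses below are hypothetical; production arrays implement far richer mappings (thin provisioning, migration, replication).

EXTENT = 1024  # blocks per virtual extent (illustrative)

class VirtualVolume:
    def __init__(self, mapping):
        # mapping[i] = (array_id, physical_start_block) for virtual extent i
        self.mapping = mapping

    def resolve(self, virtual_block: int):
        """Translate a virtual block address to (array, physical block)."""
        extent, offset = divmod(virtual_block, EXTENT)
        array_id, physical_start = self.mapping[extent]
        return array_id, physical_start + offset

vol = VirtualVolume({0: ("array-A", 0), 1: ("array-B", 8192)})
print(vol.resolve(1500))  # ('array-B', 8668): extent 1, offset 476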
See also
List of networked storage hardware platforms
List of storage area network management systems
Massive array of idle disks (MAID)
Storage hypervisor
Storage resource management (SRM)
Converged storage
References
External links
What Is a Storage Area Network (SAN)?
Introduction to Storage Area Networks Exhaustive Introduction into SAN, IBM Redbook
SAS and SATA, solid-state storage lower data center power consumption
SAN NAS Videos
Data management
Telecommunications engineering | Storage area network | Technology,Engineering | 3,007 |
48,348,939 | https://en.wikipedia.org/wiki/Gliophorus%20reginae | Gliophorus reginae is a species of agaric (gilled mushroom) in the family Hygrophoraceae. It has been given the recommended English name of jubilee waxcap. The species has a European distribution, occurring mainly in agriculturally unimproved grassland. Threats to its habitat have resulted in the species being assessed as globally "vulnerable" on the IUCN Red List of Threatened Species.
Taxonomy
The species was first described from England in 2013, as a result of molecular research, based on cladistic analysis of DNA sequences. It was formerly considered one of several colour variants of the widespread Parrot Waxcap Gliophorus psittacinus, which it otherwise resembles. The Latin epithet reginae (meaning "of the queen") was given in honour of the 2012 diamond jubilee and coronation anniversary of Queen Elizabeth II and also because of the royal purple cap colour of the basidiocarps.
Description
Basidiocarps are agaricoid, up to 70 mm (2.75 in) tall, the cap hemispherical to broadly umbonate becoming flat, up to 55 mm (2 in) across. The cap surface is smooth, highly viscid when damp, dull violet-purple, sometimes with pink, red, or brown tones. The lamellae (gills) are waxy, pale cap-coloured, sometimes with yellowish tints. The stipe (stem) is smooth, viscid, white, sometimes yellowish at the base, lacking a ring. The spore print is white, the spores (under a microscope) smooth, inamyloid, ellipsoid, measuring about 6 to 8.5 by 4 to 5.5 μm.
Similar species
The Parrot Waxcap Gliophorus psittacinus is similar, but is typically green and, if not, retains this colour at the top of the stipe. The Butterscotch Waxcap Gliophorus europerplexus is also similar, but is orange-brown to pinkish brown.
Distribution and habitat
The Jubilee Waxcap is currently known from England and Wales, Denmark, France, Slovakia, and Spain. Like most other European waxcaps, it occurs in old, agriculturally unimproved, short-sward grassland (pastures and lawns).
Recent research suggests waxcaps are neither mycorrhizal nor saprotrophic but may be associated with mosses.
Conservation
Gliophorus reginae is typical of waxcap grasslands, a declining habitat due to changing agricultural practices. As a result, the species is of global conservation concern and is listed as "vulnerable" on the IUCN Red List of Threatened Species. Gliophorus reginae also appears on the official national red list of threatened fungi in Denmark.
References
Hygrophoraceae
Fungi described in 2013
Fungi of Europe
Fungus species | Gliophorus reginae | Biology | 586 |
25,887,343 | https://en.wikipedia.org/wiki/Psilocybe%20acutipilea | Psilocybe acutipilea is a species of mushroom-forming fungus in the family Hymenogastraceae. It was discovered in October 1881 in Apiahy, Sao Paulo State, Brazil by Carlos Spegazzini, and described by him as a new species of Deconica in 1889. Gastón Guzmán transferred it to Psilocybe in 1978, but Ramirez-Cruz considered it a possible synonym of Psilocybe mexicana, but the type specimen was too moldy for them to be certain.
See also
List of Psilocybin mushrooms
Psilocybin mushrooms
References
Entheogens
Psychoactive fungi
acutipilea
Psychedelic tryptamine carriers
Fungi of North America
Fungi described in 1889
Fungus species | Psilocybe acutipilea | Biology | 149 |
18,193,060 | https://en.wikipedia.org/wiki/Ali%20Jadbabaie | Ali Jadbabaie is an Iranian-American systems scientist and decision theorist and the JR East Professor of Engineering at Massachusetts Institute of Technology. Prior to joining MIT, he was the Alfred Fitler Moore Professor of Network Science in the Department of Electrical and Systems Engineering at the University of Pennsylvania and a postdoc at the department of Electrical and Computer Engineering at Yale University under A. Stephen Morse (2001–2002). Jadbabaie is an internationally renowned expert in the control and coordination of multi-robot formations, distributed optimization, network economics, and network science. He is currently the head of the Civil and Environmental Engineering Department at MIT. Previously he served as the Associate director of the Institute for Data, Systems and Society (IDSS) at MIT and was the program Head for the Social and Engineering Systems PhD program. He was a cofounder and director of the Singh Program in Networked & Social Systems Engineering (NETS) at the University of Pennsylvania's School of Engineering and Applied Sciences.
Education
Ph.D. Control and Dynamical Systems, October 2000, California Institute of Technology, Pasadena, CA, USA
M.S. Electrical Engineering, December 1997, University of New Mexico, Albuquerque, NM, USA
B.S. Electrical Engineering, February 1995, Sharif University of Technology, Tehran, Iran
References
External links
Home page
Control theorists
Iranian roboticists
Sharif University of Technology alumni
Living people
American roboticists
Fellows of the IEEE
Year of birth missing (living people) | Ali Jadbabaie | Engineering | 297 |
11,128,048 | https://en.wikipedia.org/wiki/Mycosphaerella%20areola | Mycosphaerella areola is a plant pathogen infecting cotton.
See also
List of Mycosphaerella species
References
areola
Fungi described in 1932
Cotton diseases
Fungal plant pathogens and diseases
Fungus species | Mycosphaerella areola | Biology | 45 |
30,034,086 | https://en.wikipedia.org/wiki/Technologies%20in%20Minority%20Report | The 2002 science fiction neo-noir film Minority Report, based on the 1956 short story of the same name by Philip K. Dick, featured numerous fictional future technologies which have proven prescient based on developments around the world. Before the film's production began, director Steven Spielberg invited fifteen experts to think about technologies that would be developed by 2054, the setting of the film.
Background
After E.T., Spielberg started to consult experts and put more scientific research into his films. In 1999, he invited fifteen experts convened by the Global Business Network, its chairman Peter Schwartz, and its co-founder Stewart Brand to a hotel in Santa Monica, California for a three-day "think tank". He also invited journalist Joel Garreau to cover the event. He wanted to consult with the group to create a plausible "future reality" for the year 2054 as opposed to a more traditional "science fiction" setting. Dubbed the "think tank summit", the experts included architect Peter Calthorpe, Douglas Coupland, computer scientist Neil Gershenfeld, biomedical researcher Shaun Jones, computer scientist Jaron Lanier, and former Massachusetts Institute of Technology (MIT) architecture dean William J. Mitchell.
Production designer Alex McDowell kept what was nicknamed the "2054 bible", an 80-page guide created in preproduction which listed all the decided-upon aspects of the future world: architectural, socio-economic, political, and technological. While the discussions did not change key elements in the film's action sequences, they were influential in the creation of some of the more utopian aspects of the film, though John Underkoffler, the science and technology advisor for the film, described it as "much grayer and more ambiguous" than what was envisioned in 1999. Underkoffler, who designed most of Anderton's interface after Spielberg told him to make it like "conducting an orchestra," said "it would be hard to identify anything [in the movie] that had no grounding in reality." For example, Underkoffler conscientiously treated his cinematic representation of the gestural interface as an actual prototype: "We worked so hard to make the gestural interface in the film real. I really did approach the project as if it were an R&D thing." McDowell teamed up with architect Greg Lynn to work on some of the technical aspects of the production design. McDowell said that "[a] lot of those things Alex cooked up for Minority Report, like the 3-D screens, have become real."
Product placement was used to depict the predicted lack of privacy and excessive publicity in a future society. The advertisements in Minority Report were handled by Jeff Boortz of Concrete Pictures, who said "the whole idea, from a script point of view, was that the advertisements would recognize you -- not only recognize you, but recognize your state of mind. It's the kind of stuff that's going on now with digital set-top boxes and the Internet." Nokia designed the phones used by the characters, and Lexus paid the producers $5 million for the rights to design the futuristic "Mag-Lev" cars. All told money raised through product placement accounted for $25 million of the film's $102 million production budget.
Real world influence
Spielberg described his ideas for the film's technology to Roger Ebert before the film's release:
News sources have noted the future technologies depicted in the film were prescient. The Guardian published a piece titled "Why Minority Report was spot on" in June 2010, and the following month Fast Company examined seven crime fighting technologies in the film similar to ones then currently appearing. It summarized that "the police state imagined in the Tom Cruise flick feels a bit more real every day." Other major media outlets such as the Wall Street Journal have published articles dedicated to this phenomenon, and National Public Radio (NPR) published an August 2010 podcast which analyzed the film's accuracy in predicting future technologies.
Technologies realized
Multi-touch interfaces
Multi-touch interfaces, similar to Anderton's, have been put out by Microsoft (2007), Obscura Digital (2008), MIT (2009), Intel (2009), and Microsoft again, this time for their Xbox 360 (2010). A company representative at the 2007 premiere of the then Microsoft Surface (later renamed Microsoft PixelSense) promised it "will feel like Minority Report," and when Microsoft released the Kinect motion sensing camera add-on for their Xbox 360 gaming console in 2010, the Kinect's technology allowed several programmers, including students at MIT, to create Minority Report-inspired user interfaces.
Iris scanners
Iris scanners, a form of biometrics, already existed by the time the film appeared in theaters. Media outlets described the systems manufactured by a Manhattan company named Global Rainmakers Incorporated (GRI) (2010) as similar to those in the movie. GRI disputed the notion that its technology could be the threat to privacy it is in the film. "Minority Report is one possible outcome," a corporate official told Fast Company. "I don't think that's our company's aim, but I think what we're going to see is an environment well beyond what you see in that movie--minus the precogs, of course." The company is installing hundreds of the scanners in Bank of America locations in Charlotte, North Carolina, and has a contract to install them on several United States Air Force bases, though the technology does not work in the way depicted in the film.
Technologies in development
Companies like Hewlett-Packard (HP) have announced they were motivated to do research by the film, in HP's case to develop cloud computing.
Autonomous cars
In the film, Anderton uses vehicles which can be both driven manually and autonomously; in one scene the police remotely override the vehicle in order to bring him into custody. Spielberg commissioned Lexus to design a car specifically for Minority Report, resulting in the Lexus 2054, a fuel-cell powered autonomous vehicle which is seen being built in an automated factory in the film. Autonomous, or self-driving, cars have been in development since 1984. As artificial intelligence and ground-sensing technologies like LIDAR began to improve in the 2000s, major automotive manufacturers such as Ford, Nissan and General Motors began developing self-driving prototypes. Google began developing a self-driving vehicle prototype in 2009, a project later named Waymo, while Tesla Motors rolled out the Autopilot feature on their Model S vehicle in 2015. In 2011, the State of Nevada became the first jurisdiction in the world to formally legalise autonomous vehicles on public roads. Countries such as the United Kingdom (2013), France (2014), Switzerland (2015) and Singapore (2016) passed laws which allowed the testing of autonomous vehicles on public roads, with a view to further changes in legislation as the technology improves.
Insect robots
Insect robots similar to the film's spider robots are being developed by the United States Military. These insects will be capable of reconnaissance missions in dangerous areas not fit for soldiers, such as "occupied houses". They serve the same purpose in the film. According to the developer, BAE Systems, the "goal is to develop technologies that will give our soldiers another set of eyes and ears for use in urban environments and complex terrain; places where they cannot go or where it would be too dangerous."
Gesture recognition
Multiple gesture recognition technologies currently in existence or under development have been compared to the one in the film.
Personalized advertising
Most of the advertising to consumers in Minority Report occurs when they are out of their homes. The advertisements interact in various ways; an Aquafina ad splashes water on its customers, Guinness recommends its products to the downtrodden to recover from "a hard day at work", a cereal box from which Anderton eats has a video advertisement, and when Anderton is fleeing the PreCrime force, an American Express advertisement observes, "It looks like you need an escape, and Blue can take you there," and a Lexus ad says, "A road diverges in the desert. Lexus. The road you're on, John Anderton, is the one less traveled."
Although the advertising-oriented website ClickZ called the film's interactive advertisements "a bit farfetched" in 2002, billboards capable of facial recognition are being developed by the Japanese company NEC. These billboards will theoretically be able to recognize passers-by via facial recognition, call them by name, and deliver customer specific advertisements. Thus far, the billboards can recognize age and gender and deliver demographically appropriate advertisements, but cannot discern individuals. According to The Daily Telegraph, the billboards will "behave like those in...Minority Report," uniquely identifying and communicating to those in their vicinities. IBM is developing similar billboards, with plans to deliver customized advertisements to individuals who carry identity tags. Like NEC, the company feels they will not be obtrusive as their billboards will only advertise products in which a customer is interested. Advertisers are embracing these billboards as they attempt to reduce costs by wasting fewer advertisements on uninterested consumers.
Crime prediction software
Crime prediction software was developed by a professor from the University of Pennsylvania (2010). As in the film, the program was announced for a trial run in Washington D.C., which, if successful, would have led to a national rollout. According to Fast Company, "IBM's new Blue CRUSH (Crime Reduction Utilizing Statistical History) program feels almost directly inspired by Minority Report. Similar to the 'precogs', IBM's new system uses 'predictive analytics', mining years and years of incident reports and law enforcement data to forecast criminal 'hot spots'. Police in Memphis have already had great success with the $11-billion 'precrime' predicting tool: since installing Blue CRUSH, the city has seen a 31% drop in serious crime."
University of Chicago researchers published work on an approach to predicting crime up to a week in advance, but based on geographic location rather than on the perpetrator, a privacy shortcoming of the CRUSH system.
E-papers
Xerox has been trying to develop something similar to e-paper since before the film was released in theaters. Electronic paper has been announced as being developed by MIT (2005), Germany (2006), media conglomerate Hearst Corporation (2008), and the South Korean electronics manufacturer LG (2010). In 2005, when the Washington Post asked the chief executive of MIT's spin-off handling their research when "the Minority Report newspaper" would be released, he predicted "around 2015." Tech Watch's 2008 article "‘Minority Report’ e-newspaper on the way" noted that Hearst was "pushing large amounts of cash into" the technology.
Jet packs
Spielberg decided to add the jetpacks worn by the policemen as a tribute to old science-fiction serials such as Commando Cody, even though the scientists considered them unrealistic. The jet packs are the only future technology depicted in the film which originated in science fiction. They already exist; perhaps their most famous flights occurred in the pregame ceremonies before Super Bowl I in 1967 and in Los Angeles during the ceremonies of the 1984 Summer Olympics. They have been held back because there is currently no way to mitigate their dangers to the operator, and they have extremely limited range.
Notes
External links
2002 science fiction films
Films about anti-fascism
Films about technology
Fiction about genetic engineering
Science fiction studies
Minority Report (film)
Gesture recognition
2000s political films | Technologies in Minority Report | Engineering,Biology | 2,445 |
28,189,638 | https://en.wikipedia.org/wiki/Affilin | Affilins are artificial proteins designed to selectively bind antigens. Affilin proteins are structurally derived from human ubiquitin (historically also from gamma-B crystallin). Affilin proteins are constructed by modification of surface-exposed amino acids of these proteins and isolated by display techniques such as phage display and screening. They resemble antibodies in their affinity and specificity to antigens but not in structure, which makes them a type of antibody mimetic. Affilin was developed by Scil Proteins GmbH as potential new biopharmaceutical drugs, diagnostics and affinity ligands.
Structure
Two proteins, gamma-B crystallin and ubiquitin, have been described as scaffolds for Affilin proteins. Certain amino acids in these proteins can be substituted by others without losing structural integrity, a process creating regions capable of binding different antigens, depending on which amino acids are exchanged. In both types, the binding region is typically located in a beta sheet structure, whereas the binding regions of antibodies, called complementarity-determining regions, are flexible loops.
Based on gamma crystallin
Historically, Affilin molecules were based on gamma crystallin, a family of proteins found in the eye lens of vertebrates, including humans. It consists of two identical domains with mainly beta sheet structure and a total molecular mass of about 20 kDa. The eight surface-exposed amino acids 2, 4, 6, 15, 17, 19, 36, and 38 are suitable for modification.
Based on ubiquitin
Ubiquitin, as the name suggests, is a highly conserved protein occurring ubiquitously in eukaryotes. It consists of 76 amino acids in three and a half alpha helix windings and five strands constituting a beta sheet. For example, the eight surface-exposed exchangeable amino acids 2, 4, 6, 62, 63, 64, 65, and 66 are located at the beginning of the first N-terminal beta strand (2, 4, 6), at the nearby beginning of the C-terminal strand and the loop leading up to it (63–66). The resulting Affilin proteins are about 10 kDa in mass.
Properties
The molecular mass of crystallin- and ubiquitin-based Affilin proteins is only one eighth or one sixteenth of that of an IgG antibody, respectively. This leads to improved tissue permeability, heat stability up to 90 °C (194 °F), and stability towards proteases as well as acids and bases. The latter enables Affilin proteins to pass through the intestine, but like most proteins they are not absorbed into the bloodstream. Renal clearance, another consequence of their small size, is the reason for their short plasma half-life, generally a disadvantage for potential drugs.
Production
Molecular libraries of Affilin proteins are generated by randomizing sets of amino acids by mutagenesis methods. Substituting eight selected amino acids at the potential binding site, with 19 possible residues each, gives 19⁸ ≈ 17,000,000,000 possible combinations. Cysteine is excluded because of its liability to form disulfide bonds. In an Affilin protein comprising two modified ubiquitin molecules, for example, up to 14 amino acids are exchanged, resulting in 19¹⁴ ≈ 8 × 10¹⁷ combinations, but not all of these are realized in a given library.
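A quick Python check of these library sizes (19 possible residues per randomized position, cysteine excluded):

# Library sizes for 8 and 14 randomized positions, 19 residues each.
print(f"{19 ** 8:,}")     # 16,983,563,041  (~1.7 x 10^10)
print(f"{19 ** 14:.2e}")  # 7.99e+17        (~8 x 10^17)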
The next step is the selection of Affilin proteins that bind the desired target protein. To this end display techniques such as phage display or ribosome display are used. The fitting species are isolated and characterized physically, chemically and pharmacologically. Subsequent dimerisation or multimerisation can increase plasma half-life and, due to avidity, affinity to the target protein. Alternatively, multispecific Affilin molecules can be generated, binding different targets simultaneously. Radionuclides or cytotoxins can be conjugated to Affilin proteins, making them potential tumour therapeutics and diagnostics. Conjugation of cytokines has also been tested in vitro.
Large-scale production of Affilin proteins is facilitated by E. coli and other organisms commonly used in biotechnology.
References
External links
Scil Proteins, the developer
Antibody mimetics | Affilin | Chemistry | 878 |
28,108,729 | https://en.wikipedia.org/wiki/HD%20170657 | HD 170657 is a star in the southern constellation Sagittarius. It is a suspected variable star that has been measured ranging in apparent visual magnitude from 6.82 down to 6.88, which is dim enough to be a challenge to view with the naked eye even under ideal conditions. The star is located at a distance of 43 light years from the Sun based on parallax. It is drifting closer with a radial velocity of −43 km/s, and is predicted to come as close as in around 266,200 years. The space velocity components of this star are = .
This is a K-type main-sequence star with a stellar classification of K2V, which indicates that, much like the Sun, it is generating energy at its core through hydrogen fusion. The star has 79% of the mass of the Sun and 75% of the Sun's radius. It is nearly eight billion years old and is spinning with a projected rotational velocity of 4.2 km/s. The star is radiating 33.6% of the luminosity of the Sun from its photosphere at an effective temperature of 5,133 K. When observed with the Spitzer Space Telescope, this star did not display an excess emission of infrared radiation, which might otherwise indicate the presence of an orbiting debris disk.
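The quoted radius, temperature and luminosity are roughly consistent with the Stefan–Boltzmann law, L/L☉ = (R/R☉)²(T/T☉)⁴. A quick Python check (the small difference from the quoted value reflects measurement uncertainties):

T_SUN = 5772.0  # K, nominal solar effective temperature

r = 0.75        # stellar radius in solar radii (from the text)
t = 5133.0      # effective temperature in kelvins (from the text)

luminosity = r**2 * (t / T_SUN)**4
print(f"L = {luminosity:.2f} L_sun")  # ~0.35, close to the quoted 33.6%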
References
K-type main-sequence stars
HD, 170657
Sagittarius (constellation)
BD-18 4986
0761
170657
090790 | HD 170657 | Astronomy | 301 |
33,209,199 | https://en.wikipedia.org/wiki/Amir%20Faghri | Amir Faghri is an American professor and leader in the engineering profession as an educator, scientist, and administrator. He is currently Distinguished Professor Emeritus of Engineering and Distinguished Dean Emeritus at the University of Connecticut. He is also currently Distinguished Adjunct Professor at the University of California, Los Angeles. Faghri served as Head of the Mechanical Engineering Department from 1994 to 1998, and Dean of the School of Engineering at the University of Connecticut from 1998 to 2006. Faghri is well known for his contributions to the field of heat transfer. He is the world's leading expert in the area of heat pipes and a contributor to thermal-fluids engineering in multiphase heat transfer.
Education
Amir Faghri received his M.S. and Ph.D. degrees in Mechanical Engineering from the University of California, Berkeley in 1974 and 1976, respectively. He received his B.S. degree from Oregon State University with highest honors in 1973.
Technical contributions
Faghri is the author of six books and has published more than 350 archival technical publications, including 245 journal papers. He holds eleven U.S. patents related to heat pipes, energy storage devices, and fuel cells. Faghri serves on the editorial boards of eight scientific journals, including the International Journal of Heat and Mass Transfer, the International Communications in Heat and Mass Transfer, Heat Transfer Engineering, and Frontiers in Heat and Mass Transfer in roles such as executive editor, editor-in-chief, founding editor, and honorary editorial board member. In the 1980s, he unraveled complex thermal problems, including convection in the presence of phase-change materials for thermal energy storage in space. This breakthrough led to a more rational design of cooling systems for NASA and the U.S. Air Force. In the 1990s, he developed high heat flux miniature heat pipes for commercial cooling of laptop computer chips, which have been a principal contributor to the ubiquitous presence of heat pipes for cooling microprocessors in present-day laptop computers.
Career
Amir Faghri started his academic career at Aryamehr University (now Sharif University of Technology) in 1976. He was one of the founding faculty members and administrators who established Isfahan University of Technology in 1977. He served on the faculty of IUT until 1980 as the founding director of its Energy Division (now separated into the Mechanical Engineering and Chemical Engineering Departments). In 1981, he joined the University of California, Berkeley as a visiting professor to teach thermal and energy courses. Following Berkeley, Faghri joined the faculty of Wright State University in 1982 and was promoted to Brage Golding Distinguished Professor in 1989. He developed a nationally recognized heat transfer group and laboratory at WSU, interacting extensively with NASA and the U.S. Air Force.
Faghri joined the University of Connecticut in 1994 and served as Head of the Mechanical Engineering Department from 1994 to 1998, United Technologies Corporation Chair Professor in Thermal-Fluids Engineering from 2004 to 2010, and Dean of the School of Engineering from 1998 to 2006. During his tenure as Dean, he attracted corporate and alumni support to establish 17 endowed professorships, including 11 chair professorships. He increased total enrollment by 106%, increased the number of merit scholarships by approximately 200%, and added three new buildings/facilities. He also doubled the number of undergraduate degree programs from 6 to 12. Faghri was the founder of the Connecticut Global Fuel Cell Center (renamed later as the Center for Clean Energy Engineering) at the University of Connecticut. Faghri joined the University of California, Los Angeles in 2022. Faghri was elected as a director of the public Company RBC Bearings Incorporated in May 2022.
Honors and awards
Amir Faghri has received many national and international honors and awards, including the American Institute of Aeronautics and Astronautics Thermophysics Award in 1998, the American Society of Mechanical Engineers Heat Transfer Memorial Award in 1998, the ASME James Harry Potter Gold Medal in 2005, and the ASME/AIChE Max Jakob Memorial Award in 2010, which is considered the highest honor in the field of heat transfer. Faghri has been a longtime fellow of the ASME and an elected member of the Connecticut Academy of Science and Engineering. He was also inducted into the Oregon State University Council of Distinguished Engineers in 1999. He was the recipient of the George Grover Medal, given at the 19th International Heat Pipe Conference in Pisa, Italy, in June 2018. In 2019, he was elected an Honorary Member of the American Society of Mechanical Engineers in recognition of distinguished service that contributes significantly to the attainment of the goals of the engineering profession. He holds the number 1 lifetime ranking among all scholars worldwide in heat pipes according to ScholarGPS.
Books
Faghri has authored six books:
Faghri, A., and Zhang, Y. (2020). Fundamentals of Multiphase Heat Transfer and Flow. Springer Nature Switzerland AG.
Faghri, A. (2016). Heat Pipe Science and Technology, Second edition. Global Digital Press.
Faghri, A., Zhang, Y., and Howell, J. R. (2010). Advanced Heat and Mass Transfer. Global Digital Press, Columbia, MO.
Faghri, A., and Zhang, Y. (2006). Transport Phenomena in Multiphase Systems. Elsevier Academic Press.
Faghri, A. (1995). Heat Pipe Science and Technology. Taylor & Francis Inc.
Faghri, A. (1991). Thermal Science Measurements. Kendall/Hunt Publishing Company.
His signature work, Heat Pipe Science and Technology, ranks as the most widely cited book on the subject of heat pipes according to Google Scholar. His textbook, Transport Phenomena in Multiphase Systems, presented, for the first time, a unified fundamental treatise on all three forms of phase change: boiling and evaporation, melting and solidification, and sublimation and vapor deposition. His latest textbook, Advanced Heat and Mass Transfer, covers the subject of heat and mass transfer with a focus on recent advances in the field.
References
External links
Biographical information at the University of Connecticut
ScienceDirect - In Celebration of Professor Amir Faghri on his 60th Birthday
ASME Honors Professor Amir Faghri of the University of Connecticut
Thermal-Fluids Central: e-Books - Amir Faghri
Google Books - Transport Phenomena in Multiphase Systems
Thermal-Fluids Central: e-Books - Advanced Heat and Mass Transfer
Thermal-Fluids Central: Journals - Frontier in Heat and Mass Transfer - An International Journal
University of California, Los Angeles - University of California, Los Angeles
Living people
1951 births
University of Connecticut faculty
UC Berkeley College of Engineering alumni
Fellows of the American Society of Mechanical Engineers
Thermodynamicists
American fluid dynamicists
Iranian expatriate academics | Amir Faghri | Physics,Chemistry | 1,368 |
49,780,017 | https://en.wikipedia.org/wiki/Tripod%20Beta | Tripod Beta is an incident and accident analysis methodology made available by the Stichting Tripod Foundation via the Energy Institute. The methodology is designed to help an accident investigator analyse the causes of an incident or accident in conjunction with conducting the investigation. This helps direct the investigation as the investigator will be able to see where more information is needed about what happened, or how or why the incident occurred.
Early development
Tripod Beta was developed by Shell International Exploration and Production B.V. as the result of Shell-funded academic research in the 1980s and 1990s. Such research contributed towards the development of the Swiss cheese model of accident causation, and in the late 1990s and early 2000s, towards the development of the Hearts and Minds safety culture toolkit.
The research was based on the following hypotheses:
Accidents happen because controls fail (now known as the Swiss Cheese model)
The underlying causes of controls failing are due to underlying causes in the way we manage
Those underlying causes, metaphorically comparable with 'pathogens', are present long before an accident occurs
Those 'imperfections' are known by some of the people before the incident occurs
People are usually well intentioned, trying to get their task done despite the imperfections in the system
If we can identify those failures and take action to remove them, we will reduce the probability of accidents
The early research focused on a predictive tool to identify underlying causes of incidents before they occurred, rather than an incident investigation methodology. This would later become the basis for Tripod Delta.
The incident investigation methodology, whilst always part of the research, came later, around 1990. Initial Tripod investigations followed a tabular approach, as graphical software was not yet available.
Following the 1988 Piper Alpha disaster and the Lord Cullen report in 1990, Shell International created a team to look at safety management systems and safety cases. That team worked until 2004 and developed a number of approaches; the E&P Forum (later the Oil and Gas Producers Association) guidance on safety cases was founded on its work. The team worked closely with Leiden and Manchester Universities, building on the understanding of accident causation that had been developed in the 1984–2000 research program.
In 1992 Microsoft released Windows version 3.1. That gave the team the ability, for the first time, to create graphical representations of the theories developed. Two software-based tools were developed: Bow Tie and Tripod Beta.
Stichting Tripod Foundation
In 1998, following publicity of Tripod Beta, Shell International Exploration and Production B.V. transferred copyright of the Tripod Beta methodology to the Stichting Tripod Foundation, a charitable body under Dutch law. The Foundation's purpose is to promote best practice in industry through the sensible usage of Tripod technologies to aid in the understanding and prevention of accidents and incidents. In 2012 the Foundation partnered with the Energy Institute in the UK in order to help achieve this. The Energy Institute currently publishes the official guide on using the Tripod Beta methodology. The Stichting Tripod Foundation also accredits approved training courses, and assesses the competence of users of the Tripod methodology. Users who are assessed as competent in Tripod Beta are accredited as 'Tripod Practitioners'.
The methodology
Tripod Beta is a methodology that can be conducted via pen and paper or using specialized software.
The methodology combines a number of theories of accident causation into a single model (a 'Tripod tree') of an accident or incident, most notably the Swiss cheese model (barrier-based risk management) and human factors-oriented theories such as GEMS (Generic Error-Modelling System), as well as the widely accepted 'GOP' (Gap, Outcome and Power) model by Martin Fishbein and Icek Ajzen, which expands on the Theory of Reasoned Action (TRA).
A Tripod tree is divided into three sections.
What happened unexpectedly?
An Event in terms of Tripod Beta is the unexpected, unwanted or adverse outcome of a willfully carried out and intended process. The sequence of such Events in an incident is shown in the tree as a series of 'trios', each a simple logic (AND) gate that tells how the combination of two events led to an outcome. The outcome can then become an event that can combine with another event to cause a subsequent outcome, and so on.
As the sequence of trios goes forward in time, the tree ends when the last incident occurs, but if relevant can also take into account what happened after the incident (such as emergency response).
Potential events may also be investigated: events that did not 'materialize', either because a 'barrier' prevented them from happening or, less likely, by sheer 'randomness'.
As the sequence goes backwards in time, the tree usually begins with the last 'normal' Event, i.e. an event that was a normal part of (business) operations.
This represents a logical place at which to start investigating an incident, as everything that happened after this was unusual and therefore worth asking 'what went wrong?'.
A trio has three elements: the Event (the outcome, a change in state to an object, causing an effect such as an injury), the Object (the person or thing that was changed or damaged), and the Agent of Change (the energy, 'driving' force or hazard that caused change or damage to the object). A logic test is used to ensure the correct identification of these elements: 'Agent of change' acts upon 'object' and results in 'event'. For example, 'Fire' acts upon 'Person' and results in 'Person burnt by fire'.
The Tripod practitioner first models the incident by constructing a series of Trios that explain 'what happened'.
Trees usually have between two and five trios linked via interconnecting nodes, where either an Event becomes an 'Agent of Change' in a subsequent trio, or an Object simultaneously becomes an Event if affected by another Agent of Change.
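As an illustration only (Tripod Beta itself prescribes no code), a trio and its logic test could be modelled in Python like this:

from dataclasses import dataclass

@dataclass
class Trio:
    agent_of_change: str  # the hazard or 'driving' force
    obj: str              # the person or thing acted upon
    event: str            # the resulting change of state

    def logic_test(self) -> str:
        """Render the sentence used to sanity-check a trio."""
        return (f"'{self.agent_of_change}' acts upon '{self.obj}' "
                f"and results in '{self.event}'.")

print(Trio("Fire", "Person", "Person burnt by fire").logic_test())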
How did it happen?
In Tripod theory, accidents are managed through the use of 'Barriers'. Barriers are (intended) functions of a (safety) management system, such as automated trips, relief valves, etc., that prevent an Agent of Change or hazard from causing an unexpected change or incident. Barriers are often people's actions (interventions) when conducting critical tasks (such as responding to alarms), frequently, though not necessarily, described by rules and procedures.
Incidents are therefore 'allowed' to happen by the ineffectiveness, at that particular point in time, of one or more of these barriers.
Once the Tripod practitioner has created a series of Trios, the next step is to identify the barriers that should have been in place to prevent the incident occurring. This is done for each individual Trio. Only barriers that could have actually mitigated or prevented the next event are considered. Predominantly, 'Failed Barriers' are considered. These are barriers that should have prevented the incident but failed for various reasons. For example, a barrier to prevent injury in a car is a seat belt; however, this barrier may fail because the driver did not wear the seat belt, or because the seat belt mechanism itself was faulty.
'Missing Barriers' (barriers that should have been in place according to 'best practice' but had not been established by the organisation), 'Inadequate Barriers' (barriers that functioned as intended but could not achieve the required function to prevent the incident; for example, a seat belt will only prevent serious injury under certain circumstances) and 'Effective Barriers' (barriers that succeeded in preventing the subsequent event) are also considered. If the analysis is modelling a 'Potential Event', unless the event was only prevented through sheer luck, there will be one or more Effective Barrier within the incident trajectory. For example, a seat belt functions to prevent the death of the driver.
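Continuing the illustrative sketch above, the four barrier states could be represented as follows; again, this is a toy model for clarity, not part of the methodology itself.

from dataclasses import dataclass
from enum import Enum

class BarrierState(Enum):
    EFFECTIVE = "prevented the subsequent event"
    FAILED = "in place but did not function when needed"
    INADEQUATE = "functioned as intended but could not prevent the event"
    MISSING = "required by good practice but never established"

@dataclass
class Barrier:
    name: str
    state: BarrierState

# e.g. the seat-belt example from the text
seat_belt = Barrier("Seat belt worn by driver", BarrierState.FAILED)
print(f"{seat_belt.name}: {seat_belt.state.value}")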
Why did it happen?
Once the investigator has identified the sequence of events, and the Failed-, Missing-, and Inadequate- Barriers, the next step is to understand the causes of these being ineffective when needed.
Immediate causes
In Tripod theory, barriers fail because of human action or inaction. This may be human action directly related to the barrier functionality (such as the driver not wearing the seat belt), but may also be indirect, such as a failure during the design or installation of the barrier, or the failure of management to consider implementing the barrier. This human action or inaction is called the 'Immediate Cause': the substandard act or human error. Often, when (non-Tripod) investigations determine that the cause of an accident was human error, in Tripod terms this would relate to the immediate cause only.
Preconditions
The reasons for substandard acts and human error cannot always be definitively known; however, it is known that human errors have situational or psychological precursors. These 'Preconditions' are aspects of the working environment that are likely to have contributed towards the substandard action or inaction. Typical Preconditions may be: fatigue due to improper work-life balance; perception that a guard is not required; loss of situation awareness; improper motivation; poor supervision; rushing in order to complete a job quickly; a noisy or dark environment; confusing procedures; or an incorrect understanding of the work objective.
Through interviews and investigation the investigator is able to identify a number of Preconditions that likely contributed towards the substandard action.
Underlying causes
In Tripod theory, Preconditions represent aspects of the working environment that organisations should try to manage, usually via good leadership, safety culture, and a well-documented and implemented (safety) management system. For example: fatigue of the workforce can be managed by adequate shift rotas, and policies on shift length and overtime; rushing in order to complete a job quickly can be managed by leaders not sending conflicting messages that prioritize productivity over safety, etc. These weaknesses or failures of leadership, culture or management systems are the underlying causes of accidents and incidents. They help create, or fail to correct, the Preconditions.
The investigator looks for evidence of management system-level failures that created or failed to control the Preconditions. For example, this may be ambiguously worded, or lack of, written policy, unclear management-level responsibilities, apparent lack of visibility of leadership, ineffective risk management processes, etc. Tripod Beta encourages the investigator to consider these aspects of the incident.
Importantly, Tripod Beta places great emphasis on identifying the Underlying Causes of accidents and incidents because, whilst many aspects of an accident (such as the sequence of events, Barriers and Preconditions) may be quite specific to a particular accident or incident, Underlying Causes are non-specific and are likely to be the cause, or potential cause, of many different accidents and incidents, even those that seem completely unrelated.
Recommendations
The outcome of a Tripod Beta analysis is usually a number of recommendations for improvements within the organisation in order to prevent the same or other incidents occurring. Recommendations may or may not be formulated by the person investigating.
Recommendations focus only on two aspects of the Tripod analysis: the Barriers and the Underlying Causes.
It is important to strengthen or reinstate the barriers so that the particular operation that was investigated can continue. Recommendations for improving Barriers aim to prevent the same (or a similar) incident happening and may involve fixing equipment, or putting in place extra checks and additional independent barriers where existing barriers rely too heavily on human performance.
As Underlying Causes can be causal in many different types of incident, tackling them may bring the greater long-term benefit by preventing multiple incidents. Recommendations to tackle Underlying Causes are often aimed at the management-system level and are sometimes much harder to implement.
Recommendations are not made for other aspects of the incident (such as the Immediate Causes), as such recommendations are unlikely to be effective at preventing further incidents. For example, recommendations addressing Immediate Causes (the substandard actions) often focus on retraining or punishing the person involved, which is unlikely to prevent other people from making the same error in future.
See also
Root cause analysis
References
Hazard analysis
Safety engineering | Tripod Beta | Engineering | 2,471 |
2,132,175 | https://en.wikipedia.org/wiki/Thorn%20Electrical%20Industries | Thorn Electrical Industries Limited was a British electrical engineering company. It was listed on the London Stock Exchange, but merged with EMI Group to form Thorn EMI in 1979. It was de-merged in 1996 and became a constituent of the FTSE 100 Index, but was acquired by the Japanese Nomura Group only two years later. It is now owned by Terra Firma Capital Partners.
History
Sir Jules Thorn founded the company with his business partner Alfred Deutsch in March 1928 as The Electric Lamp Service Company Ltd. Thorn had worked in England as a travelling salesman for Olso, an Austrian manufacturer of gas mantles. When Olso went bankrupt, Thorn decided to stay in England. Deutsch, an Austrian engineer, visited Thorn in 1928 and was persuaded to stay to help organize the company's production process.
Thorn acquired the Atlas Lamp Works company in 1932 and began making light bulbs in Edmonton, north London. The company grew rapidly to become Thorn Lighting, one of the world's largest producers of lamps, luminaires and lighting components.
The name changed again to Thorn Electrical Industries in November 1936. The company later began to diversify by buying the electronics firm Ferguson Radio Corporation in the late 1950s and Ultra Electronics in 1961.
In 1965, Thorn took over Glover and Main, a gas-appliance manufacturer based locally in Edmonton. Thorn also manufactured television sets in Australia.
The company also owned Thorn Benham which made electrical catering equipment.
It had a joint venture in the 1970s with General Telephone and Electronics of America (GTE) to try to break into the UK telephone equipment market. GTE was later replaced by Ericsson of Sweden, which wanted a foothold in the UK equipment market and eventually bought out Thorn's interest.
The Thorn Group's notable brands over the years included Radio Rentals, DER (both TV rental), Rumbelows (electrical goods), Tricity (cookers and fridges), Kenwood (food mixers), Thorn Kidde (fire protection), TMD (microwave equipment) and Mazda (light bulbs).
Merger with EMI
Thorn merged with the EMI Group in October 1979, to form Thorn EMI.
On 16 August 1996, Thorn EMI shareholders voted in favour of de-merging Thorn. The electronics and rentals divisions were divested as Thorn plc.
Post demerger
Future Rentals, a subsidiary of the Nomura Group, acquired Thorn in 1998. It subsequently passed to Terra Firma Capital Partners which set up the BrightHouse chain. The remainder of the company was sold to a private buyer in June 2007.
Big Brown Box was launched in Australia in 2008 by Thorn, and was later sold to Appliances Online, a subsidiary of Winning Appliances, in 2011. The site was an online retailer of AV equipment, consumer electronics, and appliances.
References
See also
Thorn Lighting
Electronics companies of the United Kingdom
Electronics industry in London
Manufacturing companies based in London
Companies formerly listed on the London Stock Exchange
1928 establishments in England
1998 disestablishments in England
British companies established in 1928
Manufacturing companies established in 1928
Electronics companies established in 1928
British companies disestablished in 1998
Radio manufacturers | Thorn Electrical Industries | Engineering | 636 |
22,627,438 | https://en.wikipedia.org/wiki/Alnespirone | Alnespirone (S-20,499) is a selective 5-HT1A receptor full agonist of the azapirone chemical class. It has antidepressant and anxiolytic effects.
See also
8-OH-DPAT
Azapirone
References
Serotonin receptor agonists
Imides
Resorcinol ethers
Tertiary amines
Azapirones
Chromanes
Lactams
Cyclopentanes
Cyclic ethers
2-Phenoxyethanamines
Propyl compounds
Methoxy compounds
Spiro compounds | Alnespirone | Chemistry | 119 |
23,916,899 | https://en.wikipedia.org/wiki/Order%20isomorphism | In the mathematical field of order theory, an order isomorphism is a special kind of monotone function that constitutes a suitable notion of isomorphism for partially ordered sets (posets). Whenever two posets are order isomorphic, they can be considered to be "essentially the same" in the sense that either of the orders can be obtained from the other just by renaming of elements. Two strictly weaker notions that relate to order isomorphisms are order embeddings and Galois connections.
The idea of isomorphism can be understood for finite orders in terms of Hasse diagrams. Two finite orders are isomorphic exactly when a single Hasse diagram (up to relabeling of its elements) expresses them both, in other words when every Hasse diagram of either can be converted to a Hasse diagram of the other by simply relabeling the vertices.
Definition
Formally, given two posets (S, ≤) and (T, ⪯), an order isomorphism from (S, ≤) to (T, ⪯) is a bijective function f from S to T with the property that, for every u and v in S, u ≤ v if and only if f(u) ⪯ f(v). That is, it is a bijective order-embedding.
It is also possible to define an order isomorphism to be a surjective order-embedding. The two assumptions, that f covers all the elements of T and that it preserves orderings, are enough to ensure that f is also one-to-one: if f(u) = f(v), then (by the assumption that f preserves the order) it would follow that u ≤ v and v ≤ u, implying by the definition of a partial order that u = v.
Yet another characterization of order isomorphisms is that they are exactly the monotone bijections that have a monotone inverse.
An order isomorphism from a partially ordered set to itself is called an order automorphism.
When an additional algebraic structure is imposed on the posets (S, ≤) and (T, ⪯), a function from S to T must satisfy additional properties to be regarded as an isomorphism. For example, given two partially ordered groups (po-groups) G and H, an isomorphism of po-groups from G to H is an order isomorphism that is also a group isomorphism, not merely a bijection that is an order embedding.
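For finite posets the definition can be tested directly. Below is a minimal brute-force Python sketch (illustrative only: it enumerates every bijection, so it is practical only for very small posets) that checks whether two finite posets, each given as a list of elements plus a comparison function, are order isomorphic.

from itertools import permutations

def is_order_isomorphic(P, le_P, Q, le_Q):
    # Search for a bijection f with u le_P v  <=>  f(u) le_Q f(v).
    P, Q = list(P), list(Q)
    if len(P) != len(Q):
        return False
    for image in permutations(Q):
        f = dict(zip(P, image))
        if all(le_P(u, v) == le_Q(f[u], f[v]) for u in P for v in P):
            return True
    return False

# Divisors of 6 under divisibility vs. subsets of {a, b} under inclusion:
divides = lambda x, y: y % x == 0
subset = lambda A, B: A <= B
D6 = [1, 2, 3, 6]
S = [frozenset(), frozenset("a"), frozenset("b"), frozenset("ab")]
print(is_order_isomorphic(D6, divides, S, subset))  # True

# A 4-element chain is not isomorphic to the 2x2 Boolean lattice:
chain_le = lambda x, y: x <= y
print(is_order_isomorphic([0, 1, 2, 3], chain_le, S, subset))  # False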
Examples
The identity function on any partially ordered set is always an order automorphism.
Negation is an order isomorphism from (ℝ, ≤) to (ℝ, ≥) (where ℝ is the set of real numbers and ≤ denotes the usual numerical comparison), since −x ≥ −y if and only if x ≤ y.
The open interval (0, 1) (again, ordered numerically) does not have an order isomorphism to or from the closed interval [0, 1]: the closed interval has a least element, but the open interval does not, and order isomorphisms must preserve the existence of least elements.
By Cantor's isomorphism theorem, every unbounded countable dense linear order is isomorphic to the ordering of the rational numbers. Explicit order isomorphisms between the quadratic algebraic numbers, the rational numbers, and the dyadic rational numbers are provided by Minkowski's question-mark function.
Order types
If f is an order isomorphism, then so is its inverse function.
Also, if f is an order isomorphism from (S, ≤) to (T, ⪯) and g is an order isomorphism from (T, ⪯) to (U, ≼), then the function composition of g and f is itself an order isomorphism, from (S, ≤) to (U, ≼).
Two partially ordered sets are said to be order isomorphic when there exists an order isomorphism from one to the other. Identity functions, function inverses, and compositions of functions correspond, respectively, to the three defining characteristics of an equivalence relation: reflexivity, symmetry, and transitivity. Therefore, order isomorphism is an equivalence relation. The class of partially ordered sets can be partitioned by it into equivalence classes, families of partially ordered sets that are all isomorphic to each other. These equivalence classes are called order types.
See also
Permutation pattern, a permutation that is order-isomorphic to a subsequence of another permutation
Notes
References
Morphisms
Order theory | Order isomorphism | Mathematics | 799 |
35,598,338 | https://en.wikipedia.org/wiki/Neofunctionalization | Neofunctionalization, one of the possible outcomes of functional divergence, occurs when one gene copy, or paralog, takes on a totally new function after a gene duplication event. Neofunctionalization is an adaptive mutation process, meaning that one of the gene copies must mutate to develop a function that was not present in the ancestral gene. In other words, one of the duplicates retains its original function, while the other accumulates molecular changes such that, in time, it can perform a different task.
The process
The process of neofunctionalization begins with a gene duplication event, which is thought to occur as a defense mechanism against the accumulation of deleterious mutations. Following the gene duplication event there are two identical copies of the ancestral gene performing exactly the same function. This redundancy allows one of the copies to take on a new function. In the event that the new function is advantageous, natural selection positively selects for it and the new mutation becomes fixed in the population.
The occurrence of neofunctionalization can most often be attributed to changes in the coding region or changes in the regulatory elements of a gene. It is much more rare to see major changes in protein function, such as subunit structure or substrate and ligand affinity, as a result of neofunctionalization.
Selective constraints
Neofunctionalization is also commonly referred to as "mutation during non-functionality" or "mutation during redundancy". Regardless of whether the mutation arises after the gene becomes non-functional or because of redundant gene copies, the important aspect is that in both scenarios one copy of the duplicated gene is freed from selective constraints and by chance acquires a new function, which is then improved by natural selection. This process is thought to occur very rarely in evolution, for two major reasons. First, functional changes typically require a large number of amino acid changes, which have a low probability of occurring. Second, deleterious mutations occur much more frequently than advantageous mutations in evolution. This makes the likelihood that gene function is lost over time (i.e., pseudogenization) far greater than the likelihood of the emergence of a new gene function.
Walsh showed that the relative probability of neofunctionalization is determined by the selective advantage and the relative rate of advantageous mutations. This follows from his derivation of the relative probability of neofunctionalization to pseudogenization, expressed in terms of ρ, the ratio of the advantageous mutation rate to the null mutation rate, and S, the population selection parameter 4Nes (Ne: effective population size; s: selection intensity).
Classical model
In 1936, Muller originally proposed neofunctionalization as a possible outcome of a gene duplication event. In 1970, Ohno suggested that neofunctionalization was the only evolutionary mechanism that gave rise to new gene functions in a population. He also believed that neofunctionalization was the only alternative to pseudogenization. Ohta (1987) was among the first to suggest that other mechanisms may exist for the preservation of duplicated genes in the population. Today, subfunctionalization is a widely accepted alternative fixation process for gene duplicates in the population and is currently the only other possible outcome of functional divergence.
Neosubfunctionalization
Neosubfunctionalization occurs when neofunctionalization is the result of subfunctionalization. In other words, once a gene duplication event occurs, forming paralogs that subfunctionalize after an evolutionary period, one gene copy continues on this evolutionary journey and accumulates mutations that give rise to a new function. Some believe that neofunctionalization is the end stage for all subfunctionalized genes. For instance, according to Rastogi and Liberles, "neofunctionalization is the terminal fate of all duplicate gene copies retained in the genome and subfunctionalization merely exists as a transient state to preserve the duplicate gene copy." The results of their study become more pronounced as population size increases.
Examples
The evolution of the antifreeze protein in the Antarctic zoarcid fish Lycodichthys dearborni provides a prime example of neofunctionalization after gene duplication. In this species, the type III antifreeze protein gene (AFPIII) diverged from a paralogous copy of the sialic acid synthase (SAS) gene. The ancestral SAS gene was found to have both sialic acid synthase and rudimentary ice-binding functionalities. After duplication, one of the paralogs began to accumulate mutations that led to the replacement of the SAS domains of the gene, allowing further development and optimization of the antifreeze functionality. The new gene is now capable of noncolligative freezing-point depression, and thus is neofunctionalized. This specialization allows Antarctic zoarcid fish to survive in the frigid temperatures of the Antarctic seas.
Another example concerns the light-sensitive opsin proteins in vertebrate eyes that allow them to see different wavelengths of light. Extant vertebrates typically have four cone opsin classes (LWS, SWS1, SWS2, and Rh2) as well as one rod opsin class (rhodopsin, Rh1), all of which were inherited from early vertebrate ancestors. These five classes of vertebrate visual opsins emerged through a series of gene duplications beginning with LWS and ending with Rh1.
Model limitations
Limitations exist in neofunctionalization as a model for functional divergence primarily because:
the amount of nucleotide changes giving rise to a new function must be very minimal; making the probability for pseudogenization much higher than neofunctionalization after a gene duplication event.
After a gene duplication event both copies may be subjected to selective pressure equivalent to that constraining the ancestral gene; meaning that neither copy is available for neofunctionalization.
In many cases positive Darwinian selection presents a more parsimonious explanation for the divergence of multigene families.
See also
Subfunctionalization
References
Genetics | Neofunctionalization | Biology | 1,265 |
3,453,760 | https://en.wikipedia.org/wiki/Cable%20Internet%20access | In telecommunications, cable Internet access, shortened to cable Internet, is a form of broadband internet access which uses the same infrastructure as cable television. Like digital subscriber line (DSL) and fiber to the premises, cable Internet access provides network edge connectivity (last mile access) from the Internet service provider to an end user. It is integrated into the cable television infrastructure analogously to DSL, which uses the existing telephone network. Cable TV networks and telecommunications networks are the two predominant forms of residential Internet access. Recently, both have seen increased competition from fiber deployments, wireless, and satellite internet access.
Hardware and bit rates
Broadband cable Internet access requires a cable modem at the customer's premises and a cable modem termination system (CMTS) at a cable operator facility, typically a cable television headend. The two are connected via coaxial cable to a hybrid fibre-coaxial (HFC) network. While access networks are referred to as last-mile technologies, cable Internet systems can typically operate where the distance between the modem and the termination system is up to . If the HFC network is large, the cable modem termination system can be grouped into hubs for efficient management. Several standards have been used for cable internet, but the most common is Data Over Cable Service Interface Specification (DOCSIS).
A cable modem at the customer is connected via coaxial cable to an optical node, and thus into an HFC network. An optical node serves many modems as the modems are connected with coaxial cable to a coaxial cable "trunk" via distribution "taps" on the trunk, which then connects to the node, possibly using amplifiers along the trunk. The optical node converts the radio frequency (RF) signal in the coaxial cable trunk into light pulses to be sent through optical fibers in the HFC network. At the other end of the network, an optics platform or headend platform converts the light pulses into RF signals in coaxial cables again using transmitter and receiver modules, and the cable modem termination system (CMTS) connects to these coaxial cables. An example of an optics platform is the Arris CH3000.
There are two coaxial cables at the CMTS for each node: one for the downstream (download speed signal), and the other for the upstream (upload speed signal). The CMTS then connects to the ISP's IP (Internet Protocol) network.
Downstream, the direction toward the user, bit rates can be as high as 1 Gbit/s. Upstream traffic, originating at the user, ranges from 384 kbit/s to more than 50 Mbit/s, although maximum effective range seems to be unknown. One downstream channel can handle hundreds of cable modems. As the system grows, the CMTS can be upgraded with more downstream and upstream ports, and grouped into hub CMTSs for efficient management.
Most DOCSIS cable modems restrict upload and download rates, with customizable limits. These limits are set in configuration files which are downloaded to the modem using the Trivial File Transfer Protocol, when the modem first establishes a connection to the provider's equipment. Some users have attempted to override the bandwidth cap and gain access to the full bandwidth of the system by uploading their own configuration file to the cable modem, a process called uncapping.
Shared bandwidth
In most residential broadband technologies, such as cable Internet, DSL, satellite internet, or wireless broadband, a population of users share the available bandwidth. Some technologies share only their core network, while some including cable internet and passive optical network (PON) also share the access network. This arrangement allows the network operator to take advantage of statistical multiplexing, a bandwidth sharing technique which is employed to distribute bandwidth fairly, in order to provide an adequate level of service at an acceptable price. However, the operator has to monitor usage patterns and scale the network appropriately, to ensure that customers receive adequate service even during peak-usage times. If the network operator does not provide enough bandwidth for a particular neighborhood, the connection would become saturated and speeds would drop if many people are using the service at the same time, or drop out completely. Operators have been known to use a bandwidth cap, or other bandwidth throttling technique; users' download speed is limited during peak times, if they have downloaded a large amount of data that day.
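The effect of this sharing can be illustrated with back-of-envelope arithmetic. In the Python sketch below, all figures (node capacity, subscriber count, plan speed, activity fractions) are hypothetical values chosen for illustration, not DOCSIS constants.

# Statistical multiplexing on a shared cable segment, with made-up numbers.
node_capacity_mbps = 1000   # downstream capacity of one fiber node
subscribers = 400           # modems sharing that node
plan_speed_mbps = 100       # advertised per-subscriber speed

# Oversubscription ratio: total sold bandwidth vs. physical capacity.
oversubscription = subscribers * plan_speed_mbps / node_capacity_mbps
print(f"oversubscription ratio: {oversubscription:.0f}:1")  # 40:1

# Per-user throughput when a given fraction of subscribers is active at once.
for active_fraction in (0.05, 0.25, 0.50):
    active = max(1, int(subscribers * active_fraction))
    share = node_capacity_mbps / active
    # A user never exceeds the plan speed, even when the node is idle.
    print(f"{active:3d} active users -> {min(share, plan_speed_mbps):6.1f} Mbit/s each")

With these numbers, 20 simultaneously active users each see 50 Mbit/s, but at 50% activity the per-user share drops to 5 Mbit/s, which is the saturation effect described above.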
See also
Cable modem
Digital cable
Internet service provider
Network service provider
Internet access
Triple play (telecommunications) - single coaxial cable connection for internet, TV and telephone service
References
Internet access
Digital cable | Cable Internet access | Technology | 942 |
72,198,733 | https://en.wikipedia.org/wiki/Bishop%27s%20graph | In mathematics, a bishop's graph is a graph that represents all legal moves of the chess piece the bishop on a chessboard. Each vertex represents a square on the chessboard and each edge represents a legal move of the bishop; that is, there is an edge between two vertices (squares) if they occupy a common diagonal. When the chessboard has dimensions m × n, the induced graph is called the m × n bishop's graph.
Properties
The fact that the chessboard has squares of two colors, say red and black, such that squares that are horizontally or vertically adjacent have opposite colors, implies that the bishop's graph has two connected components, whose vertex sets are the red and the black squares, respectively. The reason is that the bishop's diagonal moves do not allow it to change colors, but by one or more moves a bishop can get from any square to any other of the same color. The two components are isomorphic if the board has a side of even length, but not if both sides are odd.
A component of the bishop's graph can be treated as a rook's graph on a diamond if the original board is square and has sides of odd length, because if the red squares (say) are turned 45 degrees, the bishop's moves become horizontal and vertical, just like those of the rook.
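Both observations are easy to verify computationally. The following Python sketch (illustrative only) builds the bishop's graph of an 8 × 8 board, joining any two squares that share a diagonal, and confirms that the graph splits into two equal connected components.

from itertools import combinations

def bishops_graph(m, n):
    # Adjacency sets: vertices are squares, edges join squares on a common diagonal.
    squares = [(r, c) for r in range(m) for c in range(n)]
    adj = {s: set() for s in squares}
    for (r1, c1), (r2, c2) in combinations(squares, 2):
        if abs(r1 - r2) == abs(c1 - c2):  # same diagonal
            adj[(r1, c1)].add((r2, c2))
            adj[(r2, c2)].add((r1, c1))
    return adj

def components(adj):
    # Depth-first search over the adjacency structure.
    seen, comps = set(), []
    for v in adj:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

comps = components(bishops_graph(8, 8))
print(len(comps))                     # 2: the light squares and the dark squares
print(sorted(len(c) for c in comps))  # [32, 32] when a side has even length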
Domination
A square is said to be attacked by a bishop if the bishop can get to that square in exactly one move. A dominating set is an arrangement of bishops such that every square is attacked or occupied by one of those bishops. An independent dominating set is a dominating set in which no bishop attacks any other. The minimum number of bishops needed to dominate a square board of side n is exactly n, and this is also the smallest number of bishops that can form an independent dominating set.
By contrast, a total domination set, which is a dominating set for which every square, including those occupied by bishops, is attacked by one of the bishops, requires more bishops; on the square board of side n ≥ 3, the least size of a total dominating set is about 1/3 larger than a minimum dominating set.
References
Graph theory
Mathematical chess problems
Chess-related lists | Bishop's graph | Mathematics | 443 |
67,140,927 | https://en.wikipedia.org/wiki/Pollinator-mediated%20selection | Pollinator-mediated selection is an evolutionary process occurring in flowering plants, in which the foraging behavior of pollinators differentially selects for certain floral traits. Flowering plants are a diverse group of plants that produce seeds; their seeds differ from those of gymnosperms in that they are enclosed within a fruit. These plants display a wide range of diversity in the phenotypic characteristics of their flowers, which attract a variety of pollinators that participate in biotic interactions with the plant. Since many plants rely on pollen vectors, their interactions with those vectors influence floral traits and also favor efficiency, as many vectors are searching for floral rewards like pollen and nectar. Examples of pollinator-mediated selected traits include the size, shape, color and odor of flowers, corolla tube length and width, size of inflorescence, floral rewards and their amounts, nectar guides, and phenology. Since these types of traits are likely to be involved in attracting pollinators, they may very well be the result of selection by the pollinators themselves.
Having a floral display that either attracts a variety of pollinators or is efficient in the exchanges that occur during pollination can have advantages for the reproductive success of plants. Pollinator behavior is therefore important to understand in relation to the evolution of flowering plants, and in some cases it is thought to lead to specialized pollination syndromes, in which floral traits have co-evolved with pollinators as a direct response to the selection exerted by these pollen vectors. However, many flowering plants do not display morphology that excludes all pollinators except the one they co-evolved with. The most effective pollinator principle posits that floral traits reflect adaptation to the pollinator that is most efficient at transferring pollen. Selection might actually favor some degree of generalization: flowers can retain particular traits that allow them to adapt to a certain type of pollinator, but will ultimately be molded by the pollinators that are the most effective and that visit most frequently. This leads to shifts in pollination syndromes, and to some genera having a high diversity of pollination syndromes among species, suggesting that pollinators are a primary selective force driving diversity and speciation.
Pollinator-mediated selection requires isolation and therefore cannot function in sympatry. Floral isolation is a consequence of pollinator behavior that reduces inter-lineage pollen transfer, which reduces gene flow and increases the possibility for a transition to different syndromes. Isolation with no gene flow between populations allows for the development of distinct species, thus speciation is a result of reproductive isolation and can be driven by pollinator-mediated selection.
See also
Fertilisation of Orchids (1862)
Pollination
Pollination syndrome
Floral biology
Flower constancy
References
Pollination
Evolutionary biology
Selection | Pollinator-mediated selection | Biology | 574 |
38,088,227 | https://en.wikipedia.org/wiki/Softlab | Softlab GmbH was a software development and information technology consulting company who developed and deployed a software application called Maestro I, which was the first integrated development environment in the history of computing. Founded in Munich, Germany, in 1971, Softlab became a part of the BMW Group in 1992. In 2008, BMW merged Softlab and some subsidiary companies of Softlab Group into a new company named Cirquent.
See also
Christiane Floyd
Notes
Companies based in Munich
Defunct companies of Germany
Software companies of Germany | Softlab | Technology | 101 |
26,662,554 | https://en.wikipedia.org/wiki/Aureole%20effect | The aureole effect or water aureole is an optical phenomenon similar to Heiligenschein, creating sparkling light and dark rays radiating from the shadow of the viewer's head. This effect is seen only over a rippling water surface. The waves act as lenses to focus and defocus sunlight: focused sunlight produces the lighter rays, while defocused sunlight produces the darker rays. Suspended particles in the water help make the aureole effect more pronounced. The effect extends a greater angular distance from the viewer's shadow when the viewer is higher above the water, and can sometimes be seen from a plane.
Although the focused (light) ray cones are actually more or less parallel to each other, the rays from the aureole effect appear to be radiating from the shadow of the viewer’s head due to perspective effects. The viewer's line of sight is parallel and lies within the cones, so from the viewer's perspective the rays seem to be radiating from the antisolar point, within the viewer's shadow.
As in similar antisolar optical effects (such as a glory or Heiligenschein), each observer will see an aureole effect radiating only from their own head’s shadow. Similarly, if a photographer holds their camera at arm's length, the aureole effect appearing in the picture will be seen radiating from the shadow of the camera, although the photographer would still see it around their head's shadow while taking the picture. This happens because the aureole effect always appears directly opposite the sun, centered at the antisolar point. The antisolar point itself is located within the shadow of the viewer, whatever this is: the eyes of the viewer or the camera's lens. As a matter of fact, when aureole effects are photographed from a plane, it is possible to tell where the photographer was seated.
See also
Gegenschein
Retroreflection
Subparhelic circle
Sylvanshine
References
Atmospheric optical phenomena | Aureole effect | Physics | 412 |
45,196,262 | https://en.wikipedia.org/wiki/Carbonyl%20allylation | In organic chemistry, carbonyl allylation describes methods for adding an allyl anion to an aldehyde or ketone to produce a homoallylic alcohol. The carbonyl allylation was first reported in 1876 by Alexander Zaitsev and employed an allylzinc reagent.
Enantioselective versions
In 1978, Hoffmann reported the first asymmetric carbonyl allylation using a chiral allylmetal reagent, an allylborane derived from camphor. Such methods utilize preformed allylmetal reagents; the approach is well developed for allylboranes.
As illustrated by the Keck allylation, catalytic enantioselective additions of achiral allylmetal reagents to carbonyl compounds are also possible, using organostannane additions.
Allylic boronate and allylborane reagents have also been developed for enantioselective addition to carbonyls; in this class of reactions, the allylic boron reagent confers stereochemical control.
Catalysis
In 1991, Yamamoto disclosed the first catalytic enantioselective method for carbonyl allylation, which employed a chiral boron Lewis acid catalyst in combination with allyltrimethylsilane. Numerous other catalytic enantioselective methods for carbonyl allylation followed. Catalytic variants of the Nozaki-Hiyama-Kishi reaction represent an alternative method for asymmetric carbonyl allylation, but stoichiometric metallic reductants are required.
Whereas the aforementioned asymmetric carbonyl allylations rely on preformed allylmetal reagents, the Krische allylation exploits allyl acetate for enantioselective carbonyl allylation. Selected methods for asymmetric carbonyl allylation are summarized below.
Use in total synthesis
Carbonyl allylation has been employed in the synthesis of polyketide natural products and other oxygenated molecules with a contiguous array of stereocenters. For example, allylstannanation of a threose-derived aldehyde affords the macrolide antascomicin B, which structurally resembles FK506 and rapamycin, and is a potent binder of FKBP12. The Krische allylation was used to prepare the polyketide (+)-SCH 351448, a macrodiolide ionophore bearing 14 stereogenic centers.
Older primary literature
References
Organometallic chemistry
Carbon-carbon bond forming reactions | Carbonyl allylation | Chemistry | 517 |
1,860,707 | https://en.wikipedia.org/wiki/Solomon%20W.%20Golomb | Solomon Wolf Golomb (May 30, 1932 – May 1, 2016) was an American mathematician, engineer, and professor of electrical engineering at the University of Southern California, best known for his works on mathematical games. Most notably, he invented Cheskers (a hybrid between chess and checkers) in 1948. He also fully described polyominoes in 1953. He specialized in problems of combinatorial analysis, number theory, coding theory, and communications. Pentomino board games, based on his work, would go on to inspire Tetris.
Achievements
Golomb, a graduate of the Baltimore City College high school, received his bachelor's degree from Johns Hopkins University and his master's degree and doctorate in mathematics from Harvard University, the latter in 1957 with a dissertation on "Problems in the Distribution of the Prime Numbers".
While working at the Glenn L. Martin Company he became interested in communications theory and began his work on shift register sequences. He spent his Fulbright year at the University of Oslo and then joined the Jet Propulsion Laboratory at Caltech, where he researched military and space communications. He joined the faculty of USC in 1963 and was awarded full tenure two years later.
Golomb pioneered the identification of the characteristics and merits of maximum length shift register sequences, also known as pseudorandom or pseudonoise sequences, which have extensive military, industrial and consumer applications. Today, millions of cordless and cellular phones employ pseudorandom direct-sequence spread spectrum implemented with shift register sequences. His efforts made USC a center for communications research.
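To illustrate the idea, a maximum length (pseudorandom) sequence can be generated with a linear-feedback shift register whose feedback taps correspond to a primitive polynomial. The Python sketch below uses the primitive polynomial x^4 + x^3 + 1 for a 4-bit register, which cycles through all 2^4 − 1 = 15 nonzero states; the tap choice follows standard LFSR tables, and the code is illustrative rather than drawn from Golomb's own work.

def lfsr(seed, taps, length):
    # Fibonacci LFSR over GF(2). `seed` is a nonzero list of bits;
    # `taps` are 1-based register positions XORed to form the feedback.
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])          # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]      # shift, inserting the feedback bit
    return out

# Taps (4, 3) realize x^4 + x^3 + 1, giving period 2^4 - 1 = 15.
seq = lfsr([1, 0, 0, 0], taps=(4, 3), length=15)
print(seq)
print(sum(seq))  # 8 ones and 7 zeros: the balance property of m-sequences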
Golomb was the inventor of Golomb coding, a form of entropy encoding. Golomb rulers, used in astronomy and in data encryption, are also named for him, as is one of the main generation techniques of Costas arrays, the Lempel-Golomb generation method.
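Golomb coding itself is simple to state: a nonnegative integer n is split by a parameter m into a quotient q = n div m, sent in unary, and a remainder r = n mod m, sent in truncated binary; when m is a power of two the scheme reduces to a Rice code. Below is a minimal illustrative Python encoder (a sketch, not a production codec).

def golomb_encode(n, m):
    # Unary-coded quotient followed by a truncated-binary remainder.
    q, r = divmod(n, m)
    code = "1" * q + "0"        # unary part: q ones and a terminating zero
    if m == 1:
        return code             # no remainder bits needed
    b = m.bit_length()          # remainders use b-1 or b bits
    k = (1 << b) - m            # count of short (b-1 bit) codewords
    if r < k:
        code += format(r, f"0{b - 1}b")
    else:
        code += format(r + k, f"0{b}b")
    return code

for n in range(6):
    print(n, golomb_encode(n, 3))
# Prints the codewords 00, 010, 011, 100, 1010, 1011 for n = 0..5.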
He was a regular columnist, writing Golomb's Puzzle Column in the IEEE Information Society Newsletter. He was also a frequent contributor to Scientific American's Mathematical Games column (the column did much to publicize his discoveries about polyominoes and pentominoes) and a frequent participant in Gathering 4 Gardner conferences. Among his contributions to recreational mathematics are rep-tiles. He also contributed a puzzle to each issue of the Johns Hopkins Magazine, a monthly publication of his undergraduate alma mater, for a column called "Golomb's Gambits", and was a frequent contributor to Word Ways: The Journal of Recreational Linguistics.
Awards
Golomb was a member of both the National Academy of Engineering and the National Academy of Sciences.
In 1985, he received the Shannon Award of the Information Theory Society of the IEEE.
In 1992, he received the medal of the U.S. National Security Agency for his research, and he has also been the recipient of the Lomonosov Medal of the Russian Academy of Science and the Kapitsa Medal of the Russian Academy of Natural Sciences.
In 2000, he was awarded the IEEE Richard W. Hamming Medal for his exceptional contributions to information sciences and systems. He was singled out as a major figure of coding and information theory for over four decades, specifically for his ability to apply advanced mathematics to problems in digital communications.
In 2012, he became a fellow of the American Mathematical Society. That same year, it was announced that he had been selected to receive the National Medal of Science. In 2014, he was elected as a fellow of the Society for Industrial and Applied Mathematics "for contributions to coding theory, data encryption, communications, and mathematical games."
In 2013, he was presented with the 2011 National Medal of Science.
In 2016, he was awarded the Benjamin Franklin Medal in Electrical Engineering "for pioneering work in space communications and the design of digital spread spectrum signals, transmissions that provide security, interference suppression, and precise location for cryptography; missile guidance; defense, space, and cellular communications; radar; sonar; and GPS."
Selected books
This book contains some previously hard-to-find works of Solomon Golomb.
See also
Golomb graph
Golomb sequence
Polyomino
References
External links
Biography of Dr. Golomb at the USC Electrical Engineering Department's website
1932 births
2016 deaths
20th-century American mathematicians
21st-century American mathematicians
Combinatorial game theorists
Recreational mathematicians
Mathematics popularizers
Harvard Graduate School of Arts and Sciences alumni
American information theorists
Johns Hopkins University alumni
American number theorists
University of Southern California faculty
Baltimore City College alumni
Tetris
Chess variant inventors
National Medal of Science laureates
Fellows of the American Mathematical Society
Members of the United States National Academy of Engineering
Members of the United States National Academy of Sciences
20th-century American Jews
Fellows of the Society for Industrial and Applied Mathematics
Burials at Mount Sinai Memorial Park Cemetery
Mathematicians from Maryland
21st-century American Jews
Benjamin Franklin Medal (Franklin Institute) laureates | Solomon W. Golomb | Mathematics | 979 |
51,445,651 | https://en.wikipedia.org/wiki/Semantic%20triple | A semantic triple, or RDF triple or simply triple, is the atomic data entity in the Resource Description Framework (RDF) data model. As its name indicates, a triple is a sequence of three entities that codifies a statement about semantic data in the form of subject–predicate–object expressions (e.g., "Bob is 35", or "Bob knows John").
Subject, predicate and object
This format enables knowledge to be represented in a machine-readable way. Particularly, every part of an RDF triple is individually addressable via unique URIs—for example, the statement "Bob knows John" might be represented in RDF as:
http://example.name#BobSmith12 http://xmlns.com/foaf/0.1/knows http://example.name#JohnDoe34.
Given this precise representation, semantic data can be unambiguously queried and reasoned about.
The components of a triple, such as the statement "The sky has the color blue", consist of a subject ("the sky"), a predicate ("has the color"), and an object ("blue"). This is similar to the classical notation of an entity–attribute–value model within object-oriented design, where this example would be expressed as an entity (sky), an attribute (color) and a value (blue).
From this basic structure, triples can be composed into more complex models, by using triples as objects or subjects of other triples—for example, Mike → said → (triples → can be → objects).
Given their particular, consistent structure, a collection of triples is often stored in purpose-built databases called triplestores.
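As an illustration, the "Bob knows John" example above can be constructed and queried in Python with the rdflib library (this assumes rdflib is installed; the example.name URIs are the hypothetical ones used in the text).

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

EX = Namespace("http://example.name#")

g = Graph()
# The "Bob knows John" triple, as subject-predicate-object URIs:
g.add((EX.BobSmith12, FOAF.knows, EX.JohnDoe34))
# A literal-valued triple ("Bob is 35"):
g.add((EX.BobSmith12, FOAF.age, Literal(35)))

# Because every part is addressable, the data can be queried unambiguously,
# for example with SPARQL:
q = """
SELECT ?who WHERE { ?who <http://xmlns.com/foaf/0.1/knows> ?someone . }
"""
for row in g.query(q):
    print(row.who)  # http://example.name#BobSmith12

print(g.serialize(format="turtle"))  # rdflib 6+ returns a string here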
Difference to relational databases
A relational database is the classical form of information storage, working with tables that consist of rows. The query language SQL is able to retrieve information from such a database. In contrast, RDF triple storage works with logical predicates: no tables or rows are needed, and the information can be stored in a plain text file. An RDF triple store can be converted into an SQL database and vice versa. If the knowledge is highly unstructured and dedicated tables are not flexible enough, semantic triples are used instead of classic relational storage.
In contrast to a traditional SQL database, an RDF triple store is not created with a table editor. The preferred tool is a knowledge editor, for example Protégé. Protégé looks similar to an object-oriented modeling application used for software engineering, but it is focused on natural-language information. The RDF triples are aggregated into a knowledge base, which allows external parsers to run requests. Possible applications include the creation of non-player characters within video games.
Limitations
One concern about triple storage is its lack of database scalability. This problem is especially pertinent when millions of triples are stored and retrieved in a database: the seek time is longer than for classical SQL-based databases.
A more complex issue is a knowledge model's inability to predict future states. Even if all the domain knowledge is available as logical predicates, the model fails in answering what-if questions. For example, suppose in the RDF format a room with a robot and table is described. The robot knows what the location of the table is, is aware of the distance to the table and knows also that a table is a type of furniture. Before the robot can plan its next action, it needs temporal reasoning capabilities. Thus, the knowledge model should answer hypothetical questions in advance before an action is taken.
See also
Named graphs and quads, an extension to semantic triples to also include a context node as a fourth element.
Graph database
Link relation
References
External links
Semantic Web
Data modeling
Resource Description Framework
Knowledge representation | Semantic triple | Engineering | 789 |
11,569,477 | https://en.wikipedia.org/wiki/Stereum%20sanguinolentum | Stereum sanguinolentum is a species of fungus in the Stereaceae family. A plant pathogen, it causes red heart rot, a red discoloration on conifers, particularly spruces and Douglas-firs. Fruit bodies, which are produced either on dead wood or on dead branches of living trees, form a thin leathery crust on the wood surface. Fresh fruit bodies will bleed a red-colored liquid if injured, reflected in the common names bleeding Stereum or the bleeding conifer parchment. It can be the host of the parasitic jelly fungus Naematelia encephala (synonym Tremella encephala).
Taxonomy
The species was first described scientifically by Albertini and Schweinitz in 1805 as Thelephora sanguinolenta. Other genera to which it has been transferred throughout its taxonomical history include Phlebomorpha, Auricularia, Merulius, and Haematostereum. The fungus is commonly known as the "bleeding Stereum" or the "bleeding conifer parchment".
Description
The fruit body of Stereum sanguinolentum manifests itself as a thin (typically less than 1 mm thick) leathery crust on the surface of the host wood. Often, the upper edge is curled to form a narrow shelf (usually less than 10 mm thick). When present, these shelves can be fused to or overlap neighboring shelves. The surface of the fruit body consists of a layer of fine felt-like hairs, sometimes pressed flat against the surface. The color ranges from beige to buff to dark brown in mature specimens; the margins are lighter-colored. Fresh fruit bodies that are injured exude a red liquid, or will bruise a red color if touched. The fruit bodies dry to a greyish-brown color. The spores are ellipsoid to cylindrical, amyloid, and typically measure 7–10 by 3–4.5 μm.
Stereum sanguinolentum can be parasitized by the jelly fungus Naematelia encephala (synonym Tremella encephala).
Symptoms
Stereum sanguinolentum is a basidiomycete that causes both brown rot and white rot on conifers. The primary symptom is the red streaking discoloration. It is a white-rot basidiomycete that causes extensive decay resulting from wounds, logging extraction, bark peeling, or branch pruning. Stereum sanguinolentum forms territorial clones while spreading by vegetative growth between spatially separated resource units; Armillaria spp., Heterobasidion annosum, Phellinus weirii, Inonotus tomentosus, and Phellinus noxius all work with Stereum sanguinolentum to attack the host. These pathogens combine to form territorial clones that can cover up to several hectares and survive for hundreds of years while infecting trees.
White rot causes a gradual decrease in cellulose as the decay continues to affect the tree. The white rot fungi consume the segments of cellulose that are released during the decay as quickly as they are produced. In white rot, which is also known as “wound rot of spruce”, the spores create open wounds on the host.
In brown rot, the cellulose is degraded. The rapid decrease in cellulose chain length implies that the catalyst that facilitates the depolymerization readily gains access to cellulose chains.
Life cycle
Stereum sanguinolentum is an amphithallic basidiomycete. Monospore intrabasidiome pairings are always compatible when reproducing, making it easy for the fungus to spread. The monobasidiospore and trama isolates are plurinucleate, bear clamp connections, and are often dikaryotic. Basidiospores are heterodikaryotic, indicating that they are amphithallic. The mycelia that spread the fungus grow from the heterodikaryotic spores that originate from the basidiospores. Recombination results either from mating between homokaryons originating from the monokaryotic basidiospores, or from the parasexual process.
Stereum sanguinolentum is an extremely fast colonizer of newly dead or wounded conifer sapwood. Being amphithallic allows this cycle to have selective advantages upon such organisms by enhancing survival and dispersal.
Dispersal occurs by basidiospores only and the most common dispersal mechanism is wind. The wind-blown basidiospores are produced parthenogenically (i.e., reproduction from an ovum without fertilization). In white rot, the infection occurs from spores landing near the wounds or the transmission of mycelial fragments by wood sap. The rot extension spreads extremely fast in the first few years after infection but spreads even quicker if the infected injury is at the root collar rather than at the stem.
Habitat, distribution, and ecology
The fungus causes a brown heart rot, resulting in wood that is a light brown to red-brown color, and dry, with a stringy texture. A cross-section of infected wood reveals a circular infection around the center of the log. It enters plants through open wounds caused by mechanical damage or by grazing wildlife. Fragments of mycelia can be spread by wood wasps (genus Sirex). The rot spreads up to per year. It has also been recorded on balsam fir, Douglas fir, and western hemlock. The fungus is geographically widespread, and has been recorded from North America, Europe, east Africa, New Zealand, and Australia.
Management
The halos caused by Stereum sanguinolentum can be prevented by taking care during the harvesting of trees to assure that no injuries occur. If injuries occur, the wounds can be treated with wound dressing.
References
External links
Fungal conifer pathogens and diseases
Fungi described in 1805
Fungi of Africa
Fungi of Australia
Fungi of Europe
Fungi of New Zealand
Fungi of North America
Stereaceae
Taxa named by Johannes Baptista von Albertini
Taxa named by Lewis David de Schweinitz
Fungus species | Stereum sanguinolentum | Biology | 1,280 |
2,083,523 | https://en.wikipedia.org/wiki/Nosgoth | Nosgoth was a free-to-play multiplayer action game, developed by Psyonix and published by Square Enix for Microsoft Windows through digital distribution. It was a spin-off from the Legacy of Kain series of action-adventure games, and took place in its eponymous fictional universe. Nosgoth employed a player versus player system in which each match consisted of two rounds. Teams were composed of characters assigned to one of two races: vampires, designed around hack and slash combat; and humans, whose gameplay was styled after third-person shooters. Between rounds, teams would switch to control the opposing race, and the team which accumulated the most points by fighting their counterparts won the match.
Initially announced in June 2013 following internet leaks, Nosgoth was the first Legacy of Kain-associated game to debut in almost ten years, preceded by 2003's Legacy of Kain: Defiance. Though once intended for release as part of a single-player project, Legacy of Kain: Dead Sun, it was reconceptualized and continued development as a standalone title following Dead Sun's cancellation. When officially revealed in September 2013, Nosgoth attracted negative reception for its conceptual departure from the traditional single-player and story-driven Legacy of Kain formula. Initially beginning its closed alpha and opening up its servers to registered players in late 2013, the game's open beta began in January 2015 and ended in May 2016.
On April 8, 2016, Square Enix Europe announced that Nosgoth's servers would shut down on May 31, 2016. Nosgoth's official forums also shut down on June 14, 2016.
Gameplay
Nosgoth was an asymmetrical online multiplayer action game. Gameplay focuses on melee Vampire characters against the ranged Human characters, and consists of two modes: Deathmatch and Flashpoint. In Deathmatch, the Vampires and Humans fight against each other to get a higher kill-count than the opposing team. If the kill-count reaches a threshold, the team that reached it wins; if time runs out, the team with the higher kill-count wins. Flashpoint is an offshoot of traditional Domination modes that tasks Humans with capturing a series of control points on the map. Two points are initially available, chosen at random, and when one point is captured, another appears. Humans have a limited amount of time to complete their mission, but each successful capture adds time to the clock, extending the match. Vampires win if they can prevent captures long enough for the match timer to run out.
Both team types use a character class-based system, meaning there are different Vampires and Humans with varying roles. The gameplay is fast-paced and intense, as Vampires rely on surprise attacks. Both teams possess abilities, which either have cooldown periods when activated or are passive/innately active. Vampires have the ability to fly and climb buildings, while Humans are grounded and lack this trait. Humans rely on weaponry and projectile-based ranged attacks, such as crossbows. They have ammunition that can be depleted, which can then be refilled at ammunition stations. Vampires, being melee fighters, attack in very different ways, such as pouncing on top of a Human and clawing at them, leaving them unable to act while knocked down, or grabbing a Human with a flying Vampire and dropping them into a group of Vampires.
Plot
After the execution of Raziel in Legacy of Kain: Soul Reaver, Kain used the Chronoplast - a time-streaming device - to advance to his return, seemingly abandoning his empire. With the resulting power vacuum, the vampire clans of the empire fell against each other in a bloody civil war that nearly wiped out Raziel's clan. With the vampires occupied by their own internal struggles, humanity slowly became resurgent as escapees and free humans united to rebuild their shattered civilization.
Reconstructing cities and defences, and building themselves into a legitimate fighting force, the humans came to occupy a swath of southern Nosgoth. Attacks on key territories under vampire control eventually alerted the vampires to the danger posed by their former slaves and prompted the co-operation of the vampire clans in the face of the new threat - and a new war began for control of land of Nosgoth.
Development
After the release of Legacy of Kain: Defiance in 2003, it emerged that a sixth Legacy of Kain game had entered development at Ritual Entertainment in 2004, but was canceled before being publicly revealed. In 2013, speculation concerning the existence of another new Legacy of Kain project began with the discovery that series publisher Square Enix had registered the domain name warfornosgoth.com, thereby making reference to Nosgoth, the fictional setting of the series. The Official Xbox Magazine subsequently revealed evidence that Passion Pictures artist Richard Buxton had worked on a Legacy of Kain "animation pitch" in 2011; Buxton responded with the comment, "I'm not allowed to talk about that". In May 2013, an Advanced Micro Devices patch log and a profile on the Steam database were found to contain further references to an upcoming title named "War for Nosgoth" or "Nosgoth". Additionally, the LinkedIn profile of video game composer Kevin Riepl referred to a project named "Nosgoth", citing Square Enix as its publisher and Psyonix Games as its developer.
In early June 2013, Square Enix London Studios community manager George Kelion responded to the leaks and ensuing fan speculation by confirming the existence and ongoing development of the Nosgoth project to VG247 and Eurogamer, saying "it feels weird to have a bunch of info out there and we don't want the community to get the wrong idea". He clarified that Nosgoth was set in the same universe as previous Legacy of Kain titles, but that it was "not a traditional or even single-player LoK experience". and that "it's very much on a separate branch to both the Soul Reaver and Blood Omen series". Kelion further stated that long-time Legacy of Kain developers Crystal Dynamics were not involved in the project, and that Nosgoth would be officially announced, and properly revealed, at a later date, beyond the Electronic Entertainment Expo 2013.
In late June 2013, Legacy of Kain fan Mama Robotnik posted evidence from sources at Climax Studios, that Nosgoth had originated as a primarily single-player, story-based game, named Legacy of Kain: Dead Sun. The project had allegedly been under development between 2009 and 2012 by Climax Studios with supervision from Crystal Dynamics, and was intended as a PlayStation 4 launch title. Its multiplayer mode, developed by Psyonix Games, reportedly became Nosgoth after the single-player component of the game was canceled by Square Enix in 2012, who projected that it would fail to meet sales expectations. Kelion responded by confirming that Dead Sun existed and that Nosgoth was intended to be its multiplayer companion game, but said Nosgoth is a separate project which has "grown in size and scope since its initial conception", featuring "different mechanics, characters, levels and gameplay". He rejected the idea that it is "the multiplayer component of Dead Sun pulled out and fleshed out".
According to Kelion, Nosgoth is not a reboot, a mobile game, an open world game, a massively multiplayer online role-playing game (MMORPG), or a real-time strategy game, nor is it "best described as a MOBA game" (multiplayer online battle arena). It derives influences from the second game in the series, Legacy of Kain: Soul Reaver, and is played from a third-person viewpoint. It was set to be available on Microsoft Windows as a download-only title. In March 2016, Square Enix and Psyonix announced that the game would leave early access very soon. However, on April 8, 2016, Square Enix announced that they had cancelled the game as it had failed to captivate a broad audience.
Reception
GameSpot's Cameron Woolsey rated the game 6/10, praising its entertaining gameplay and detailed environments, but citing a lack of content, balance concerns, technical problems and discontinuity with the Legacy of Kain series as faults.
The game holds a score of 3.58/5 on MMOs.com. Marc Marasigan highlighted the portrayal of vampires and entertaining action but also mentioned the limited number of game modes and the matchmaking system as weaknesses. MMO & MMORPG Games gave it a 6/10. The review lists vampire gameplay, graphics and team deathmatch as positives, and the lack of development of the Legacy of Kain universe and unbalanced gameplay between humans and vampires as negatives.
References
External links
Official game blog (archived) - Official website one month before shutdown
Asymmetrical multiplayer video games
Dark fantasy video games
Early access video games
Free-to-play video games
Legacy of Kain
Multiplayer video games
Square Enix games
Video games developed in the United States
Cancelled Windows games
Windows-only games
Windows games
Inactive massively multiplayer online games
Unreal Engine 3 games
Video games about vampires
Psyonix games | Nosgoth | Physics | 1,892 |
5,138,198 | https://en.wikipedia.org/wiki/Gray%20short-tailed%20opossum | The gray short-tailed opossum (Monodelphis domestica) is a small South American member of the family Didelphidae. Unlike most other marsupials, the gray short-tailed opossum does not have a true pouch. The scientific name Monodelphis is derived from Greek and means "single womb" (referring to the lack of a pouch) and the Latin word domestica which means "domestic" (chosen because of the species' habit of entering human dwellings). It was the first marsupial to have its genome sequenced. The gray short-tailed opossum is used as a research model in science, and is also frequently found in the exotic pet trade. It is also known as the Brazilian opossum, rainforest opossum and in a research setting the laboratory opossum.
Description
Gray short-tailed opossums are relatively small animals, with a superficial resemblance to voles. In the wild they have head-body length of and weigh ; males are larger than females. However, individuals kept in captivity are typically much larger, with males weighing up to . As the common name implies, the tail is proportionately shorter than in some other opossum species, ranging from . Their tails are only semi-prehensile, unlike the fully prehensile tail characteristic of the North American opossum.
The fur is greyish brown over almost the entire body, although fading to a paler shade on the underparts, and with near-white fur on the feet. Only the base of the tail has fur, the remainder being almost entirely hairless. The claws are well-developed and curved in shape, and the paws have small pads marked with fine dermal ridges. Unlike many other marsupials, females do not have a pouch. They typically possess thirteen teats, which can be retracted into the body by muscles at their base.
Distribution and habitat
The gray short-tailed opossum is found generally south of the Amazon River, in southern, central, and western Brazil. It is also found in eastern Bolivia, northern Paraguay, and in Formosa Province in northern Argentina. It inhabits rainforest environments, scrubland, and agricultural land, and often enters man-made structures, such as houses. There are no recognised subspecies.
Behaviour
Gray short-tailed opossums eat rodents, frogs, reptiles, and invertebrates, as well as some fruit. They hunt primarily by scent, poking their snout into vegetation in search of prey or dead animals to scavenge. Once they find living prey, they pounce onto it, holding it down with their forefeet while delivering a killing strike, often to the base of the neck, with their sharp teeth. They can successfully take prey up to their own size.
They are nocturnal, being most active in the first three hours after dusk. Although they may occasionally shelter in natural crevices in the rock, they normally spend the day in concealed nests constructed of leaves, bark, and other available materials. The nests of females are more complex and tightly woven than those of males. They are solitary, coming together only to mate, and with each individual occupying a home range of , flagged with scent marks. The approach of another member of the species is commonly met with hissing and screeching, which may escalate to defensive strikes launched while the animal is standing on its hind legs.
Reproduction
The opossums breed year round when the climate is suitable, being able to raise up to six litters of six to eleven young each during a good year. Females only come into oestrus when exposed to male pheromones, with ovulation being induced only by physical contact with the male. Gestation lasts fourteen days, after which the young attach to a teat, where they remain for the next two weeks. Like all marsupials, the young are born undeveloped; in this species they are just in length and weigh at birth. The young grow hair at around three weeks, open their eyes about a week later, and are weaned at eight weeks.
Gray short-tailed opossums are sexually mature at five to six months of age, and live for up to forty-nine months in captivity.
Laboratory opossum
The gray short-tailed opossum possesses several features that make it an ideal research model, particularly for studies of marsupials and for immunological and developmental research on mammalian systems. It breeds relatively easily in laboratory settings, and neonates are exposed and readily accessible because, unlike other marsupial species, female opossums lack a pouch: neonates simply cling to the teats. Opossums are born at a stage approximately equivalent to that of 13- to 15-day-old fetal rats or 40-day-old human embryos. As in other marsupials, the immaturity of the neonate's immune system makes the species an ideal model for both transplant and cancer research, as well as for general investigations into immune system development and its similarities to that of eutherian mammals.
Its genome was sequenced and a working draft published in May 2007. The decoding work, directed by MIT and Harvard, revealed the opossum to have between 18,000 and 20,000 protein-coding genes. The full genome sequence and annotation can be found on the Ensembl Genome Browser.
References
External links
Know Your STO (Short-tailed Opossums), pet care website by Molly Kalafut
View the opossum genome on Ensembl.
Short-tailed opossums
Marsupials of Argentina
Marsupials of Brazil
Marsupials of Bolivia
Mammals of Paraguay
Gray short-tailed opossum
Mammals described in 1842
Taxa named by Johann Andreas Wagner | Gray short-tailed opossum | Biology | 1,177 |
28,813,575 | https://en.wikipedia.org/wiki/Beit%20Netofa%20Valley | The Beit Netofa Valley, or Sahl al-Battuf (, Arabic: سهل البطوف) is a valley in the Lower Galilee region of Israel, midway between Tiberias and Haifa. Covering 46 km2, it is the largest valley in the mountainous part of the Galilee and one of the largest in the southern Levant.
Etymology
The name Beit Netofa Valley first appears in the Mishna (Shevi'it 9:5) and later in medieval rabbinical literature, receiving its name from the Roman-era Jewish settlement of Beth Netofa which stood at its northeastern edge. The valley's Arabic name is Sahl al-Battuf, a form of which also appears in crusader documents.
Geography and climate
The valley is 16 km long and on average 3 km wide, a graben formed by two parallel east-west trending faults running to its north and south. It lies between two horsts forming the Yodfat range to the north and the Tur'an range to the south, essentially separating the heart of the Galilee from the Nazareth area. Limestone hills to the east indicate the valley was also shaped by karstic processes. Long and narrow and ringed by steep hills, the valley has a soil of fatty clay that is relatively impermeable to water, leading to seasonal winter flooding, a phenomenon already described in the 14th century by the medieval Arab geographer Al-Dimashqi.
On February 7, 1950, a meteorological station in the valley recorded the lowest temperature ever recorded in Israel: -13.7°C. This extreme remained unsurpassed for over half a century in Israeli records until 2015, when the Israeli-occupied Golan Heights experienced -14.2°C.
Settlements
As the valley floor is set under water by winter rains, and due to its high agricultural value, villages have only been established at the margins of the valley, where the terrain starts rising. These are, from west to east, Kibbutz Hanaton, Kafr Manda, Rumana, Uzeir, Bu'eine Nujeidat and Eilabun. The Jewish religious community settlement of Mitzpe Netofa overlooks the valley from Mount Tur'an, which separates the Beit Netofa and Tur'an valleys.
The fertile valley land is used for agriculture and is largely owned and cultivated by the inhabitants of settlements either in the valley itself, or from nearby areas. The latter category includes inhabitants of the Arab settlements of Sakhnin, Arraba and Bi'ina, and of the Jewish settlements of Yodfat, Zippori and Kibbutz HaSolelim.
National Water Carrier
The Beit Netofa Canal, a part of Israel's National Water Carrier, runs through the valley. The 17-kilometer-long open canal was built with an oval base due to the clay soil. The width of the canal is 19.4 meters, the bottom is 12 meters wide and it is 2.60 meters deep. At the southwestern edge of the Beit Netofa Valley it reaches the two Eshkol reservoirs, where the water is cleaned and tested before flowing south towards the Negev.
Archaeological sites
Several archaeological sites litter the valley. The earliest, Netofa I and II, date from the Chalcolithic period and are found on its hilly western flank near Kafr Manda. The assemblages found at the sites are rich in flint artifacts and tools and include bifacial tools, scrapers, sickle blades and retouched blades. Finds also include an arrow head and pottery. The sites are farming villages of a size and richness previously unknown in the Chalcolithic Galilee.
Two tells stand on the valley floor. The first is Tel Hanaton (Tell Bedeiwiyeh), which occupies roughly 5 hectares and dominates the western end of the valley. Hanaton has been identified as the Hinnatuni of the Amarna letters, and according to the Bible, it and the surrounding regions fell under the control of the tribe of Zebulun. The second tell, Tell el-Wayiwat, is 0.4 hectares in size and rises 3.5 meters above the valley floor at its eastern edge. Two seasons of excavations were carried out at the site in 1986 and 1987 by Beth Alpert Nakhai, J.P. Dessel and Bonnie L. Wisthoff on behalf of the University of Arizona, the William F. Albright Institute of Archaeological Research and the American Schools of Oriental Research. These have revealed five major strata, dating from the Middle Bronze Age through to the 11th century BCE, in the Iron Age.
At the valley's northeastern edge stands the site of ancient Beth Netofa, its name preserved in the Arab place name, Khirbet Natif. It shows signs of habitation from the Iron Age through the Persian and Roman periods and up to medieval times. Nearby, next to Highway 65 that runs along the eastern edge of the valley, lies Hurvat Amudim, another Roman-era Jewish settlement. Khirbet Qana, on the north edge of the valley, has long been recognized as the biblical Cana of Galilee, the site of Jesus' first miracle (John 2:11).
See also
Al-Batuf Regional Council
References
Bibliography
Valleys of Israel
Landforms of Northern District (Israel)
National Water Carrier of Israel
Rifts and grabens
Lower Galilee | Beit Netofa Valley | Engineering | 1,101 |
649,208 | https://en.wikipedia.org/wiki/Starting%20fluid | Starting fluid is a volatile, flammable liquid which is used to aid the starting of internal combustion engines, especially during cold weather or in engines that are difficult to start using conventional starting procedures. It is typically available in an aerosol spray can, and may sometimes be used for starting direct injected diesel engines or lean burn spark engines running on alcohol fuel. Some modern starting fluid products contain mostly volatile hydrocarbons such as heptane (the main component of natural gasoline), with a small portion of diethyl ether, and carbon dioxide (as a propellant). Some formulations contain butane or propane as both propellant and starting fuel. Historically, diethyl ether, with a small amount of oil, a trace amount of a stabilizer and a hydrocarbon propellant has been used to help start internal combustion engines because of its low autoignition temperature.
Diethyl ether is distinct from petroleum ether (a crude oil distillate consisting mostly of pentane and other alkanes) which has also been used for starting engines.
Usage
Four stroke engines
Starting fluid is sprayed into the engine intake near the air filter, or into the carburetor bore or a spark plug hole of an engine to get added fuel to the combustion cylinder quickly. Using starting fluid to get the engine running faster avoids wear to starters and fatigue to one's arm with pull start engines, especially on rarely used machines. Other uses include cold weather starting, vehicles that run out of fuel and thus require extra time to restore fuel pressure, and sometimes with flooded engines. Mechanics sometimes use it to diagnose starting problems by determining whether the spark and ignition system of the vehicle is functioning; if the spark is adequate but the fuel delivery system is not, the engine will run until the starting fluid vapors are consumed. It is used more often with carbureted engines than with fuel injection systems. Caution is required when using starting fluid with diesel engines that have preheat systems in the intake or glow-plugs installed, as the starting fluid may pre-ignite, leading to engine damage.
Two stroke engines
Starting fluid is not recommended for regular use with some two-stroke engines because it possesses no lubricating qualities of its own. Lubrication for two-stroke engines is achieved using oil that is either mixed into the fuel by the user or injected automatically into the fuel supply; engines requiring premixed fuel that are run solely on starting fluid do not receive an adequate supply of lubrication to their crankcase and cylinder(s). Engines that have not been run recently are especially vulnerable to damage from oil starvation; starting fluid, a strong solvent, tends to strip residual oil off cranks and cylinder walls, further reducing lubrication during the period of fuel starvation. WD-40 was previously recommended for use on two-stroke engines because it has lubricating qualities, but its current formulation, which uses non-flammable CO2 as the propellant instead of propane, is no longer combustible and is therefore useless as starting fluid on any type of engine.
Abuse
Diethyl ether has a long history as a medical anesthetic; when starting fluid was mostly ether, a similar effect could be obtained using it. Use at the present time directly as an inhalant includes the effect of the petroleum solvents, which are more toxic as inhalants than diethyl ether.
Sometimes referred to as "passing the shirt," the starting fluid is sprayed on a piece of cloth and held up to one's face for inhalation. This trend has gradually picked up since the turn of the century, as phrases such as "etherized" and "ethervision" have gained popularity. The effects of inhalation vary, but have been known to include lightheadedness, loss of coordination, paranoia, and sometimes hallucinations.
References
Fuels | Starting fluid | Chemistry | 788 |
5,710,698 | https://en.wikipedia.org/wiki/Olefin%20fiber | Olefin fiber is a synthetic fiber made from a polyolefin, such as polypropylene or polyethylene. It is used in wallpaper, carpeting, ropes, and vehicle interiors.
Olefin's advantages are its strength, colorfastness and comfort, its resistance to staining, mildew, abrasion, and sunlight, and its good bulk and cover.
History
Italy began production of olefin fibers in 1957, after the chemist Giulio Natta formulated olefin polymers suitable for a wider range of textile applications. Natta and Karl Ziegler were later awarded the Nobel Prize for their work on the transition-metal-catalyzed polymerization of olefins, now known as Ziegler–Natta catalysis. Production of olefin fibers in the U.S. began in 1960. Olefin fibers account for 16% of all manufactured fibers.
Major fiber properties
Olefin fibers have great bulk and cover while having low specific gravity. This means “Warmth without the weight.” The fibers have low moisture absorption, but they can wick moisture and dry quickly. Olefin is abrasion, stain, sunlight, fire, and chemical resistant. It does not dye well, but has the advantage of being colorfast. Since Olefin has a low melting point, textiles can be thermally bonded. The fibers have the lowest static of all manufactured fibers and a medium luster. One of the most important properties of olefin is its strength. It keeps its strength in wet or dry conditions and is very resilient. The fiber can be produced for strength of different properties.
Production method
The Federal Trade Commission's official definition of olefin fiber is “a manufactured fiber in which the fiber forming substance is any long-chain synthetic polymer composed of at least 85% by weight of ethylene, propylene, or other olefin units.”
Polymerization of propylene and ethylene gases, controlled with special catalysts, creates olefin fibers. Dye is added directly to the polymer before melt spinning is applied. Additives, polymer variations and different process conditions can create a range of characteristics.
High pressure production, which uses ten tons per square inch, creates a film for molded materials. Low pressure production uses a low temperature with a catalyst and hydrocarbon solvent. This process is less expensive and produces a polyethylene polymer more for textile use.
The polymer is then melted and extruded through a spinneret, then cooled in water or air. The fiber is drawn out to six times the spun length. Gel spinning is a newer method in which a gel form of polyethylene polymers is used.
Physical and chemical structure
Physical
Olefin fibers can be multi- or monofilament and staple, tow, or film yarns. The fibers are colorless, have a waxy feel, and are round in cross section; this cross section can be modified for different end uses.
Chemical
There are two types of polymers that can be used in olefin fibers. The first, polyethylene, is a simple linear structure with repeating units. These fibers are used mainly for ropes, twines and utility fabrics.
The second type, polypropylene, is a three-dimensional structure with a backbone of carbon atoms. Methyl groups protrude from this backbone. Stereoselective polymerization orders these methyl groups to the same spatial placement. This creates a crystalline polypropylene polymer. The fibers made with these polymers can be used in apparel, furnishing and industrial products.
Manufacturers
The first commercial producer of an olefin fiber in the United States was Hercules, Inc. (FiberVisions). Other U.S. olefin fiber producers include Asota; American Fibers and Yarns Co; American Synthetic Fiber, LLC; Color-Fi; FiberVisions; Foss Manufacturing Co., LLC; Drake Extrusion; Filament Fiber Technology, Inc.; TenCate Geosynthetics; Universal Fiber Systems LLC.
Trademarks according to fabric use
Producer – Allied-Signal
A.C.E. – Tire cord, furniture webbing
Producer – DuPont
CoolMax – Warm-weather and action wear
Hollofil, Quallofil – Fiberfill and insulating fibers
Sontara – Spunlaced nonwoven fabrics
Thermostat – Cold-weather wear
Thermoloft – Fiberfill and insulating fibers
Tyvek – Used for house wraps to postal envelopes to clothing
Producer – Trevira
ESP – Apparel and furnishings
Celwet – Nonwovens
Comfort Fiber – Staple fiber for apparel uses
Floor Guardian – Gym Floor Carpet Protection System
Loftguard – Staple fiber for industrial uses
Polar Guard
Lambda – Filament yarns with spun-yarn characteristics
Serene
Superba
Trevira HT – Marine and military uses; ropes, cordages
Trevira ProEarth – Recycled-content geotextiles
Trevira XPS – Carpeting
BTU – Cold-weather apparel
Producer – 3M
Thinsulate – Cold-weather action wear
Uses
Apparel
Sports & active wear, socks, hoodies, thermal underwear; lining fabrics.
Home furnishing
Olefin can be used by itself or in blends for indoor and outdoor carpets, carpet tiles, and carpet backing. The fiber can also be used in upholstery, draperies, wall coverings, slipcovers, and floor coverings. It is often used in basements due to its quick-drying and mold-resistant properties.
Automotive
Olefin can be used for interior fabrics, sun visors, arm rests, door and side panels, trunks, parcel shelves, and resin replacement as binder fibers.
Industrial
In an industrial setting, olefin creates carpets; ropes, geo-textiles that are in contact with the soil, filter fabrics, bagging, concrete reinforcement, and heat-sealable paper (e.g. tea- and coffee-bags).
Care procedures
Many dry-cleaning solvents can swell olefin fibers. Since olefin dries quickly, line drying or tumble drying with little or no heat is recommended. Since olefin is not absorbent, waterborne stains do not present a problem; however, oily stains are difficult to remove, though lukewarm water, detergent, and bleach can be used on such stains. Olefin fiber has a low melting point (around 225 to 335 °F (107 to 168 °C), depending on the polymer's grade), so these items must be ironed at a low temperature to prevent melting. Items such as outdoor carpets and other fabrics can be hosed off. Olefin is easy to recycle.
See also
Alkene
Elastolefin
References
Synthetic fibers
1957 introductions | Olefin fiber | Chemistry | 1,379 |
4,338,281 | https://en.wikipedia.org/wiki/PortMedia | PortMedia, formerly PortMusic, is a set of open source computer libraries for dealing with sound and MIDI. Currently the project has two main libraries: PortAudio, for digital audio input and output, and PortMidi, a library for MIDI input and output. A library for dealing with different audio file formats, PortSoundFile, is being planned, although another library, libsndfile, already exists and is licensed under the copyleft GNU Lesser General Public License. A standard MIDI file I/O library, PortSMF, is under construction.
PortMusic has become PortMedia and is hosted on SourceForge.
See also
List of free software for audio
External links
PortMusic website
Audio libraries
Computer libraries
Free audio software | PortMedia | Technology | 152 |
37,255,397 | https://en.wikipedia.org/wiki/Pi%20Leonis | Pi Leonis, Latinised from π Leonis, is a single star in the zodiac constellation Leo. It is a red-hued star that is visible to the naked eye with an apparent visual magnitude of 4.70. This object is located at a distance of some 410 light-years from the Sun based on parallax, and is drifting further away with a radial velocity of +22 km/s. Because the star lies near the ecliptic it is subject to occultations by the Moon.
This is an evolved, red giant star with a stellar classification of M2 III. With the supply of hydrogen at its core exhausted, it has expanded to 70 times the Sun's radius. The star shines with 1,077 times the luminosity of the Sun from an expanded outer atmosphere that has an effective temperature of . According to the General Catalogue of Variable Stars, it is a suspected variable star with a maximum magnitude of 4.67. The age of Pi Leonis is estimated at 550 million years.
References
M-type giants
Suspected variables
Leo (constellation)
Leonis, Pi
Durchmusterung objects
Leonis, 29
086663
049029
3950 | Pi Leonis | Astronomy | 243 |
64,701,801 | https://en.wikipedia.org/wiki/Transaural | Transaural Stereo is a technology suite of analog circuits and digital signal processing algorithms related to the field of sound playback for audio communication and entertainment. It is based on the concept of crosstalk cancellation but in some versions can embody other processes such as binaural synthesis and equalization.
The technology was developed in the 1970s by Duane H. Cooper and Jerald L. Bauck.
Description
The central concept behind transaural stereo is that there are two loudspeakers and a single listener (two ears). The left-channel signal should only reach the left ear and the right-channel signal should only reach the right ear, each with appropriate timbral corrections.
To effect this, a circuit or computer algorithm is devised, based on knowledge of the four frequency-dependent transfer functions, the so-called ipsilateral and contralateral paths:
L-to-L
L-to-R
R-to-L
R-to-R
These four functions are examples of head-related transfer functions (HRTF).
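Conceptually, the cancellation network is the frequency-by-frequency inverse of the 2×2 matrix formed by these four paths. The following is a minimal sketch of that inversion in Python; the function name, the argument order (matching the list above), and the regularization term are illustrative assumptions, not part of any published transaural design.

```python
import numpy as np

def crosstalk_cancellation_filters(h_ll, h_lr, h_rl, h_rr, reg=1e-3):
    """Per-frequency-bin inversion of the 2x2 acoustic transfer matrix.

    Arguments are complex arrays (one entry per frequency bin) for the
    speaker-to-ear paths listed above: L-to-L, L-to-R, R-to-L, R-to-R.
    `reg` is a Tikhonov regularization term that keeps the inversion
    stable at frequencies where the matrix is nearly singular.
    Returns an (n, 2, 2) array C with C[k] @ H[k] approximately identity.
    """
    n = len(h_ll)
    filters = np.empty((n, 2, 2), dtype=complex)
    for k in range(n):
        # Rows are ears, columns are loudspeakers.
        H = np.array([[h_ll[k], h_rl[k]],
                      [h_lr[k], h_rr[k]]])
        Hh = H.conj().T
        # Regularized inverse: (H^H H + reg I)^-1 H^H
        filters[k] = np.linalg.solve(Hh @ H + reg * np.eye(2), Hh)
    return filters
```

Binaural program material filtered through these per-bin matrices (and transformed back to the time domain) would then, ideally, arrive at each ear with the contralateral path suppressed.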
A more general theory allows arbitrary numbers of loudspeakers and ears (listeners). The inputs to the process are sometimes recorded binaural signals from a recording mannequin ("dummy head") but this is not a requirement. Virtual loudspeakers can be formed by combining crosstalk cancellation with binaural image synthesis so that, for example, narrowly-spaced loudspeakers can be made to sound farther apart or a five-channel surround sound system can be made with only two actual loudspeakers, a virtual home theater.
History
Transaural stereo was developed by Duane H. Cooper and Jerald L. Bauck. An early version was published as a master's thesis at the University of Illinois in 1978 and later in the Journal of the Audio Engineering Society. The work was continued in the mid-1980s as an improvement on, and practical implementation of, the early comparative auditorium studies of Schroeder and Atal in the 1960s, which were reported to produce images that were unstable under slight head movements.
Cooper and Bauck, using methods to stabilize images and reduce the filter count, made an analog crosstalk canceller, a two-speaker spreader, and an eight-position binaural image synthesizer which doubled as a binaural pan pot in 1987–1989 using biquadratic analog filters in shuffler configurations. Later implementations used highly efficient digital biquadratic filters.
The distributed source concept, with both discrete and continuous source distributions, was created in March 1997 and later refined, the refinement being named Optimal Source Distribution.
References
External links
Transaural Stereo
Stereophonic sound
Music technology | Transaural | Engineering | 549 |
14,719,595 | https://en.wikipedia.org/wiki/Robert%20S.%20Williamson | Robert Stockton Williamson (January 21, 1825 – November 10, 1882) was an American soldier and engineer, noted for conducting surveys for the transcontinental railroad in California and Oregon. Inducted into the Army Corps of Engineers in 1861, he had a distinguished record serving in the American Civil War, winning two brevet promotions. When the US Army Corps of Engineers established its San Francisco District office in 1866, he was appointed as the first commander of the office. Formally promoted to the rank of lieutenant colonel in 1869, he retired in 1871, because of health problems, and died in San Francisco in 1882.
Early life and career
Williamson was born in Oxford, New York and lived in Elizabeth, New Jersey. He was named after Commodore Robert F. Stockton, a family friend. He joined the Navy in 1843 as a master's mate under Stockton on the USS Princeton, the first screw-driven steam ship in the Navy. Williamson was detached from the ship 10 days before one of its guns exploded, killing several people.
It was through Stockton's influence that Williamson was appointed to the United States Military Academy. He graduated fifth in his class in 1848 and appointed a second lieutenant in the Corps of Topographical Engineers. He was assigned to conduct surveys for proposed routes for the transcontinental railroad in California and Oregon, leading surveys of the Sierra Nevada above the Feather River alongside William Horace Warner. In 1853, War Secretary Jefferson Davis chose Williamson to lead surveys of California's southern Sierra and mountains near Los Angeles for the Pacific Railroad. His work was published in volume 5 of the War Department's Reports of Explorations and Surveys. Williamson was then assigned to the staff of the commanding general of the Department of the Pacific, and was the engineer in charge of the military roads in southern Oregon.
Civil War
After the outbreak of the American Civil War, Williamson was commissioned with the rank of Captain into the 1st Battalion of Engineers, and was the Chief Topographical Engineer in North Carolina. He was brevetted Major on March 14, 1862, for service at the Battle of New Bern, and brevetted a Lieutenant Colonel at the Battle of Fort Macon on April 26, 1862.
He was then assigned as Chief Topographical Engineer for the Army of the Potomac. Williamson returned to California as the Chief Topographical Engineer of the Department of the Pacific. He was formally promoted to the rank of Major on May 7, 1863.
In 1863, Williamson transferred to the Corps of Engineers and served as lighthouse engineer for the Pacific Coast. He also worked on defenses and harbors along the coast.
Postbellum
In 1866, Major Williamson was appointed Commander and Officer-in-Charge when the U. S. Army Corps of Engineers established its San Francisco District Office in 1866. This office was then mainly responsible for engineering related to rivers and harbors along the entire Pacific coast, from Canada to Mexico, and included Hawaii. He remained in this position until 1871.
He was formally promoted to Lieutenant Colonel on February 2, 1869, just before submitting his survey on improvements to San Pedro Bay, California. This proposed construction of a jetty, the first federal harbor works at the site of the future Port of Los Angeles. The project would enhance shipping and also help entice the Southern Pacific Railroad to build to the harbor rather than to San Diego.
In 1870, he was elected as a member to the American Philosophical Society.
He retired from the Army as a lieutenant colonel in 1882, due to illness. Williamson had suffered from bad health for the last 20 years of his life and died of tuberculosis in San Francisco, California. He was buried at the Masonic Cemetery in San Francisco.
Legacy
In California, Mount Williamson is named for him.
Williamson Mountain and the Williamson River in Oregon are named in his honor.
A western North American woodpecker, the Williamson's sapsucker, and the mountain whitefish, Prosopium williamsoni, are named after him.
Williamson Valley (Arizona) is named after him.
Notes
References
External links
Report Upon the Removal of Blossom Rock San Francisco Harbor, California. Williamson, R. S. and W. H. Heuer. 1870.
1825 births
1882 deaths
Engineers from Elizabeth, New Jersey
United States Army Corps of Topographical Engineers
United States Military Academy alumni
United States Army officers
19th-century American explorers
Union army officers
People from Oxford, New York
Engineers from New York (state)
Burials at Masonic Cemetery (San Francisco)
Military personnel from New Jersey | Robert S. Williamson | Engineering | 898 |
36,231,779 | https://en.wikipedia.org/wiki/Ross%27%20%CF%80%20lemma | Ross' lemma, named after I. Michael Ross, is a result in computational optimal control. Based on generating Carathéodory- solutions for feedback control, Ross' -lemma states that there is fundamental time constant within which a control solution must be computed for controllability and stability. This time constant, known as Ross' time constant, is proportional to the inverse of the Lipschitz constant of the vector field that governs the dynamics of a nonlinear control system.
Theoretical implications
The proportionality factor in the definition of Ross' time constant depends upon the magnitude of the disturbance on the plant and the specifications for feedback control. When there are no disturbances, Ross' π-lemma shows that the open-loop optimal solution is the same as the closed-loop one. In the presence of disturbances, the proportionality factor can be written in terms of the Lambert W-function.
Practical applications
In practical applications, Ross' time constant can be found by numerical experimentation using DIDO. Ross et al. showed that this time constant is connected to the practical implementation of a Carathéodory-π solution. That is, Ross et al. showed that if feedback solutions are obtained by zero-order holds only, then a significantly faster sampling rate is needed to achieve controllability and stability. On the other hand, if a feedback solution is implemented by way of a Carathéodory-π technique, then a larger sampling period can be accommodated. This implies that the computational burden of generating feedback solutions is significantly less than in standard implementations. These concepts have been used to generate collision-avoidance maneuvers in robotics in the presence of uncertain and incomplete information about static and dynamic obstacles.
See also
Ross–Fahroo lemma
Ross–Fahroo pseudospectral method
References
Numerical analysis
Control theory | Ross' π lemma | Mathematics | 439 |
65,382,854 | https://en.wikipedia.org/wiki/Allophanic%20acid | Allophanic acid is the organic compound with the formula H2NC(O)NHCO2H. It is a carbamic acid, the carboxylated derivative of urea. Biuret can be viewed as the amide of allophanic acid. The compound can be prepared by treating urea with sodium bicarbonate:
H2NC(O)NH2 + NaHCO3 → H2NC(O)NHCO2H + NaOH
The anionic conjugate base, H2NC(O)NHCO2−, is called allophanate. Salts of this anion have been characterized by X-ray crystallography. The allophanate anion is the substrate for the enzyme allophanate hydrolase.
Allophanate esters arise from the condensation of carbamates.
References
Ureas
Functional groups
Carbamates | Allophanic acid | Chemistry | 190 |
57,993,701 | https://en.wikipedia.org/wiki/Microstructurally%20stable%20nanocrystalline%20alloys | Microstructurally stable nanocrystalline alloys are alloys that are designed to resist microstructural coarsening under various thermo-mechanical loading conditions.
Many applications of metallic materials require that they maintain their structure and strength at very high temperatures. Efforts to prevent deformation under long-term stress, referred to as creep, consist of manipulating alloys to reduce the coarsening and migration of individual grains within the metal.
The small size of individual metal grains produces a high interfacial surface energy, which is what drives coarsening (the increase in grain size) and eventually metallic softening. Nanocrystalline creep is considered to follow the Coble creep mechanism: the diffusion of atoms along grain boundaries at low stress levels and high temperatures. One method used to reduce coarsening is to employ an alloy in which one component has good solubility in the other. Since grain size decreases with high solute concentration, the rate of coarsening is slowed until it becomes inconsequential.
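Because the Coble mechanism is boundary-diffusion controlled, its steady-state strain rate scales linearly with stress and inversely with the cube of the grain size, which is why suppressing grain growth preserves creep resistance. The sketch below illustrates only that standard scaling; the prefactor and the material parameters in the example are placeholder values, not data from the studies discussed here.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def coble_creep_rate(stress, grain_size, temp_k,
                     atomic_volume, boundary_width, d_gb, prefactor=1.0):
    """Steady-state Coble creep rate (1/s), using the standard scaling:
    rate ~ A * (stress * Omega / (k_B * T)) * (delta * D_gb / d**3)."""
    return (prefactor * stress * atomic_volume * boundary_width * d_gb
            / (K_B * temp_k * grain_size ** 3))

# Doubling the grain size cuts the predicted creep rate eightfold:
r_20nm = coble_creep_rate(1e8, 20e-9, 900.0, 1.2e-29, 5e-10, 1e-18)
r_40nm = coble_creep_rate(1e8, 40e-9, 900.0, 1.2e-29, 5e-10, 1e-18)
print(r_40nm / r_20nm)  # 0.125
```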
Copper and 10% atomic tantalum nanocrystalline alloy
In 2016, researchers at Arizona State University and the United States Army Research Laboratory reported a microstructurally stable nanocrystalline alloy made of copper and 10 atomic percent tantalum (Cu–10 at.% Ta). The alloy demonstrated high creep resistance over applied stresses of 0.85 to 1.2% of the shear modulus and temperatures of 0.5 to 0.64 Tm (the melting temperature); the steady-state creep rates were consistently less than 10^−6 s^−1.
This stability was credited to the mechanistic creep process and the alloy's core–shell-type structures. The scientists determined that creep in the copper alloy occurred by dislocation climb at relatively higher stress levels, and claimed that any diffusion creep was negligible. The core–shell-type nanostructures prevented coarsening by pinning grain boundaries, a mechanism known as Zener pinning. In these structures more interfacial bonding interactions were possible, increasing strength. The great strength and ductility of oxide-dispersion-strengthened (ODS) ferritic alloys and of molybdenum alloys have also been credited to such nanostructures.
Nickel and 13% tungsten nanocrystalline alloy
In 2007, a nickel (Ni) and tungsten (W) nanocrystalline alloy was reported to have resistance to coarsening. Experimental data showed that the alloy coarsened from its original grain size of 20 nm to 28 nm after 30 minutes of exposure to 600 degrees Celsius. This growth was then compared with the coarsening rate of a grain of pure Ni exposed to 300 degrees Celsius for 30 minutes.
Tungsten and 20% titanium nanocrystalline alloy
In 2012, researchers claimed that a tungsten (W) and 20% titanium (Ti) nanocrystalline alloy displayed no change from its initial 20 nm grain size after a week of exposure to 1100 degrees Celsius in an argon atmosphere, while unalloyed W under the same conditions exhibited a final grain size on the micrometer scale. Another review describes the coarsening of the W–Ti alloy as a 2 nm increase from an original size of 22 nm. The authors attribute the microstructural stability to a complex chemical arrangement. The nanocrystalline metallic grains were made via a high-energy ball-milling method.
References
Alloys | Microstructurally stable nanocrystalline alloys | Chemistry | 725 |
40,446,785 | https://en.wikipedia.org/wiki/Free%20Component%20Library | The Free Component Library, abbreviated FCL, is a software component library for Free Pascal.
The FCL consists of a collection of units that provide components and classes for general programming tasks. Although it is intended to be compatible with Delphi's Visual Component Library (VCL) the FCL is restricted to non-visual components. On the other hand, its functionality partly exceeds that of the VCL.
Visual components are provided by the Lazarus Component Library (LCL).
The FCL is based on the Free Pascal Runtime Library (RTL).
Further reading
External links
FCL documentation in the Free Pascal Wiki
Complete online reference
Free Pascal
Pascal (programming language) libraries
Computer libraries
Component-based software engineering
Platform-sensitive development | Free Component Library | Technology | 150 |
3,246,627 | https://en.wikipedia.org/wiki/Fast%20Green%20FCF | Fast Green FCF, also called Food green 3, FD&C Green No. 3, Green 1724, Solid Green FCF, and C.I. 42053, is a turquoise triarylmethane food dye. Its E number is E143.
Fast Green FCF is recommended as a replacement of Light Green SF yellowish in Masson's trichrome, as its color is more brilliant and less likely to fade. It is used as a quantitative stain for histones at alkaline pH after acid extraction of DNA. It is also used as a protein stain in electrophoresis. Its absorption maximum is at 625 nm.
Fast Green FCF is poorly absorbed by the intestines. Its use as a food dye is prohibited in the European Union and some other countries. In the United States, Fast Green FCF is the least used of the seven main FDA approved dyes.
Toxicology
A reevaluation of Fast Green FCF published by the World Health Organization in 2017 concluded that it has low toxicity and is not carcinogenic or genotoxic, and that there were no health concerns with consumption of Fast Green FCF at the previously established allowable daily intake (which itself is much higher than estimates of actual dietary exposure to Fast Green FCF).
Notes
Biochemistry detection methods
Triarylmethane dyes
Staining dyes
Food colorings
Anilines
Phenols
Benzenesulfonates | Fast Green FCF | Chemistry,Biology | 294 |
3,988,692 | https://en.wikipedia.org/wiki/Useful%20conversions%20and%20formulas%20for%20air%20dispersion%20modeling | Various governmental agencies involved with environmental protection and with occupational safety and health have promulgated regulations limiting the allowable concentrations of gaseous pollutants in the ambient air or in emissions to the ambient air. Such regulations involve a number of different expressions of concentration. Some express the concentrations as ppmv and some express the concentrations as mg/m3, while others require adjusting or correcting the concentrations to reference conditions of moisture content, oxygen content or carbon dioxide content. This article presents a set of useful conversions and formulas for air dispersion modeling of atmospheric pollutants and for complying with the various regulations as to how to express the concentrations obtained by such modeling.
Converting air pollutant concentrations
The conversion equations depend on the temperature at which the conversion is wanted (usually about 20 to 25 degrees Celsius). At an ambient air pressure of 1 atmosphere (101.325 kPa), the general equation is:

mg/m3 = (ppmv) × (molecular weight of the pollutant) × (atm) ÷ (0.08205 × T)

and for the reverse conversion:

ppmv = (mg/m3) × (0.08205 × T) ÷ [ (molecular weight of the pollutant) × (atm) ]

where T is the ambient air temperature in kelvins and 0.08205 is the universal gas law constant in atm·L/(mol·K).
Notes:
Pollution regulations in the United States typically reference their pollutant limits to an ambient temperature of 20 to 25 °C as noted above. In most other nations, the reference ambient temperature for pollutant limits may be 0 °C or other values.
1 percent by volume = 10,000 ppmv (i.e., parts per million by volume).
atm = absolute atmospheric pressure in atmospheres
mol = gram mole
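As an illustration, the two equations above can be expressed as a pair of functions. This is a minimal sketch assuming the ideal-gas relationship given above; the function and argument names are chosen for readability and are not from any standard library.

```python
GAS_CONSTANT = 0.08205  # universal gas law constant, atm·L/(mol·K)

def ppmv_to_mg_per_m3(ppmv, mol_weight, temp_k, pressure_atm=1.0):
    """Convert a gaseous pollutant concentration from ppmv to mg/m3."""
    return ppmv * mol_weight * pressure_atm / (GAS_CONSTANT * temp_k)

def mg_per_m3_to_ppmv(mg_m3, mol_weight, temp_k, pressure_atm=1.0):
    """Convert a gaseous pollutant concentration from mg/m3 to ppmv."""
    return mg_m3 * GAS_CONSTANT * temp_k / (mol_weight * pressure_atm)

# Example: 100 ppmv of NO2 (molecular weight 46.01) at 25 °C and 1 atm
print(round(ppmv_to_mg_per_m3(100, 46.01, 298.15)))  # 188 mg/m3
```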
Correcting concentrations for altitude
Atmospheric pollutant concentrations expressed as mass per unit volume of atmospheric air (e.g., mg/m3, μg/m3, etc.) at sea level will decrease with increasing altitude because the atmospheric pressure decreases with increasing altitude.
The change of atmospheric pressure with altitude can be obtained from this equation:

Pa = 0.9877^a

Given an atmospheric pollutant concentration at an atmospheric pressure of 1 atmosphere (i.e., at sea level altitude), the concentration at other altitudes can be obtained from this equation:

Ca = C × 0.9877^a

where a = altitude in hundreds of meters, Pa = atmospheric pressure at altitude a (in atmospheres), C = concentration at sea level, and Ca = concentration at altitude a.
As an example, given a concentration of 260 mg/m3 at sea level, calculate the equivalent concentration at an altitude of 1,800 meters:
Ca = 260 × 0.9877^18 = 208 mg/m3 at 1,800 meters altitude
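The same correction is easily scripted. This minimal sketch simply applies the 0.9877-per-100-meters pressure ratio from the equation above; the function name is illustrative only.

```python
def concentration_at_altitude(c_sea_level, altitude_m):
    """Scale a sea-level concentration (mass per unit volume) to a
    given altitude using the 0.9877-per-100-m pressure ratio."""
    return c_sea_level * 0.9877 ** (altitude_m / 100.0)

print(round(concentration_at_altitude(260, 1800)))  # 208 (mg/m3)
```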
Standard conditions for gas volumes
A normal cubic meter (Nm3 ) is the metric expression of gas volume at standard conditions and it is usually (but not always) defined as being measured at 0 °C and 1 atmosphere of pressure.
A standard cubic foot (scf) is the USA expression of gas volume at standard conditions and it is often (but not always) defined as being measured at 60 °F and 1 atmosphere of pressure. There are other definitions of standard gas conditions used in the USA besides 60 °F and 1 atmosphere.
That being understood:
1 Nm3 of any gas (measured at 0 °C and 1 atmosphere of absolute pressure) equals 37.326 scf of that gas (measured at 60 °F and 1 atmosphere of absolute pressure).
1 kmol of any ideal gas equals 22.414 Nm3 of that gas at 0 °C and 1 atmosphere of absolute pressure ... and 1 lbmol of any ideal gas equals 379.482 scf of that gas at 60 °F and 1 atmosphere of absolute pressure.
Notes:
kmol = kilomole or kilogram mole
lbmol = pound mole
Wind speed conversion factors
Meteorological data includes wind speeds which may be expressed as statute miles per hour, knots, or meters per second. Here are the conversion factors for those various expressions of wind speed:
1 m/s = 2.237 statute mile/h = 1.944 knots
1 knot = 1.151 statute mile/h = 0.514 m/s
1 statute mile/h = 0.869 knots = 0.447 m/s
Note:
1 statute mile = 5,280 feet = 1,609 meters
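For scripted work, the factors above can be tabulated and chained through a common base unit. This is a minimal sketch using the rounded factors given in this article; the dictionary and function names are illustrative only.

```python
# Conversion factors to meters per second, rounded as given above
TO_M_PER_S = {"m/s": 1.0, "knot": 0.514, "mph": 0.447}

def convert_wind_speed(value, from_unit, to_unit):
    """Convert a wind speed between m/s, knots, and statute miles/h."""
    return value * TO_M_PER_S[from_unit] / TO_M_PER_S[to_unit]

print(convert_wind_speed(10, "knot", "mph"))  # ≈ 11.5
```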
Correcting for reference conditions
Many environmental protection agencies have issued regulations that limit the concentration of pollutants in gaseous emissions and define the reference conditions applicable to those concentration limits. For example, such a regulation might limit the concentration of NOx to 55 ppmv in a dry combustion exhaust gas corrected to 3 volume percent O2. As another example, a regulation might limit the concentration of particulate matter to 0.1 grain per standard cubic foot (i.e., scf) of dry exhaust gas corrected to 12 volume percent CO2.
Environmental agencies in the USA often denote a standard cubic foot of dry gas as "dscf" or as "scfd". Likewise, a standard cubic meter of dry gas is often denoted as "dscm" or "scmd" (again, by environmental agencies in the USA).
Correcting to a dry basis
If a gaseous emission sample is analyzed and found to contain water vapor and a pollutant concentration of say 40 ppmv, then 40 ppmv should be designated as the "wet basis" pollutant concentration. The following equation can be used to correct the measured "wet basis" concentration to a "dry basis" concentration:

Cdry basis = Cwet basis ÷ ( 1 - w )

where w = fraction of the emitted gas, by volume, that is water vapor.
Thus, a wet basis concentration of 40 ppmv in a gas having 10 volume percent water vapor would have a dry basis concentration = 40 ÷ ( 1 - 0.10 ) = 44.44 ppmv.
Correcting to a reference oxygen content
The following equation can be used to correct a measured pollutant concentration in an emitted gas (containing a measured O2 content) to an equivalent pollutant concentration in an emitted gas containing a specified reference amount of O2:

Ccorrected = Cmeasured × ( 20.9 - reference volume % O2 ) ÷ ( 20.9 - measured volume % O2 )
Thus, a measured concentration of 45 ppmv (dry basis) in a gas having 5 volume % O2 is
45 × ( 20.9 - 3 ) ÷ ( 20.9 - 5 ) = 50.7 ppmv (dry basis) when corrected to a gas having a specified reference O2 content of 3 volume %.
Correcting to a reference carbon dioxide content
The following equation can be used to correct a measured pollutant concentration in an emitted gas (containing a measured CO2 content) to an equivalent pollutant concentration in an emitted gas containing a specified reference amount of CO2:

Ccorrected = Cmeasured × ( reference volume % CO2 ÷ measured volume % CO2 )
Thus, a measured particulates concentration of 0.1 grain per dscf in a gas that has 8 volume % CO2 is
0.1 × ( 12 ÷ 8 ) = 0.15 grain per dscf when corrected to a gas having a specified reference CO2 content of 12 volume %.
Notes:
Although ppmv and grains per dscf have been used in the above examples, concentrations such as ppbv (i.e., parts per billion by volume), volume percent, grams per dscm and many others may also be used.
1 percent by volume = 10,000 ppmv (i.e., parts per million by volume).
Care must be taken with concentrations expressed as ppbv to differentiate between the British billion, which is 10^12, and the USA billion, which is 10^9.
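The three corrections above are often chained when checking an emission measurement against a permit limit. The following minimal sketch reproduces the worked examples of this section; the function names are illustrative only.

```python
def wet_to_dry(c_wet, water_vapor_fraction):
    """Correct a wet-basis concentration to a dry basis."""
    return c_wet / (1.0 - water_vapor_fraction)

def correct_to_reference_o2(c_measured, measured_o2_pct, reference_o2_pct):
    """Correct a dry-basis concentration to a reference O2 content."""
    return c_measured * (20.9 - reference_o2_pct) / (20.9 - measured_o2_pct)

def correct_to_reference_co2(c_measured, measured_co2_pct, reference_co2_pct):
    """Correct a dry-basis concentration to a reference CO2 content."""
    return c_measured * reference_co2_pct / measured_co2_pct

# Reproduce the worked examples above:
print(round(wet_to_dry(40, 0.10), 2))                  # 44.44 ppmv
print(round(correct_to_reference_o2(45, 5, 3), 1))     # 50.7 ppmv
print(round(correct_to_reference_co2(0.1, 8, 12), 2))  # 0.15 grain/dscf
```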
See also
Standard conditions of temperature and pressure
Units conversion by factor-label
Atmospheric dispersion modeling
Roadway air dispersion modeling
Bibliography of atmospheric dispersion modeling
Accidental release source terms
Choked flow
References
External links
More conversions and formulas useful in air dispersion modeling are available in the feature articles at www.air-dispersion.com.
U.S. EPA tutorial course has very useful information.
Atmospheric dispersion modeling
Air pollution
Environmental engineering | Useful conversions and formulas for air dispersion modeling | Chemistry,Engineering,Environmental_science | 1,502 |
37,287,625 | https://en.wikipedia.org/wiki/CloudForge | CloudForge was a software-as-a-service product for application development tools and services, such as Git hosting, Subversion (SVN) hosting, issue trackers and Application Lifecycle Management. CloudForge was built on CollabNet’s cloud hosting and integration platform, acquired from Codesion.com in October 2010.
History
CloudForge was first released in beta on April 30, 2012 and then officially released on July 30, 2012. CloudForge was built upon Codesion.com, which was founded as CVSDude by Mark Bathie in Brisbane, Australia, in 2002. The team relocated to Silicon Valley and renamed the company Codesion.com in early 2010; it was acquired by CollabNet, of Brisbane, California, in 2010.
Outages
At 18:30 EST on February 21, 2015, all CloudForge sites, including the main site www.cloudforge.com, went offline. Paid SVN repositories were unavailable as sitewide maintenance overran by more than 24 hours.
Sunset
On July 8, 2020, Digital.ai announced the sunset of the CloudForge product on October 1, 2020.
As announced all accounts were terminated as of October 1, 2020. Account holders have not been reimbursed for time prepaid before the July 2020 announcement and emails to the company requesting refunds have been ignored.
References
External links
Official website
Version control
Project hosting websites
Project management software
Software project management
Computing websites
Companies based in San Mateo County, California
Internet properties established in 2012 | CloudForge | Technology,Engineering | 311 |
20,240,240 | https://en.wikipedia.org/wiki/Primate%20cognition | Primate cognition is the study of the intellectual and behavioral skills of non-human primates, particularly in the fields of psychology, behavioral biology, primatology, and anthropology.
Primates are capable of high levels of cognition; some make tools and use them to acquire foods and for social displays; some have sophisticated hunting strategies requiring cooperation, influence and rank; they are status conscious, manipulative and capable of deception; they can recognise kin and conspecifics; they can learn to use symbols and understand aspects of human language including some relational syntax, concepts of number and numerical sequence.
Studies in primate cognition
Theory of mind
Theory of mind (also known as mental state attribution, mentalizing, or mindreading) can be defined as the "ability to track the unobservable mental states, like desires and beliefs, that guide others' actions". Premack and Woodruff's 1978 article "Does the chimpanzee have a theory of mind?" sparked a contentious issue because of the problem of inferring from animal behavior the existence of thinking, of the existence of a concept of self or self-awareness, or of particular thoughts.
Non-human research still has a major place in this field, however, and is especially useful in illuminating which nonverbal behaviors signify components of theory of mind, and in pointing to possible stepping points in the evolution of what many claim to be a uniquely human aspect of social cognition. While it is difficult to study human-like theory of mind and mental states in species which we do not yet describe as "minded" at all, and about whose potential mental states we have an incomplete understanding, researchers can focus on simpler components of more complex capabilities.
For example, many researchers focus on animals' understanding of intention, gaze, perspective, or knowledge (or rather, what another being has seen). Part of the difficulty in this line of research is that observed phenomena can often be explained as simple stimulus-response learning, since mental states can often be inferred based on observed behavioural cues. Recently, most non-human theory of mind research has focused on monkeys and great apes, who are of most interest in the study of the evolution of human social cognition. Research can be categorized in to three subsections of theory of mind: attribution of intentions, attribution of knowledge (and perception), and attribution of belief.
Attribution of intentions. Research on chimpanzees, capuchin monkeys, and Tonkean macaques (Macaca tonkeana) has provided evidence that they are sensitive to the goals and intentions of others and are able to differentiate between when an experimenter is unable to give them food and when the experimenter is merely unwilling to.
Attribution of knowledge (and perception). Hare et al. (2001) demonstrates that chimpanzees are aware of what other individuals know. They can also understand what another perceives, and they selectively choose food that is not visible to their competitor.
Attribution of belief. A false-belief test is a comprehensive test used to test for an individual's theory of mind. Understanding language is a key component to being able to understand the directions for the false-belief test, and researchers have had to get creative to utilize this test in the research of non-human primates' theory of mind. Recent technology has enabled researchers to closely resemble the false-belief task without needing to use language. In Krupenye et al. (2016), an advanced eye-tracking technology was used to test for false-belief understanding in apes. The findings of this experiment showed that apes understood and accurately anticipated the behavior of an individual who held a false belief.
There has been some controversy over the interpretation of evidence purporting to show theory of mind ability—or inability—in animals. Part of this debate has involved whether animals are really able to associate cognitive abilities with another individual, or if they are just able to read and understand behavior. Povinelli et al. (1990) points out that most evidence in support of great ape theory of mind involves naturalistic settings to which the apes have already adapted through past learning. Their "reinterpretation hypothesis" explains away evidence supporting attribution of mental states to others in chimpanzees as merely evidence of risk-based learning; that is, the chimpanzees learn through experience that certain behaviors in other chimpanzees have a probability of leading to certain responses, without necessarily attributing knowledge or other intentional states to those other chimpanzees. They have proposed testing theory of mind abilities in great apes in novel, and not naturalistic settings. Experimenters since then, such as demonstrated in Krupenye et al. (2016), have gone to extensive lengths to control for behavioral cues by placing the apes in novel settings as suggested by Povinelli and colleagues. Research has shown that there is substantial evidence for some non-human primates to track the mental state, like desires and beliefs, of other individuals that cannot be deduced to a response of learned behavioural cues.
Communication in the wild
For most of the 20th century, scientists who studied primates thought of vocalizations as physical responses to emotions and external stimuli. Primate vocalizations representing and referring to events in the external world were first documented in vervet monkeys in 1967. Calls with specific intent, such as alarm calls or mating calls, have been observed in many orders of animals, including primates. Researchers began to study vervet monkey vocalizations in more depth as a result of this finding. In the seminal study, researchers played recordings of the three different vocalizations vervet monkeys use as alarm calls for leopards, eagles, and pythons. The monkeys responded to each call accordingly: climbing trees for leopard calls, scanning the sky for eagle calls, and looking down for snake calls. This indicated clear communication of both the presence of a predator nearby and what kind of predator it is, each call eliciting a specific response. The use of recorded sounds, as opposed to observations in the wild, gave researchers insight into the fact that these calls carry meaning about the external world. The study also produced evidence suggesting that vervet monkeys improve in their ability to classify different predators, and to produce the alarm call for each, as they get older. Further research into this phenomenon has found that infant vervet monkeys produce alarm calls for a wider variety of species than adults: adults use alarm calls only for leopards, eagles, and pythons, while infants produce them for land mammals, birds, and snakes, respectively. Data suggest that infants learn how to use and respond to alarm calls by watching their parents.
A different species, the wild Campbell's monkey, has also been observed to produce sequences of vocalizations that require a specific order to elicit a specific behaviour in other monkeys; changing the order of the sounds changes the resulting behaviour, or meaning, of the call. Diana monkeys were studied in a habituation-dishabituation experiment that demonstrated an ability to attend to the semantic content of calls rather than simply to their acoustic features. Primates have also been observed responding to the alarm calls of other species. The crested guineafowl, a ground-dwelling bird, produces a single type of alarm call for all predators it detects. Diana monkeys have been observed to respond to the most likely reason for that call, typically a human or a leopard, based on the situation: if they deem a leopard the more likely predator in the vicinity, they produce their own leopard-specific alarm call, but if they think it is a human, they remain silent and hidden.
Non-human primates can understand the call systems of other monkey species, but only to a limited extent. Diana monkeys and Campbell's monkeys, for example, often form mixed-species groups, yet they seem to respond only to each other's danger-related calls.
Tool use
There are many reports of primates making or using tools, both in the wild or when captive. Chimpanzees, gorillas, orangutans, capuchin monkeys, baboons, and mandrills have all been reported as using tools. The use of tools by primates is varied and includes hunting (mammals, invertebrates, fish), collecting honey, processing food (nuts, fruits, vegetables and seeds), collecting water, weapons and shelter.
Tool making is much rarer, but has been documented in orangutans, bonobos and bearded capuchin monkeys. Research in 2007 shows that chimpanzees in the Fongoli savannah sharpen sticks to use as spears when hunting, considered the first evidence of systematic use of weapons in a species other than humans. Captive gorillas have made a variety of tools. In the wild, mandrills have been observed to clean their ears with modified tools. Scientists filmed a large male mandrill at Chester Zoo (UK) stripping down a twig, apparently to make it narrower, and then using the modified stick to scrape dirt from underneath its toenails.
There is some more recent controversy over whether tool use represents a higher level of physical cognition, although this contradicts a long-held view of tool use as conferring the highest cognitive status in the animal world. One study suggests that primates could use tools in response to environmental or motivational cues, rather than through an understanding of folk physics or a capacity for future planning.
Problem solving
In 1913, Wolfgang Köhler started writing a book on problem solving titled The Mentality of Apes (1917). In this research, Köhler observed the manner in which chimpanzees solve problems, such as that of retrieving bananas when positioned out of reach. He found that they stacked wooden crates to use as makeshift ladders in order to retrieve the food. If the bananas were placed on the ground outside of the cage, they used sticks to lengthen the reach of their arms.
Köhler concluded that the chimps had not arrived at these methods through trial-and-error (which American psychologist Edward Thorndike had claimed to be the basis of all animal learning, through his law of effect), but rather that they had experienced an insight (sometimes known as the Eureka effect or an "aha" experience), in which, having realized the answer, they then proceeded to carry it out in a way that was, in Köhler's words, "unwaveringly purposeful."
Asking questions and giving negative answers
According to numerous published studies, apes are able to answer human questions, and the vocabulary of the acculturated apes contains question words.
Despite these abilities, the published research literature did not include instances of apes asking questions themselves; in human-primate conversations, questions were exclusively asked by humans. Ann and David Premack designed a methodology to teach apes to ask questions in the 1970s: "In principle, interrogation can be taught either by removing an element from a familiar situation in the animal's world or by removing the element from a language that maps the animal's world. It is probable that one can induce questions by purposefully removing key elements from a familiar situation. Suppose a chimpanzee received its daily ration of food at a specific time and place, and then one day the food was not there. A chimpanzee trained in the interrogative might inquire 'Where is my food?' or, in Sarah's case, 'My food is?' Sarah was never put in a situation that might induce such interrogation because for our purposes it was easier to teach Sarah to answer questions".
A decade later, the Premacks wrote: "Though [Sarah] understood the question, she did not herself ask any questions—unlike the child who asks interminable questions, such as What that? Who making noise? When Daddy come home? Me go Granny's house? Where puppy? Toy? Sarah never delayed the departure of her trainer after her lessons by asking where the trainer was going, when she was returning, or anything else".
Joseph Jordania suggested that the ability to ask questions could be the crucial cognitive threshold between human and other ape mental abilities. Jordania suggested that asking questions is not a matter of the ability to use syntactic structures, but primarily a matter of cognitive ability.
g factor of intelligence in primates
The general factor of intelligence, or g factor, is a psychometric construct that summarizes the correlations observed between an individual's scores on various measures of cognitive abilities. First described in humans, the g factor has since been identified in a number of nonhuman species.
Primates in particular have been the focus of g research due to their close taxonomic links to humans. A principal component analysis run in a meta-analysis of 4,000 primate behaviour papers including 62 species found that 47% of the individual variance in cognitive ability tests was accounted for by a single factor, controlling for socio-ecological variables. This value fits within the accepted range of the influence of g on IQ.
However, there is some debate as to whether g influences all primates equally. A 2012 study identifying individual chimpanzees that consistently performed highly on cognitive tasks found clusters of abilities instead of a general factor of intelligence. This study used individual-based data and claimed that its results are not directly comparable to those of previous studies that used group data and found evidence for g. Further research is required to identify the exact nature of g in primates.
See also
Animal cognition
Cognitive tradeoff hypothesis
Deep social mind
Hominid intelligence
Great ape language
Primate empathy
Primate archaeology
Pig intelligence
References
Further reading
Animal intelligence
Zoology
Primatology | Primate cognition | Biology | 2,808 |
914,773 | https://en.wikipedia.org/wiki/Traffic%20message%20channel | Traffic Message Channel (TMC) is a technology for delivering traffic and travel information to motor vehicle drivers. It is digitally coded using the ALERT C or TPEG protocol into Radio Data System (RDS) carried via conventional FM radio broadcasts. It can also be transmitted on Digital Audio Broadcasting or satellite radio. TMC allows silent delivery of dynamic information suitable for reproduction or display in the user's language without interrupting audio broadcast services. Both public and commercial services are operational in many countries. When data is integrated directly into a navigation system, traffic information can be used in the system's route calculation.
Development
Detailed technical proposals for an RDS-TMC broadcasting protocol were first developed in the European Community's DRIVE programme research project RDS-ALERT, a partnership of the BBC, Philips, Blaupunkt, TRRL and CCETT led by Castle Rock Consultants (CRC). The main goal of the project was to develop and build consensus upon a draft standard for broadcasting RDS-TMC traffic messages in densely coded digital form.
An initial proposal for defining RDS-TMC data fields had been made to the European Conference of Ministers of Transport (ECMT) in Madrid, based on a scheme developed by CCETT and Philips in the Eureka-sponsored CARMINAT research project. This proposal required the use of at least two 104-bit RDS data groups for each message. Within these RDS Groups, 32 bits per group would be used for traffic data, giving a total traffic message length of 64 bits. A second proposal, by Bosch-Blaupunkt and the German Road Research Institute BASt, sought to use just a single RDS Group per traffic message. Then, in 1987, the CEC invited Castle Rock Consultants to lead a joint team that would take TMC development a stage further. CRC produced a proposal for a modified BASt/Blaupunkt single group message definition, which became known as the ALERT A coding scheme. Tests also continued at CCETT and BBC on the CARMINAT approach, which formed the basis of an alternative ALERT B coding proposal.
A major question addressed in the Alert A scheme was the total number of traffic event locations to be coded. Initial estimates suggested that, in Europe, a maximum of 65,000 significant junctions might be needed for the Federal Republic of Germany. An efficient coding system would require only 16 bits to code these, simply by numbering each intersection from 1 to 65535. Calculations for France, Britain and elsewhere suggested that around 30,000 to 40,000 locations should be enough for most European national or U.S. statewide systems. A standard 16-bit location code was, therefore, adopted for inter-urban networks. The Madrid proposal of 1987, by comparison, had required 33 bits to code problem location, with separate fields for road number, road class, area of the country, etc. These 33 bits gave a theoretical total of 8.5 billion location codes, most of which could never be used.
After consultation with ECMT, a combined approach was developed called the ALERT C Protocol that aimed to combine the best features of each approach. ALERT A and C replaced the CARMINAT message categories cause, effect and advice with a single 11-bit basic message code. This permits up to 2048 basic message phrases to be broadcast. The new ALERT protocols significantly increased the efficiency of message coding, shortening the basic message content from 18 to 11 bits. In conjunction with the revised location codes, which saved 17 of the 33 bits previously assigned, this allowed the great majority of traffic messages to be broadcast using a single TMC data sequence. In 1991, ECMT recommended moving forward with further testing of the protocols. The work continued with a larger consortium including Volvo and Ford Motor Company in the European Commission's DRIVE II project ATT-ALERT.
Operation
Each traffic incident is binary-encoded and sent as a TMC message. Each message consists of an event code, location code, expected incident duration, affected extent and other details.
The event code is an 11-bit value selecting one of up to 2048 event phrases (of which 1402 were in use as of 2007) that the receiver translates into the user's language. Some phrases describe individual situations such as a crash, while others cover combinations of events such as construction causing long delays.
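Since the event code is only an index, a receiver can render the same broadcast in any language for which it carries a phrase table. The following is a minimal sketch of that lookup; the codes, phrases, and names (PHRASES, render_event) are invented for illustration, and the real code-to-phrase assignments are defined by the TMC standard.

```python
# Minimal sketch of receiver-side event-code translation. The 11-bit
# event code indexes a phrase table shipped with the receiver, so one
# broadcast can be displayed in any supported language. All codes and
# phrases below are illustrative, not actual TMC assignments.

PHRASES = {
    "en": {101: "stationary traffic", 201: "accident"},
    "de": {101: "stehender Verkehr", 201: "Unfall"},
}

def render_event(event_code: int, lang: str = "en") -> str:
    table = PHRASES.get(lang, PHRASES["en"])
    return table.get(event_code, "unknown event %d" % event_code)

print(render_event(201, "de"))  # Unfall
print(render_event(101))        # stationary traffic
```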
In Europe, location code tables are maintained on a national level. Those location tables are integrated in the maps provided by in-vehicle navigation system companies such as HERE Technologies and TomTom and by vehicle manufacturers such as Volvo. In other countries, such as the U.S. and Canada, private companies maintain the location tables and market TMC services commercially.
Sources of traffic information typically include police, traffic control centers, camera systems, traffic speed detectors, floating car data, winter driving reports and roadwork reports.
Coordination
TMC-Forum, a non-profit organization whose members included service providers, receiver manufacturers, car manufacturers, map vendors, broadcasters (public and private), automobile clubs, and public authorities, was a forum to discuss traffic information related matters. It maintained the TMC-Standard (ISO 14819). On 11 November 2007, the TMC-Forum and the TPEG-Forum merged into the Traveller Information Services Association (TISA). TISA has taken over all of TMC-Forum's activities and responsibilities.
Functionality
RDS-TMC is a low-bandwidth system. Each RDS-TMC traffic message comprises 37 data bits sent at most 1–3 times per second, using a low capacity data channel primarily designed for FM radio station name identification and tuning. Compressing traffic incident descriptions in multiple languages into 16 bits for a location, 11 bits for an event description code, plus 3 bits for the event's extent and a few extra bits for the duration/system management was necessary due to pre-existing constraints in the RDS standard. Almost all the other broadcast data bits were already assigned from each 104-bit RDS Group.
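As a rough illustration of how tightly a message packs into those 37 bits, the sketch below splits a 37-bit integer into the field widths named above: 16 bits of location, 11 bits of event, 3 bits of extent, and the remaining 7 bits treated here as duration/system management. The field ordering is an assumption made for illustration only; the actual layout, including how the bits are spread across RDS groups, is fixed by the ISO 14819 standard.

```python
# Sketch: split a 37-bit TMC payload into the field widths given above
# (16 + 11 + 3 + 7 = 37). The ordering of fields is assumed for
# illustration; ISO 14819 defines the real bit layout.

def unpack_tmc_message(bits: int) -> dict:
    return {
        "location": bits & 0xFFFF,       # 16 bits: location code
        "event": (bits >> 16) & 0x7FF,   # 11 bits: event code (0..2047)
        "extent": (bits >> 27) & 0x7,    # 3 bits: extent
        "mgmt": (bits >> 30) & 0x7F,     # 7 bits: duration/management
    }

# Synthetic example message; all field values are illustrative.
msg = (5 << 30) | (2 << 27) | (401 << 16) | 12345
print(unpack_tmc_message(msg))
# {'location': 12345, 'event': 401, 'extent': 2, 'mgmt': 5}
```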
A major design challenge of RDS-TMC was to find a way of describing traffic event locations across an entire state or country. Such a system could not convey precise latitude-longitude data (available 25 years later using GPS in applications such as Waze). Instead, RDS-TMC relies on the use of location tables that point only to significant highway junctions. The precision of each traffic event's location is low compared to that of modern smartphone devices. The user's navigation system locates a driver to about 3 metres (10 feet), but only knows, for example, that a crash took place between Exit 3 and Exit 4, northbound on a particular motorway. This limitation requires that traffic events (accidents, congestion, burst water mains, faulty traffic lights, etc.) have to be superimposed onto maps by mapping the reported location to the TMC location table. If the nearest location table point lies at some distance from the exact position of the incident, then the report appears on a section of main road between two junctions instead of at its exact location. The limited precision can make a significant difference as to how navigation devices interpret the incident, potentially leading to an occasional poor route choice.
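Because a message points only at a coded junction, the receiver reconstructs the affected stretch by walking from that point through its neighbours in the location table, using the extent field as the number of steps. The sketch below shows the idea with an invented table fragment; NEXT_POINT, NAMES, and the codes are all hypothetical.

```python
# Sketch: resolve a (location, extent) pair against a toy location
# table. Real tables chain junction points by offsets in both
# directions; this fragment stores only the next point in the
# positive direction. All codes and names are invented.

NEXT_POINT = {12345: 12346, 12346: 12347, 12347: 12348}
NAMES = {12345: "Exit 3", 12346: "Exit 4", 12347: "Exit 5", 12348: "Exit 6"}

def affected_stretch(location: int, extent: int) -> list:
    points = [location]
    for _ in range(extent):
        nxt = NEXT_POINT.get(points[-1])
        if nxt is None:
            break
        points.append(nxt)
    return [NAMES[p] for p in points]

print(affected_stretch(12345, 1))  # ['Exit 3', 'Exit 4']
```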
In the US and elsewhere, systems such as CARS (Condition Acquisition and Reporting System) can pinpoint event locations or their start and end points with one-metre precision. These real-time data are published in XML for access by companies such as Google and TomTom. These incident reports can be delivered to mobile phones and handheld devices in vehicles. However, major real-world traffic incidents usually spread from hundreds of metres up to many kilometres, once traffic backups have developed. On motorways and other major roads, there are typically few or no detours available between significant junctions, which are all included in the TMC location tables. Many traffic report locations are only approximate, and as queues grow, locations can change swiftly. So GPS-based systems are more precise, but are not necessarily more accurate.
Security
In April 2007, two Italian security researchers presented research about RDS-TMC at the CanSecWest security conference. The presentation, entitled "Unusual Car Navigation Tricks", raised the point that RDS-TMC is a wireless cleartext protocol and showed how to build a receiver and transmitter with inexpensive electronics capable of injecting false and potentially dangerous messages.
Detailed instructions and schematics were published in Issue No. 64 of Phrack hacking magazine.
The TMC Forum responded by stating that the effects of any 'pirate' TMC broadcasts would be non-existent for users not on routes affected by fake obstruction messages, and that such broadcasts would directly interfere with that country's TMC carrier station, which would lead to criminal or civil liability. It stated that it was therefore unlikely that such activity would take place.
Actual RDS-TMC attacks have been known to occur, for instance in Belgium in 2019, where road users were warned of "air raids on the E40 road" in March and that "firefights broke out on the E17" in August. Official government advice was to ignore these messages; local police services admitted that locating the source of the transmissions was going to be difficult and that – even though communication laws were clearly broken – arrests or convictions were unlikely.
Devices and navigation programs
An RDS-TMC receiver is a special FM radio tuner that can decode TMC data. Satellite TMC receivers use a dedicated data channel that is broadcast as part of much larger broadcast digital audio channels. TMC data is decoded by matching event and location codes against look-up tables of phrases and locations. The results can be translated into audio or visually displayed on a Sat nav device. The look-up tables must be implemented in a service-specific database mapped to geographic routes and intersections. As with the navigation systems themselves, periodic upgrades are needed as the road system changes. This provides opportunities for vendors to generate revenue.
The technical concepts of RDS-TMC originated about 30 years ago, initially by Blaupunkt and Philips. With European Commission funding, the BBC, Transport Research Laboratory and CCETT came together in a team led by Castle Rock Consultants to develop the standard. More recently, personal navigation devices (PND) have emerged as an alternative way to deliver traffic information via mobile devices employing GPS.
Automobile companies continue to roll out RDS-TMC products. One reason is that the use of mobile devices is attracting legislative attention due to concerns about driver distraction. Like car radios, in-vehicle navigation systems have not so far generated the same concerns and may continue to outsell handheld solutions.
Higher-end models of personal navigation assistants come with a built-in TMC receiver, and depending on the country, the service is available in Eclipse, Garmin, iPhone (Navigon), Navman, Navway, Mio, Pioneer, TomTom and Uniden navigation systems, as well as in Volvo, BMW and Ford Falcon navigation systems, among many others.
TMC adapters can add TMC functionality to mobile navigation systems with integrated GPS receivers. They can include a Bluetooth or USB connection. The adapter passes traffic messages to the navigation software for route calculations. The adapters generally include an FM/TMC antenna connector (2.5 mm phone jack or 50-ohm MCX jack). Compatible navigation programs include AvMap, Destinator PN, Falk Navigator TMC Edition (special version for the MyGuide Navigator 6500XL TMC Bundle), GoPal, iGO, Mireo, Navigon MN5, Route 66, and Sygic.
Coverage
In some places, TMC coverage is smaller than that of the radio programme carrying the TMC service, so white spots exist. For example, in the US, one of the two commercial TMC services is run by Clear Channel Communications, whose 95 FM-station urban markets typically have some level of traffic information service. Another is Sirius Satellite Radio, which covers all of North America, including sparsely populated rural areas and near-empty deserts. Although vendors are beginning to make arrangements with information systems such as CARS, operated by state police and state departments of transportation, coverage is likely to remain sketchy in some states during the next few years.
Operation worldwide
The following countries provide a TMC service:
Australia
Intelematics Australia broadcasts a national encrypted RDS-TMC service focused initially on urban Australia under the brand 'SUNA Traffic Channel'. The service reaches around 85% of urban Australia, using commercial FM broadcasters in seven cities, as well as via XML for online and smartphone applications. The service is available on GPS navigation systems including Navman, Mio, Uniden, iPhone (Navigon & Sygic), Eclipse, Pioneer, Alpine and Clarion. SUNA Traffic Channel is also available in Ford, Holden, Honda, Toyota, Nissan, Mercedes-Benz, and many other navigation systems. SUNA is currently the only source of comprehensive metropolitan congestion monitoring content in Australia; its proprietary technology interfaces with traffic light control systems. The SUNA broadcast service is fully compliant with both RDS and TMC. However, since the broadcast is encrypted, it does not work on in-car GPS navigation systems that do not have a commercial arrangement with SUNA.
Austria
In Austria, ORF broadcasts a free service on radio channels Ö1, Ö2 (9 regional channels), Hitradio Ö3 and FM4. It is supported by the Federal Ministry for Traffic, Innovation and Technology (BMVIT). ASFINAG is responsible for the location table, currently version 2.1, which received updates to handle increased use during Euro 2008. Its location table contains around 8,000 codes.
Baltic region
Mediamobile Nordic plans to broadcast traffic information in the Baltic region. As of 2014, no service is reportedly available in Latvia, Lithuania or Estonia, although location tables (maintained by Destia) were certified by TISA in 2008. As of 2017, an unencrypted TMC service is available on Viker Raadio in Estonia. Mediamobile has a traffic information center in Estonia for the Nordic region.
Belgium
Belgium hosts three TMC services: TMOBILIS nationwide, TIC-VL in Flanders and an RTBF service in Wallonia and Brussels. Except for TMOBILIS, they are all currently open services.
TMOBILIS is provided by Be-Mobile and Touring Mobilis. It is the only fully Belgian service. It combines all Belgian sources from the Flemish, Walloon and Brussels government, police stations, a national Floating Car Data system based on GPS positions from vehicles and the Touring Mobilis call center. It is nationally broadcast by both VRT on Studio Brussel for Flanders and RTBF on Classic 21 in Wallonia and Brussels.
TIC-VL is broadcast by VRT on Radio 2 and uses content from the Vlaams Verkeerscentrum. Coverage of content and transmissions is limited to Flanders.
In Wallonia and Brussels, CLASS.21 is broadcast by RTBF on Classic 21. The service is from the Centre PEREX of the Service public de Wallonie (SPW, formerly MET) in collaboration with TMC4U. Coverage of transmissions and content are limited to Wallonia and Brussels.
Technum creates the location tables by order of the regional communities. Since December 2004, broadcast messages have used location table version 1.4b, which added N-roads. The latest version is 2.9.
Bulgaria
A national TMC service for Bulgaria started beta testing in December 2010. The service is provided by TrafficNav, a Budapest based traffic information company in cooperation with the broadcast hardware manufacturer Kvarta. Data sources include real time traffic information provided by tix.bg, presently for Sofia. The service can be accessed by most Garmin navigation devices and will soon be supported in several factory car navigation devices.
Colombia
Legislation does not allow the insertion of external digital data into analogue FM transmissions, so the use of RDS-TMC technology is banned.
Czechia
TMC developments in the Czech Republic are coordinated by CEDA, which is responsible for the location table. Its current version is 4.1, containing more than 16,000 records.
There were three providers of TMC services in the Czech Republic:
JSDI – transmitted on Český rozhlas Radiožurnál – is a free TMC service provided by the Czech Road and Motorway Directorate (ŘSD ČR). Content consists of closures, road restrictions and winter maintenance across the country, accident information from rescue services and detailed content from TIC Prague. In December 2022, the service moved from the FM network of Český rozhlas Vltava to Český rozhlas Radiožurnál due to the latter's much better national coverage.
DIC PRAHA – transmitted on the frequency of Český rozhlas Plus (92.6 MHz) – provided detailed traffic information for Prague only
Closed TMC services:
TELEASIST – (TMC service switched off in 2017) was transmitted on the radio network of Český rozhlas Radiožurnál and available countrywide. Information was provided by Teleasist together with Global Assistance.
Denmark
The free TMC service DK-TMC in Denmark is operated by Vejdirektoratet or DRD (Danish Road Directorate). It is broadcast on DR P1, P3 and P4. DRD is also responsible for the location table. The current version is 11.1 and contains around 10,000 location codes.
Finland
V-Traffic, the commercial service in Finland, has been provided by Mediamobile since 2007. The service covers the largest cities and roads 1–999, spanning the whole country. TMC messages are broadcast nationally on Yle Radio Suomi. V-Traffic uses several information sources to broadcast validated traffic data, including floating car data as well as data from public authorities, traffic cameras, radio stations, road users and several partner companies. The service is encrypted, based on specifications set by TISA. The service is available on the majority of navigation units sold in new cars, such as Volkswagen, Audi, Seat, Opel, Volvo, Toyota, Lexus, Mercedes-Benz, Subaru, Suzuki and Skoda, as well as portable navigation devices from Garmin.
The location table is public and provided by the Finnish Traffic Agency. The latest version, V2.1, contains approximately 28,000 location points.
France
Only commercial RDS-TMC traffic broadcast services are available in France.
The commercial service V-Traffic is provided by Mediamobile, a subsidiary of TDF, with two shareholders: Renault and Vinci. The traffic service provides real-time information on 185,000 km of main roads in France, including all highways (11,800 km). It is transmitted on the frequencies of France Inter and is received nationally (99% national coverage). The service is not encrypted, but restricts access using different location table numbers. In 2010 the company signed a partnership with Météo-France for a common road weather hazard service.
Another commercial service is provided by ViaMichelin and Carte Blanche Conseil, transmitted by the Towercast network (NRJ group). In September 2005 PSA Peugeot Citroën signed a partnership with ViaMichelin.
A free TMC service was offered by Autoroute FM but discontinued in 2012.
Location tables are released by the government agency SETRA. The latest version, 10.1, was certified by TISA in 2013 and released in 2014. It covers 184,913 km of roads in France and contains about 25,984 location data points.
Germany
Germany offers both public and commercial services. The public service is an open, free service that can be received via public radio stations.
The other service, TMCpro, is a pay service provided by Navteq Services GmbH and owned by Navteq. It was developed and originally provided by T-Systems Traffic GmbH, a subsidiary of T-Systems that was bought by Navteq in January 2009. The service went live across Germany at the beginning of 2005. The content is provided by ddg Gesellschaft für Verkehrsdaten mbh, a wholly owned subsidiary of T-Systems Traffic GmbH. It is an encrypted service based on the conditional access specifications of the TMC Forum.
BASt, the German Federal Highway Research Institute, releases the location tables. In version 5.1, all major access roads leading to the football arenas used in the 2006 FIFA World Cup were added. The current version is 10.1 and contains 44,233 location codes.
Greece
A TMC-service has been available in the Attica region since September 2010, to be rolled out for nationwide coverage in 2011. The service is provided by TrafficNav, a Budapest-based traffic information company, and is available on Galaxy Radio and Radio DeeJay. The service can be accessed by most Garmin and Mio navigation devices and is to be featured in several built-in car navigation devices.
A second TMC-service is provided by Be-Mobile, a service provider based in Belgium. The service is available via Sentra FM.
Hungary
A national TMC-service has been available since 2008. The service is provided by TrafficNav, the Budapest traffic information company and is available on the national FM networks of Petőfi Radio (Channel 2 of Magyar Rádió, Hungary's State Radio). The service is encrypted and can be accessed by most navigation devices manufactured by TomTom, Garmin, Navigon, Mio and Navon, and is featured in several built-in car navigation devices, including selected models of Volvo, Toyota and Lexus. The service is based on V2.0 of the Hungary location table.
Indonesia
In October 2009, GEWI Europe GmbH & Co. KG released the TISA certified Location Table version 1.0 for Indonesia. GEWI's updated Location Table version 1.1 was certified by TISA on 14 Mar 2012.
In September 2011, iQios Sejahtera launched the first real-time traffic service in Indonesia.
Iran
TMC service is currently unavailable, although the infrastructure is in place, originally built for use by the Iranian National Broadcasting Company (IRIB). The service is expected to become publicly available in 2020.
The Rayan Amin company has begun research in this area.
Ireland
TMC for Dublin went live in November 2010. The service was extended to provide national coverage later that year. The service is provided by TrafficNav, the Budapest traffic information company and is available on RTÉ Radio 1, a national FM network of Ireland's State Radio. Data sources include real time traffic information provided by Dublin City Council. The service can be accessed by most Garmin navigation devices and will soon be featured in several built-in car navigation devices.
Israel
A commercial RDS-TMC service was initiated by Decell Technologies in February 2011. Decell provides national coverage broadcast by several regional radio stations. The content distribution relies on Decell's TISA certified TMC location table 36. Decell provides real-time flow and incident traffic data on RDS-TMC to all leading navigation companies.
Italy
A free public RDS-TMC service became available in Italy on 1 July 1998, offered by RAI. CCISS (National Traffic Information Centre) provides the service. RAI broadcasts on Rai Radio 1, Radio 2 and Radio 3 FM. This service covers the entire country. Additionally, RAI operates Rai Isoradio, a dedicated traffic news station focusing heavily on motorway conditions, which transmits on the fixed frequency of 103.3 FM.
A commercial service is provided by radio station RTL 102.5 in cooperation with InfoBlu. This service covers 90% of the population of Italy, and is still expanding coverage. It has been encrypted since 2007.
The Italian location table, provided by RAI-CCISS, was previously in version 2.1 with around 12,500 codes.
As of April 2018, the location table was at version 4.3 with more than 41,000 codes. It includes all highways, state roads, county roads and urban roads for the main towns.
Netherlands
In the Netherlands there are private and public TMC services.
The private service is provided by VID in cooperation with Be-Mobile, and broadcast via Radio 2 and 3FM.
One free service is provided by ANWB in collaboration with technology provider Simacan. The service is broadcast via radio stations Sky Radio, Radio Veronica and BNR Nieuwsradio.
Another free service is provided by VID in cooperation with Be-Mobile, and broadcast via Radio 1 and Radio 538.
Location tables come from Nationaal Dataportaal Wegverkeer. The current version is 9.2, in use since 4 June 2014.
New Zealand
The New Zealand Automobile Association broadcasts traffic alerts via FM broadcast radio in and around Greater Auckland, Wellington and Christchurch.
Norway
Since 2009, NRK has been testing an open TMC service.
The open service is transmitted using the P1 frequency. NRK broadcasts information on road works, planned closures and winter-closed mountain passes. Updates on accidents and other unforeseen events are currently provided Mon–Fri 05:30–22:00, Sat 09:30–17:00 and Sun 13:00–22:00.
Commercial radio station P4 and Mediamobile provide a TMC service called V-Traffic in Norway. This service is encrypted but free for all private users when the navigator manufacturer has included it in its product.
The Norwegian FM network was scheduled to be closed in January 2017, and it was foreseen that all FM-based TMC services would close with it. Since 2014, P4 and Mediamobile have been running a digital radio service that replaces RDS-TMC in Norway. This service goes under the name V-Traffic DAB.
Statens vegvesen, the Norwegian Public Roads Administration (NPRA), provides location tables.
Poland
On 1 May 2010, commercial TMC service became available in Poland, transmitted by radio station RMF FM. The service, called V-Traffic, is provided by Mediamobile, a subsidiary of the TDF Group, one of the biggest providers of broadcast services in Europe, based in Paris.
In Poland, the service is available on PND devices from Garmin, Mio and Becker, as well as in embedded car navigation systems used by Toyota, Volvo and Ford. The service is built on more than 100 different sources, processed automatically (floating car data) or manually by operators in the Mediamobile Traffic Information Centre in Warsaw. Guaranteed minimum signal coverage is 95% of the population and 93% of the country's area.
In November 2012, CE-Traffic launched commercial TMC service in Poland – Premium RDS-TMC. CE-Traffic partnered with EuroZET media group that is a part of Lagardère Group, in order to provide connectivity country-wide. The service is based on CE-Traffic data generated from Floating Car Data systems fused with journalistic information. It is available for major interconnecting roads, urban streets in 15 major cities, and other roads commonly used by drivers as shortcuts or alternative routes.
The location table includes future changes in the backbone network until the end of 2013.
Portugal
Since March 2011, TMC has been carried on RFM radio, provided by Be-Mobile. Be-Mobile released TMC table version 1.1. Navteq map editions since Q3/2011 provide TMC coverage for Portugal. There are now three TMC channels in Portugal.
Recently, Summer Blast radio was added as a TMC carrier, providing coverage for Portugal.
Romania
Starting 30 May 2012, a TMC service is available in Romania on the private radio station ProFM. The service is provided by TrafficNav.
TraficOK was the first TMC system tested and implemented in Romania. The system uses a location table of over 11,500 entries, which provides nearly full coverage of the Romanian road infrastructure. The table was developed by AROBS and certified by TISA – Traveller Information Services Association. Data on traffic flow, events, weather warnings, road repairs and traffic jams is collected from several sources. The TraficOK project was developed by AROBS Transilvania Software and Be-Mobile.
Messages are sent via Europa FM radio stations (in FM bandwidth) to various hardware equipment (navigation systems, mobile phones, etc.) equipped with TMC modules.
TraficOK was planned to be available in Bucharest, Ploiești, Pitești, Constanța, Brașov, Cluj-Napoca, Târgu Mureș, Oradea, Arad, Timișoara, Iași and Bacău.
Singapore
In June 2006, GEWI Europe GmbH & Co. KG released the first TISA certified TMC Location Table for Singapore. Its Singapore company, GEWI Asia Pacific Pte. Ltd. offered the service. The latest location table version 1.3, updated and certified in March 2014, includes the Marina Coastal Expressway, more than 150 car park locations within the Central Business District and downtown areas. GEWI's traffic services are available on several models of Smartphones, PAPAGO!, Garmin and TomTom navigation devices, Honda and Toyota in-car navigation systems.
In Nov 2010, the Land Transport Authority announced the release of the Location Table for Singapore. Quantum Inventions offers a traffic data service based on this location table and includes traffic incidents information, traffic speeds, parking availability, weather, road closures, etc. Various brands of GPS systems using the Galactio software provide these dynamic data in the navigation system.
Slovakia
There are two RDS-TMC services running in Slovakia.
"SSC RTVS" – is a free public TMC service transmitting on the network of Slovensky Rozhlas – Radio Slovensko. Transmitted content consist mainly of static information – roadworks and closures. Information transmitted in this service are shown also on web www.zjazdnost.sk of Slovenska sprava ciest.
"DECELL (SK)" – is a paid service, which was launched in May 2013. Messages are transmitted countrywide on the private station Fun Radio. Information consists of current traffic situation provided by CE-Traffic a.s and is available exclusively for owners of Garmin navigation devices.
In September 2018, a new version 4.0 of the location tables was issued, covering all highways and all important roads. This version replaced the older version 3.2, released in 2014.
Slovenia
A national TMC-service became available in June 2009. The service is provided by TrafficNav, the Budapest traffic information company and is available on two national FM networks of Radiotelevizija Slovenija, Slovenia's National Public Radio (ARS and Radio SI). The service can be accessed by most Garmin, Navigon and Navo navigation devices and will soon be featured in several factory fitted car navigation devices. The service is based on V3.0 of the Slovenia location table.
The Motorway Company in the Republic of Slovenia prepared a new location table DARS 702–35, V1.1 which is freely available for integration in maps. The service is also transmitted on two national FM networks of Radiotelevizija Slovenija, Slovenia's National Public Radio (Prvi and Radio Val 202).
South Africa
Garmin was the first to offer the service in South Africa, in time for the 2010 FIFA World Cup, with Navigon coming on board shortly thereafter.
TMC services in South Africa have been available since late 2009, a service provided by Altech Netstar. In partnership with INRIX, Altech Netstar broadcasts their Premium Traffic Service throughout Gauteng, Kwa-Zulu Natal and Western Cape Peninsula. The company planned to launch in Orange Free State, Eastern Cape and Mpumalanga in 2012. Altech Netstar offers a commercial service to its OEM and Commercial Customers. Altech Netstar broadcasts XML services to their device partners and wholesale customers.
Spain
A TMC service is available in Spain on RNE 3. It is provided by SCT, the traffic management operator in the Catalonia Autonomous Community; DT in the Basque Country Autonomous Community; and DGT (Traffic General Directorate) for the rest of the country.
Road network coverage is the motorways, national roads and first level roads that belong to the Autonomous Communities. RACC is working on urban services, starting with Seville and Barcelona to broadcast on RNE 2.
Location tables are provided by DGT, Dirección General de Tráfico. The current version is 2.1 and contains about 7,750 entries.
Sweden
The most used TMC service in Sweden is run by Mediamobile under the name V-Traffic Premium RDS-TMC. It is a fully encrypted service with a focus on congestion, slippery-road warnings and other safety-related messages for the driver.
A public service was available in Sweden. The Swedish Transport Administration (Trafikverket) was responsible for the location tables. Version v2.3 contained about 17,337 data points and covered 1,138,000 km of the Swedish road network.
The public service has not been developed since 2009 when the government decided to stop distribution of end user services. The service ended in August 2022.
Sweden is divided into 8 broadcasting zones to avoid transmitting traffic information that is not useful at that location. They cover the European, national and major county highways. The service is broadcast on Sveriges Radio P3 radio station and covers 98 percent of Sweden.
Swedish Transport Administration (Trafikverket) has an information page for Trafikverket RDS-TMC in Swedish.
Switzerland
A TMC service is available in Switzerland. The broadcaster is SRG SSR idée suisse (the Swiss Broadcasting Corporation), which transmits TMC on FM chain 1 and FM chain 3 all over Switzerland.
In German speaking areas: Radio SRF 1 (G) / Radio SRF 3 (G) / La 1ère (F) / Rete Uno (I) partly
In French speaking areas: La 1ère (F) / Couleur 3 (F) / Radio SRF 1 (G) / Rete Uno (I) partly
In Italian speaking areas: Rete Uno (I) / Rete Tre (I) / Radio SRF 1 (G) / La 1ère (F)
Its subsidiary Viasuisse operates the service.
Location codes are the responsibility of the Swiss Federal Roads Authority FEDRO (Bundesamt für Strassen), but B+S Ingenieur distributes the location tables. Version 5.5 contains around 10,000 codes.
Taiwan
The Taiwanese police radio station and Ministry of Transportation and Communication (MOTC) both broadcast RDS-TMC traffic data. It is currently available for TomTom, Garmin, Panasonic, PaPaGo and Mio devices.
Turkey
Turkey has three RDS-TMC services.
1: HERE (previously NAVTEQ) has the broadest RDS-TMC service in Turkey, covering the largest 11 cities in the country: Istanbul, Ankara, Adana, Bursa, Mersin, Izmir, Eskisehir, Antalya, Konya, Kayseri and Gaziantep. The service was launched in July 2012.
2: A second TMC service was launched by Basarsoft and TrafficNav in 2012. RDS-TMC is available only in cities where traffic congestion is a major problem: broadcasting is done in Istanbul, Ankara, Izmir, Antalya and Bursa.
3: A third TMC service was launched by Be-Mobile and Infotech in 2012. The RDS-TMC service is available in Istanbul, Ankara, Izmir, Bursa and Antalya.
All three Turkish TMC services are paid services that users can access on navigation devices. Both PND and automotive products use the TMC service in Turkey.
United Kingdom
INRIX provides a commercial TMC service, iTMC, in the United Kingdom. It is broadcast nationally on Classic FM and other commercial radio stations. The BBC Charter prohibits the BBC from carrying a commercial service. ITIS provides traffic data on RDS-TMC to major automotive companies (BMW, Mercedes, Toyota, Ford, Renault, Jaguar Land Rover and others). The price of the service is included in the price of the car or navigation system.
This system uses floating vehicle data, which includes positional information from over 160,000 fitted fleet vehicles. The data is complemented by journalistic or "incident" data provided by Trafficlink. Trafficlink is owned by ITIS and provides traffic and travel bulletins to BBC Radio and to over 95% of UK commercial radio stations. Incident data includes road works, accidents and closures.
Trafficmaster, a Teletrac Navman brand, operated a national service using local and regional radio broadcaster Global Radio, which provided reception across mainland Britain. This system used road-side infrastructure to measure vehicle travel time between sensors placed a few miles apart, using number plate recognition technology. Data sources used by Teletrac Navman included floating vehicle data from fitted telematics devices, 8,500 under-road inductance loops, over 1,800 CCTV cameras, Congestion Zone charging cameras and police control rooms, plus a proprietary network of ANPR and IR sensor cameras across the road network. Teletrac Navman provided traffic data on RDS-TMC to major automotive OEM brands including VW, Audi, Skoda, Mazda, Chrysler, Honda, Seat and Vauxhall. Teletrac Navman also supplied the Department for Transport with historical traffic data for modelling purposes, used by central and local governments and their sub-contractors when analysing road network improvement opportunities. The Trafficmaster service via RDS-TMC was withdrawn at the end of March 2023.
Both services maintain their own location tables. The current location table version of ITIS is 5.1. The last location table version of Trafficmaster was 3.1.
United States and Canada
In the United States of America, XM Satellite Radio and Sirius Satellite Radio provide TMC service all over the US. Navteq provides traffic data to both providers. Navteq Traffic delivers traffic information and related advertising via RDS and HD signals to navigation devices nationwide. Navteq also provided traffic data sourced from sensors, probes and other technologies in 10+ countries as of December 2009. INRIX, Inc. fuses TMC data with real-time flow information from its crowd-sourced network of floating cars and mobile devices with information from other public and private sources to deliver real-time and predictive traffic information.
iHeartMedia and Tele Atlas have a TMC service called Total Traffic & Weather Network (TTWN) (also iHeartMedia's branding for traffic and weather reporting), using FM RDS in 77 US cities and three Canadian metropolitan areas. These services are both offered by subscription and were initially available to many in-car navigation units via an expansion module purchased separately.
The TomTom RDS-TMC Traffic Receiver acquires information through an FM signal broadcast by Clear Channel's regional providers. By connecting a compatible TomTom navigation device to the RDS-TMC Traffic Receiver, users receive traffic information via the TMC connection. Traffic alerts appear in the traffic bar on the right side of the screen. Tapping the traffic bar reveals further information, such as accident or traffic delays. The RDS-TMC Traffic Receiver is compatible with the TomTom VIA series, GO 920, TomTom GO 720, TomTom ONE XL and TomTom ONE 3rd edition. It integrates RDS-TMC Traffic information with TomTom GO and ONE products.
In addition to these after-market services, six major motor manufacturers offer RDS-TMC as standard in their U.S. vehicles, including Volvo and BMW.
iBiquity HD Radio provides a TMC service based on RDS-TMC.
Other areas
In the Netherlands, a map of current and planned TMC service is available from the Traffic Message Channel.
In Luxembourg, no service is currently planned.
A location table for UAE v4.0 has been certified.
In Turkey, various location tables are available.
In China, investigations are ongoing to choose a technology for its traffic information system. The main candidates are the Japanese system VICS and the European TMC. A TMC location table version 1.0 has already been certified. Following the advancement of 4G/5G and IoT, China is actively planning to use 5G to broadcast traffic messages, and Hong Kong is on the same path. Several consultations on ITS and electric vehicles were underway in 2019.
In Serbia, an RDS-TMC system started operating along the A1 highway in July 2019, though with limitations.
See also
Bluetooth
Google Maps
Intelligent transportation system
Yahoo! Maps Traffic
References
External links
Automotive navigation systems
Broadcast engineering
Radio technology | Traffic message channel | Technology,Engineering | 8,449 |
4,568,160 | https://en.wikipedia.org/wiki/Sphingosine%20kinase | Sphingosine kinase (SphK) is a conserved lipid kinase that catalyzes the formation of sphingosine-1-phosphate (S1P) from the precursor sphingolipid sphingosine. Sphingolipid metabolites, such as ceramide, sphingosine and sphingosine-1-phosphate, are lipid second messengers involved in diverse cellular processes. There are two forms of SphK, SphK1 and SphK2. SphK1 is found in the cytosol of eukaryotic cells and migrates to the plasma membrane upon activation. SphK2 is localized to the nucleus.
Function
S1P has been shown to regulate diverse cellular processes. It has been characterized as a lipid signaling molecule with dual function. On one hand, it exerts its actions extracellularly by binding to the five different S1P receptors that couple to a variety of G-proteins to regulate diverse biological functions, ranging from cell growth and survival to effector functions, such as proinflammatory mediator synthesis. On the other hand, it appears to act as an intracellular second messenger, although the relevant molecular target(s) to which it binds within cells remains to be discovered. The role of S1P in various functions of cells and tissues is established, including regulation of cell survival and motility, angiogenesis, and inflammatory responses. Sphingosine kinases (SphKs) types 1 and 2, the two enzymes identified so far in mammals that produce S1P by ATP-dependent phosphorylation of sphingosine, have therefore received considerable interest.
Sphingolipid metabolism
Sphingolipids are ubiquitous membrane constituents of all eukaryotic cells. In general, the term sphingolipid (SL) refers to any of a number of lipids consisting of a head group attached to the 1-OH of ceramide (Cer). Ceramides consist of a sphingoid base, commonly referred to as a long-chain base (LCB), which is N-acylated. De novo synthesis of LCBs begins with the condensation of palmitoyl-CoA with serine, forming 3-ketosphinganine. This product is then reduced to sphinganine, also known as dihydrosphingosine (dihydro-Sph; 2-amino-1,3-dihydroxy-octadecane). A 14- to 26-carbon fatty acid chain is then added in an amide linkage with the 2-amino group, forming dihydroceramide (dihydro-Cer). A head group, such as phosphocholine or a carbohydrate, can then be added to the 1-OH, forming a sphingolipid, although most sphingolipids of higher eukaryotes contain further modifications of the LCB.
Popular culture
During "100,000 Airplanes", a third season episode of The West Wing, sphingosine kinase is fictitiously described as "the enzyme believed to control all signal pathways to cancer growth." Learning of it inspires the protagonist of the series, President Josiah Bartlet, to consider launching an Apollo program to cure cancer.
References
Further reading
External links
EC 2.7.1
Lipids | Sphingosine kinase | Chemistry | 711 |
899,616 | https://en.wikipedia.org/wiki/Fanavid | Fanavid is a Brazilian glass manufacturer based in Guarulhos, São Paulo, Brazil. Fanavid specializes in all types of auto glass.
History
Fanavid was founded in 1963 by Mansur Jose Farhat in São Paulo, Brazil, as an importer of glass. In 1974 a tempered glass plant was founded in Village Guillermo. In 1980 a curved laminated glass and plain glass plant was founded. In 1992 a plant was opened in Dutra, with another opening there in 1995. In 2002, all of Fanavid's operations were centralized at the Guarulhos plant.
External links
Fanavid Official Website (Portuguese)
Glassmaking companies
Manufacturing companies of Brazil
Companies based in São Paulo (state)
Manufacturing companies established in 1963
Brazilian brands | Fanavid | Materials_science,Engineering | 156 |
55,652,659 | https://en.wikipedia.org/wiki/Erythrophleine | Erythrophleine is a complex alkaloid and ester of tricyclic diterpene acids derived from many of the plants in the genus Erythrophleum. A highly toxic compound, it is most commonly known for its use in West African trials by ordeal. Exposure to erythrophleine can quickly lead to ataxia, dyspnea, heart paralysis, and sudden death. Visible effects of erythrophleine poisoning include induced terror, labored and irregular breathing, convulsions, urination, and vomiting.
Mechanism of action
Once ingested, erythrophleine primarily acts on the body by disrupting the nervous system. It does this by inhibiting Na-K ATPase, an enzyme that breaks down ATP to generate an electric potential by moving sodium and potassium ions against their concentration gradients. In vertebrates, this potential is used to transmit signals across neural synapses. Normally, sodium-potassium pumps move potassium ions into the nerve cell and sodium ions out, but studies have shown that exposure to erythrophleine reduces this action dramatically. This can have a number of compounding effects, including weakened nerve signaling responses and an inhibited ability to maintain cellular homeostasis.
While the exact mechanism of this process is unknown, it is likely similar to that of cardiac glycosides. Cardiac glycosides inhibit Na-K ATPase by stabilizing it in the E2-P transition state, preventing sodium ions from being extruded. They do this by mimicking potassium and tightly binding to Na-K ATPase at the potassium active site. The most well-known of these molecules is the active toxin in foxglove.
Use as an ordeal poison
Erythrophleine's primary use is as a toxin in ancient West African trials by ordeal, called sassywood. The process has largely been outlawed, but due to the limited judicial infrastructure of some West African states, ordeal trials still take place with some regularity. Some prominent economists have even argued that sassywood is a more effective substitute for Liberian courts, given the decrepit nature of the country's judicial system.
The main trial consists of creating a poisonous brew derived from the bark of the sasswood tree and administering it to the accused. In order to create the drink, bark of the ordeal tree was simply scraped, powdered, added to water, and allowed to steep. However, many cultures added additional ingredients to the mixture that made the final recipe much more complicated. Once consumed, if the defendant fails to throw up all of the poison before it enters their system, they are pronounced guilty and the poison likely kills them. On the other hand, if they manage to throw up all of the poison and maintain full control of their limbs, then they are cleared of any wrongdoing.
References
Alkaloids
Methyl esters
Phenanthrenes
Ethanolamines | Erythrophleine | Chemistry | 608 |
1,776,033 | https://en.wikipedia.org/wiki/List%20of%20word%20processor%20programs | The following is a list of notable word processor programs.
Word processor programs
Free and open-source software
AbiWord – available for AmigaOS, Linux, ReactOS and Solaris
Apache OpenOffice Writer – available for Linux, macOS and Windows
Calligra Words – available for Linux and Windows
Collabora Online Writer – available for Android, ChromeOS, iOS, iPadOS, Linux, Mac, Online and Windows
GNU TeXmacs – document preparation system – available for Linux, macOS and Windows
Groff – available for BSD and Linux
LibreOffice Writer – available for Linux, macOS and Windows, and unofficial: Android, ChromeOS, FreeBSD, Haiku, iOS, iPadOS, OpenBSD, NetBSD and Solaris
LyX – TeX – available for ChromeOS, Haiku, OS/2, Linux, macOS, UNIX and Windows
TextEdit – available for macOS and Linux
WordGrinder – available for Linux, macOS and Windows
Freeware and proprietary suites
Apple Pages – available for iOS, iPadOS, macOS and online
Atlantis Word Processor – available for Windows
Baraha – available for Windows
Bean – available for macOS
DavkaWriter – available for macOS and Windows
Final Draft – screenplay/teleplay word processor, available for macOS and Windows
Adobe FrameMaker – Windows
Gobe Productive Word Processor – Windows and Linux
Google Docs
Hangul (also known as HWP) – Windows, Mac and Linux
IA Writer – Mac, iOS
IBM SCRIPT – IBM VM/370
IBM SCRIPT/VS – IBM z/VM or z/OS systems
Ichitaro – Japanese word processor produced by JustSystems
Adobe InCopy – Mac and Windows
iStudio Publisher – Mac
Jarte – Windows
JustSystems – Windows
Mathematica – technical and scientific word processing
Mellel – Mac
Microsoft Word – Online, Windows and Mac
Nextcloud
Nisus Writer – Mac
Nota Bene – Windows, Mac
OnlyOffice
Polaris Office – Android and Windows Mobile
PolyEdit – Windows
RagTime – Windows and Mac
Scrivener – Windows, Mac and Linux
TechWriter – RISC OS
Text Control – Word Processing SDK Library
TextMaker – Windows and Linux
ThinkFree Office Write – Windows, Mac and Linux
Ulysses – Mac, iPadOS, iOS
WordPerfect – Windows and Linux
WPS Writer – Windows and Linux
WriteOnline
XaitPorter – word processor for enterprise use, allowing both single-user and team-collaboration approaches to document authoring
Discontinued word processor programs
See also
Comparison of word processors
List of office suites
List of text editors
Online office suite
Online spreadsheet
References
Notes
Word processors
Online office suites | List of word processor programs | Technology | 542 |
29,657,855 | https://en.wikipedia.org/wiki/Rutherford%20Aris%20bibliography | This bibliography of Rutherford Aris contains a comprehensive listing of the scientific publications of Aris, including books, journal articles, and contributions to other published material.
Books
Edited books
Chapters in books
"Stirred pots and empty tubes" (with A. Varma). In (N.R. Amundson and L. Lapidus, eds.). Chemical Reactor Theory. Englewood Cliffs, NJ: Prentice-Hall, 1977 (pp. 79–155).
"Reactor Steady-State Multiplicity and Stability" (with A. Varma and M. Morbidelli). In (J.J. Carberry and A. Varma, eds.). Chemical Reactor and Reaction Engineering. New York: Marcel Dekker, 1987 (Ch. 15).
Journal articles
"On the application of Angstrom's method of measuring thermal conductivity" (with C.H. Bosanquet) Br. J. Appl. Phys. 5, 252–255 (1954).
"On the dispersion of a solute in a fluid flowing through a tube" Proc. Roy. Soc, A235, 67–77 (1956).
"On shape factors for irregular particles–I: The steady state problem. Diffusion and reaction." Chem. Eng. Sci. 6, 262–268 (1957). Reprinted in "Classic papers from Chemical Engineering Science." Chem. Eng. Sci. 50, 3897–3903 (1995).
"On shape factors for irregular particles–II: The transient problem. Heat transfer to a packed bed." Chem. Eng. Sci. 7, 8–14 (1957).
"Some remarks on longitudinal mixing or diffusion in fixed beds" (with N.R. Amundson). AIChE Journal 3, 280–282 (1957).
"Stability of some chemical systems under control" (with N.R. Amundson). Chem. Eng. Prog. 53, 227–230 (1957).
"An analysis of chemical reactor stability and control, parts I–III" (with N.R. Amundson). Chem. Eng. Sci. 7, 121–155 (1958).
"On the dispersion of linear kinematic waves." Proc. Roy. Soc. A245, 268–277 (1958).
"Statistical analysis of a reactor: Linear theory" (with N.R. Amundson). Chem. Eng. Sci. 9, 250–262 (1958).
"Diffusion and reaction in flow system of Turner's structures." Chem. Eng. Sci. 10, 80–87 (1959).
"On the dispersion of a solute by diffusion, convection and exchange between phases." Proc. Roy. Soc. A252, 538–550 (1959).
"Notes on the diffusion-type model for longitudinal mixing in flow (Levenspiel, Smith and Van der Laan)." Chem. Eng. Sci. 10, 266–267 (1959).
"The longitudinal diffusion coefficient in flow through a tube with stagnant pockets." Chem. Eng. Sci. 11, 194–198 (1959).
"An analysis of chemical reactor stability and control–IV: Mixed derivative and proportional control." (with D.J. Nemanic, J.W. Tiemey, and N. R. Amundson). Chem. Eng. Sci. 11, 199–206 (1959).
"Some optimization problems in chemical engineering" (with R. Bellman and R. Kalaba). Chem. Eng. Symp. Ser. 56, 95–100 (1960).
"On optimum cross-current extraction" (with D.F. Rudd and N.R. Amundson). Chem. Eng. Sci. 12, 88–97 (1959).
"On Denbigh's optimum temperature sequence." Chem. Eng. Sci. 12, 56–64 (1960).
"Studies in optimization–I: The optimum design of adiabatic reactors with several beds." Chem. Eng. ScL, 12, 243–252 (1960).
"Studies in optimization–II: Optimum temperature gradients in tubular reactors." Chem. Eng. Sci. 13, 18–29 (1960).
"Studies in optimization–Ill: The optimum operating conditions in sequences of stirred tank reactors." Chem. Eng. Sci. 13, 75–81 (1960).
"The optimal design of stagewise adiabatic reactors." Paper presented at the AIChE/ORSA Symposium on Optimization in Chemical Engineering, New York, 1960.
"On the dispersion of a solute in pulsating flow through a tube." Proc. Roy. Soc. A259, 370–376 (1960).
"Chemical reactor design: For homogeneous flow reactions, a digital computer can determine the optimum combination of reactor type and operating conditions" (with G.T. Westbrook). Ind. Eng. Chem. 53, 181–186 (1961).
"Studies in optimization–IV: The optimum conditions for a single reaction." Chem. Eng. Sci. 13, 197–206 (1961).
"The determination of optimum operating conditions by the methods of dynamic programming." Z. Elektrochem. 65, 229–244 (1961).
"Tubular reactor sensitivity" (with J. Coste and N.R. Amundson). A.I.Ch.E.J. 7, 124–128 (1961).
"A study of iterative optimization" (with D. F. Rudd and N.R. Amundson) A.I.Ch.E.J. 7, 376–384 (1961).
"Optimal bypass rates for sequences of stirred tank reactors" Can. J. Chem. Eng. 39, 121–126 (1961).
"Heat transfer in fluidised and moving beds" (with N. R. Amundson). Paper presented at the Proceedings of the Symposium on the Interaction between Fluids and Particles, London, 20–22 June 1962.
"Studies in optimization–V: The bang-bang control of a batch reactor" (with N. Blakemore). Chem. Eng. Sci. 17, 591–598 (1962).
"On optimal adiabatic reactors of combined types." Can. J. Chem. Eng. 40, 87–92 (1962).
"Adaptive control" Br. Chem. Eng. 7, 896–900 (1962).
"Stability of nonadiabatic packed bed reactors" (with S.L. Liu and N.R. Amundson). Ind. Eng. Chem. Fund. 2, 12–20 (1963).
"Independence of chemical reactions" (with H. S. Mah). Ind. Eng. Chem. Fund. 2, 90–94 (1963).
"Optimal adiabatic bed reactors for sulfur dioxide with cold shot cooling" (with K.-Y. Lee). Ind. Eng. Chem. Proc. Des.–Dev. 2, 300–306 (1963).
"Control of a cold shot adiabatic bed reactor with a decaying catalyst" (with K.-Y. Lee). Ind. Eng. Chem. Proc. Des.–Dev. 2, 306–309 (1963).
"Dynamic programming in countercurrent systems" (with D. Yesberg). /. Math. Anal. Appl. 7, 421–424 (1963).
"Review of progress in control engineering." Br. Chem. Eng. 8, 432–441 (1963).
"The fundamental arbitrariness in stoichiometry." Chem. Eng. Sci. 18, 554–555 (1963).
"The algebra of systems of second-order reactions." Ind. Eng. Chem. Fund. 3, 28–37 (1964).
"An analysis of chemical reactor stability and control–VIII: The direct method of Lyapunov. Introduction and applications to simple reactions in stirred vessels" (with R.B. Warden and N.R. Amundson). Chem. Eng. ScL 19, 149–172 (1963).
"An analysis of chemical reactor stability and control–IX: Further investigations into the direct method of Lyapunov" (with R.B. Warden and N.R. Amundson). Chem. Eng. Sci. 19, 173–190 (1964).
"Studies in optimization–VI: The application of Pontryagin's method to the control of a stirred reactor" (with CD. Siebenthal). Chem. Eng. Sci. 19, 729–746 (1964).
"Studies in optimization–VII: The application of Pontryagin's methods to the control of batch and tubular reactors" (with CD. Siebenthal). Chem. Eng. Sci. 19, 747–761 (1964).
"An adaptive control of the batch reactor–I: Identification of kinetics" (with H.H.-Y. Chien). Automatica 2, 41–58 (1964).
"The adaptive control of a batch reactor–II: Optimal path control" (with H.H.-Y. Chien). Automatica 2, 59–71 (1964).
"Optimization of multistage cyclic and branching systems by serial procedures" (with G.L. Nemhauser and D.J. Wilde). A.I.Ch.E.J. 10, 913–919 (1964).
"N-segment least-squares approximation" (with M.M. Denn). A.I.A.A.J. 8, 432 (1964).
"Optimal policies for first-order consecutive reversible reactions" (with R. S. H. Mah). Chem. Eng. Sci. 19, 541–553 (1964).
"Chemical reactor analysis: A morphological approach." Ind. Eng. Chem. 56, 23–27 (1964).
"An elementary derivation of the maximum principle" (with M. M. Denn). A.I.Ch.E.J. 11, 367–368 (1965).
"The dynamics of reactors of mixed type: I. The nature of the steady state" (with CE. Gall). Can. J. Chem. Eng. 43, 16–22 (1965).
"A normalization for the Thiele modulus (letter with correction in November issue)." Ind. Eng. Chem. Fund. 4, 227–229 (1965).
"Stability estimates for the stirred tank reactor" (with W. Regenass). Chem. Eng. ScL 20, 60–66 (1964).
"Second order variational equations and the strong maximum principle" (with M.M. Denn). Chem. Eng, Scu 20, 373–384 (1965).
"Generalized Euler equations" (with M.M. Denn). ZAMP 16, 290–295 (1965).
"Green's functions and optimal systems: Necessary conditions and an iterative technique" (with M.M. Denn). Ind. Eng. Chem. Fund. 4, 7–16 (1965).
"Green's functions and optimal systems: The gradient direction in decision space" (with M.M. Denn). Ind. Eng. Chem. Fund. 4, 213–222 (1965).
"Green's functions and optimal systems: Complex interconnected structures" (with M.M. Denn). Ind. Eng. Chem. Fund. 4, 248–257 (1965).
"Prolegomena to the rational analysis of systems of chemical reactions." Arch. Ration. Mech. Anal. 19, 81–99 (1965).
"An adaptive control of the batch racton–III: Simplified parameter estimation" (with W.H. Ray). Automatica 3, 53–71 (1965).
"On simple exchange waves in fixed beds" (with N.R. Amundson and R. Swanson). Proc. Roy. Sac. A286, 129–139 (1965).
"On the theory of reactions in continuous mixtures" (with G.R. Gavalas). Phil. Trans. Roy. Soc. A 260, 351–393 (1966).
"Is sophistication really necessary? Ind. Eng. Chem. 58, 32–37 (1966).
"Rationale for optimal reactor design, (with W.H. Ray). Ind. Eng. Chem. Fund. 5, 478–483 (1966).
"Questing control of chemical reactors." Paper presented at the Annual Meeting of the American Institute of Chemical Engineers, Mexico City, October 1966.
"Bacterial growth as an optimal process" (with C.H. Swanson, A.G. Fredrickson, and H.M. Tsuchiya). /. Theor. Biol. 12, 228–250 (1966).
"Compartmental analysis and the theory of residence time distributions." In K.B. Warren, (ed.,). Intracellular Transport, (pp. 167–197). New York: Academic Press, 1966.
"Studies in optimization–VIII: Questing control of a stirred tank reactor" (with R.N. Schindler). Chem. Eng. Sci. 22, 319–336 (1967).
"Studies in optimization–IX.: The questing control of a two phase reactor" (with R.N. Schindler). Chem. Eng. Scu 11, 337–344 (1967).
"Studies in optimization–X: Questing control with an economic criterion" (with R.N. Schindler). Chem. Eng. Sci. 11, 345–352 (1967).
"An adaptive control of the batch reactor–IV: A more sophisticated controller" (with W.H. Ray). Automatica 4, 139–161 (1967).
"Simple control policies for reactors with catalyst decay" (with A. Chou and W.H. Ray). Trans. Insm Chem. Engrs. 45, 153–159 (1967).
"On the mathematical status of the pseudosteady state hypothesis of biochemical kinetics" (with F.G. Heineken and H.M. Tsuchiya). Math. Biosci. 1, 95–113 (1967).
"Transition between regimes in gas-solid reactions." Ind. Eng. Chem. Fund. 6, 315–318 (1967).
"On the accuracy of determining rate constants in enzymatic reactions" (with F.G. Heineken and H.M. Tsuchiya). Math. Biosci. 1, 115–141 (1967).
"Prolegomena to the rational analysis of systems of chemical reactions–II: Some addenda. Arch. Ration. Mech. Anal. 11, 356–364 (1968).
"On what sort of place, if any, theoretical and mathematical studies should have in graduate chemical engineering research." Chem. Eng. Educ. 1, 36–39 (1967).
"Sufficient conditions for the uniqueness of the steady state." Chem. Eng. ScL 23, 1501 (1968).
"Optimal control for pyrolytic reactors" (with A.P. Jackman). Paper presented at the Fourth European Symposium, Brussels, 1968.
"Canon and method in the arts and sciences (Olin Lecture, Yale University)." Chem. Eng. Educ. 3, 48–52 (1969).
"On stability criteria of chemical reaction engineering." Chem. Eng. Sci 14, 149–169 (1968).
"A note on mechanism and memory in the kinetics of biochemical reactions." Math. Biosci 3, 421–429 (1968).
"Communications on the theory of diffusion and reaction–I: A complete parametric study of the first-order, irreversible exothermic reaction in a flat slab of catalyst" (with D.W. Drott). Chem. Eng. Sci. 24, 541–551 (1969).
"Communications on the theory of diffusion and reaction–II: The effect of shape on the effectiveness factor" (with S. Rester). Chem. Eng. Sci. 24, 793–795 (1969).
"Communications on the theory of diffusion and reaction–Ill: The simulation of shape effects" (with S. Rester and J. Jouven). Chem. Eng. Sci. 24, 1019–1022 (1968).
"Communications on the theory on diffusion and reaction–IV: Combined effects of internal and external diffusion in the non-isothermal case" (with B. Hatfield). Chem. Eng. Sci. 24, 1213–1222 (1969).
"Mathematical aspects of chemical reaction" Ind. Eng. Chem. 61, 17–29 (1969).
"Some problems in chemical reactor analysis with stochastic features: Linear systems with fluctuating coefficients" (with T.M. Pell, Jr.). Ind. Eng. Chem. Fund. 8, 339–345 (1969).
"A remark on the equilibrium theory of the parametric pump." Ind. Eng. Chem. Fund. 8, 603 (1969).
"Problems in chemical reactor analysis with stochastic features: Control of linearized distributed systems on discrete and corrupted observations" (with T.M. Pell, Jr.). Ind. Eng Chem. Fund. 9, 15–20 (1970).
"Chemical kinetics and the ecology of mathematics," Am. Scient. 58, 419–428 (1970).
"Conmiunications on the theory of diffusion and reaction–V: Findings and conjectures concerning the multiplicity of solutions." (with I. Copelowitz). Chem. Eng. Sci. 25, 906–909 (1970).
"Communications on the theory of diffusion and reaction–VI: The effectiveness of spherical catalyst particles in steep external gradients" (with I. Copelowitz). Chem. Eng. Sci. 25, 885–896 (1970).
"Studies in optimization–XI: An experimental test of questing controller" (with J.M. Wheeler). Chem. Eng. Sci. 25, 445–462 (1970).
"On the theory of multicomponent chromatography" (with H.-K. Rhee and N.R. Amundson). Phil. Trans. Roy. Soc. A 267, 419–455 (1970).
"Algebraic aspects of formal chemical kinetics." In M. Bunge (ed.). Studies in the Foundations, Methodology and Philosophy of Science, (Vol. 4, pp. 119–129). New York: Springer-Verlag, 1971.
"Multicomponent adsorption in continuous countercurrent exchangers" (with H.-K. Rhee and N.R. Amundson). Phil. Trans. Roy. Soc. A 269, 187–215 (1971).
"A note on a form of the Emden–Fowler Equation" (with B.N. Mehta). /. Math. Anal. Appl. 36, 611–621 (1971).
"A note on the structure of the transient behavior of chemical reactors." Chem. Eng. J. 2, 140–141 (1970).
"Communications on the theory of diffusion and reaction–VII: The isothermal pth order reaction" (with B.N. Mehta). Chem. Eng. Sci. 26, 1699–1712 (1971).
"Transients in distributed chemical reactors part 1: A simplified model" (with D.L. Schruben). Chem. Eng. J. 2, 179–188 (1970).
"Surface diffusion and reaction at widely separated sites" /. Catal. 22, 282–284 (1971).
"Variational bounds for problems in diffusion and reaction" (with W. Strieder). /. Inst. Maths Appl. 8, 328–334 (1971).
"On the realistic and interesting parameter ranges in the theory of diffusion and reaction" (with M.C. Mercer). Latin Am. J. Chem. Eng. Appl. Chem. 2, 149–162 (1971).
Some problems in the analysis of transient behavior and stability of chemical reactors. First International Symposium on Chemical Reaction Engineering no. 109. Washington, D.C.: American Chemical Society, 1972.
"Communications on the theory of diffusion and reaction–VIII: Variational bounds on the effectiveness factor" (with S. Rester). Chem. Eng. Sci. 27, 347–360 (1972).
"On a mechanism for autocatalysis" (with D.R. Schneider and N.R. Amundson). Chem. Eng. Sci. 27, 895–905 (1972).
"Mobility, permeability, and the pseudosteady-state hypothesis." Math. BioscL 13, 1–8 (1972).
"Diffusive and electrostatic effects with insolubilized enzymes" (with M.L. Schuler and H.M. Tsuchiya). /. Theor. Biol. 35, 67–76 (1972).
"A method of representing the nonisothermal effectiveness factor for fixed bed calculations" (with J.G. Jouven). A.I.Ch.E.J. 18, 402–408 (1972).
"The control of a stirred tank reactor with hysteresis in the control element–I: Phase space analysis" (with J.C. Hyun). Chem. Eng. Sci. 27, 1341–1359 (1972).
"The control of a stirred tank reactor with hysteresis in the control element–II: Describing function analysis" (with J.C. Hyun). Chem. Eng. Sci. 27, 1361–1370 (1972).
"On the equations for the movement and deformation of a reaction front" (with R.H. Knapp). Arch. Ration. Mech. Anal. 44, 165–177 (1972).
"Some problems common to chemical engineering and the biological sciences." Paper presented at the Proceedings of the Scandinavian Congress of Chemical Engineering, November 1971.
"Some interactions between problems in chemical engineering and the biological sciences" In G. Lindner and K. Nyberg (eds.). Environmental Engineering, (pp. 215–225). Dordrecht-Holland: D. Reidel Publishing Co., 1973.
"Transients in distributed chemical reactors. Part 2: Influence of diffusion in the simplified model" (with I.H. Farina). Chem. Eng. J. 4, 149–170 (1972).
"Asymmetries generated by diffusion and reaction, and their bearing on active transport through membranes" (with K.H. Keller). Proc. Natl. Acad Sci. USA 69, 777–779 (1972).
"An analysis of chemical reactor stability and sensitivity–XIV: The effect of the steady state hypothesis" (with D.R. Schneider and N.R. Amundson). Chem. Eng, ScL 28, 885–896 (1973).
"Hydrodynamic focusing and electronic cell-sizing techniques" (with M.L. Shuler and H.M. Tsuchiya). Appl. Microbiol. 24, 384–388 (1972).
"Diffusive and electrostatic effects with insolubilized enzymes subject to substrate inhibition" (with M.L. Shuler and H.M. Tsuchiya). /. Theor. Biol. 41, 347–356 (1973).
"Communications on the theory of diffusion and reaction–IX: Internal pressure and forced flow for reactions with volume change" (with J.P.G. Kehoe). Chem. Eng. ScL 28, 2094 – 2098 (1973).
"Communications on the theory of diffusion and reaction–X. A generalization of Wei's bounds on the maximum temperature" (with C. Georgakis). Chem. Eng. Sci. 29, 291–293 (1974).
"The theory of diffusion and reaction: A chemical engineering symphony (1973 ASEE Award lecture)." Chem. Eng. Educ. 8, 20–40 (1973).
Counter-current moving bed chromatographic reactors (with S. Viswanathan). ACS Symposium Series no. 133. Washington, D.C.: American Chemical Society, 1974.
"An analysis of the countercurrent moving bed reactor" (with S. Viswanathan). SIAM–AMS Proc. 8, 99–124 (1974).
"Phenomena of multiplicity, stability, and symmetry." Ann. NY Acad. Sci. 231, 86–98 (1974).
"On the ostensible steady state of a dynamical system." Rend. Lincei Ser. W//57, 1–9 (1974).
"What's the use of a Ph.D. anyway?" AIChE Stud. Memb. Bull. 15, 5–7 (1974).
"On the concept of the steady state in chemical reactor analysis" (with S. Viswanathan). Chem. Eng. Commun. 2, 1–4 (1975).
"The design of stirred reactors with hollow fiber catalysts for Michaelis–Menten kinetics" (with C. Georgakis and P.C.-H. Chan). Biotechnol. Bioeng. 57, 99–106 (1975).
"Diffusion, reaction and the pseudo-steady-state hypothesis" (with C. Georgakis). Math. Biosci. 25, 237–258 (1975).
"Diffusion and first order reaction in a general multilayered membrane" (with B. Bunow). Math. Biosci. 26, 157–174 (1975).
"Carberry's ultimate paper." Chem. Eng. Educ. 9, 118–119 (1975).
"Diffusion and reaction in mycelial pellets." J. Ferment. Technol. 53, 899–901 (1975).
"Some thoughts on the nature of academic research in chemical engineering." Chem. Eng. Educ. 10, 2–5 (1976).
"Computational methods for the tubular chemical reactor" (with A. Varma, C. Georgakis, and N.R. Amundson). Comput. Meth. Appl. Mech. Eng. 8, 319–330 (1976).
"How to get the most out of an equation without really trying." Chem. Eng. Educ. 10, 114–124 (1976).
"Geometric correction factors for the Weisz diffusivity cell" (with W.W. Meyer and L.L. Hegedus). /. Catal. 42, 135–138 (1976).
"Modeling the monolith: Some methodological considerations" (with S.T. Lee). Paper presented at the Fourth International-Sixth European Symposium on Chemical Reaction Engineering, Heidelberg, 6–8 April 1976.
"Studies in the control of tubular reactors–I: General considerations" (with C. Georgakis and N.R. Amundson). Chem. Eng. ScL 32, 1359–1369 (1977).
"Studies in the control of tubular reactors–II: Stabilization by modal control" (with C. Georgakis and N.R. Amundson). Chem. Eng. ScL 32, 1371 – 1379 (1977).
"Studies in the control of tubular reactors–III: Stabilization by observer design" (with C. Georgakis and N.R. Amundson). Chem. Eng. Sci. 32, 1381–1387 (1977).
"Academic chemical engineering in an historical perspective." Ind. Eng. Chem. Fund. 16, 1–5 (1977).
"The sciences and the humanities" (with M. Penn). Chem. Eng. Educ. 11, 68–73, 85 (1977).
"Dynamics of a chemostat in which two organisms compete for a common substrate" (with A.E. Humphrey). Biotechnol Bioeng. 19.1375–1386 (1977).
"Re, k and ir. A conversation on some aspects of mathematical modelling." Appl Math. Modelling 1, 386–394 (1977).
"Art and craft in the modelling of chemical processes." Paper presented at the Proceedings of the First International Conference on Mathematics Modelling, St. Louis, MO, 29 August–1 September 1977.
"On the effects of radiative heat transfer in monoliths" (with S.-T. Lee). Chem. Eng. Sci. 32, 827–837 (1977).
"The infiltration of lymphocytes and macrophages into a site of infection." In A.I. Bell, A.S. Perelson, and G.H. Pimbley (eds.). Theoretical Immunology, (Vol. 8). New York: Marcel Dekker, 1978.
"Finite stability regions for large-scale systems with stable and unstable subsystems" (with M. Morari and G.S. Stephanopoulos). Int. J. Com. 26, 805–815 (1977).
"Models of the catalytic monolithic." Paper presented at the Levich Conference, Oxford, 1978. 151. Poisoning in monolithic catalysts. American Chemical Society Symposium Series no. 65. Washington, D.C.: American Chemical Society, 1978.
"An analysis of thermal desorption mass spectra. I" (with C.-M. Chan and W.H. Weinberg). Appl. Surf. Sci. 1, 360–376 (1978).
"Temperature gradients in porous catalyst pellets." Ind. Eng. Chem. Fund. 17, 309–313 (1978).
"Horses of other colors: Some notes on seminars in a chemical engineering department." Chem. Eng. Educ. 12, 148–151 (1978).
"Chemical reactors and some bifurcation phenomena." Ann. NY Acad. Sci. 316, 314–331 (1979).
"De exemplo simulacrorum continuorum discretalumque." Arch. Rat. Mech. Anal. 70, 203–209 (1979).
"Measurement of leukocyte motility and chemotaxis parameters using a quantitative analysis of the under-agarose migration assay" (with D. Lauffenburger). Math. Biosci. 44, 121–138 (1978).
"A stochastic analysis of the growth of competing microbial populations in a continuous biochemical reactor" (with G.S. Stephanopoulos and A.G. Frederickson). Math. Biosci. 45, 99–135 (1979).
"Stability analysis of structured chemical engineering systems via decomposition" (with M. Morari and G.S. Stephanopoulos). Chem. Eng. Sci. 34, 11–15 (1979).
"The role of dimensionless parameters in the Briggs–Haldane and Michaelis–Menten approximations" (with P.S. Crooke and R.D. Tanner). Chem. Eng. Sci. 34, 1354–1357 (1979).
"Creeping fronts and traveling waves (E. Wicke festschrift)." Chem. Eng. Tech. 51, 767–771 (1979).
"Effect of catalyst loading on the simultaneous reactions of NO, CO, and O2" (with L.L. Hegedus, R.K. Herz and S.H. Oh). /. Catal. 57, 513–515 (1979).
"Method in the modeling of chemical engineering systems." In C.T. Leondes (ed.). Control and Dynamic Systems: Advances in Theory and Application, (Ch. 20). New York: Academic Press, 1979.
"The growth of competing microbial populations in a CSTR with periodically varying inputs" (with G.S. Stephanopoulos and A.G. Fredrickson). A.I.Ch.E.J. 25, 863–872 (1979).
"Traveling waves in a simple population model involving growth and death" (with C.R. Kennedy). Bull. Math. Biol. 42, 397–429 (1980).
"Bifurcations of a model diffusion-reaction system" (with C.R. Kennedy). In P. Holmes (ed.). New Approaches to Nonlinear Problems in Dynamics (pp. 211–233). Philadelphia: SIAM, 1980.
"A continuous chromatographic reactor" (with B.K. Cho and R.W. Carr, Jr.). Chem. Eng. Sci. 35, 74–81(1980).
"The mere notion of a model" (with M. Penn). Math. Model. 1, 1–12 (1980).
"Hierarchies of models in reactive systems." In W. E. Stewart, W.H. Ray, and J. Conway (eds.), Dynamics and Modelling of Reactive Systems, (pp. 1–35). New York: Academic Press, 1980.
"A note on the Beneventan script." Soc. Scribes Ilium. Newsl. 18 (1980).
"Bilinear approximation of general non-linear dynamic systems with linear inputs" (with S. Svoronos and G.S. Stephanopoulos). Int. J. Cont. 31, 109–126 (1980).
"Observations on fixed-bed dispersion models: The role of the interstitial fluid" (with S. Sundaresan and N.R. Amundson). A.I.Ch.E.J. 26, 529–536 (1980).
"A new continuous flow reactor for simultaneous reaction and separation" (with B.K. Cho and R.W. Carr). Sep. Sci. Technol. 15, 679–696 (1980).
"Effects of random motility on growth of bacterial populations" (with D. Lauffenburger and K.H. Keller). Microb. EcoL 7, 207–227 (1981).
"Some canonical chemical engineering catastrophes (Churchill festschrift)." Chem. Eng. Commun. 9, 51 (1981).
"Isothermal sustained oscillations in a very simple surface reaction" (with C.G. Takoudis and L.D. Schmidt). Surf ScL 105, 325–333 (1981).
"Multiple steady states in reaction controlled surface catalysed reactions" (with C.G. Takoudis and L.D. Schmidt). Chem. Eng. ScL 36, 377–386 (1981).
"Steady state multiplicity in surface reactions with coverage dependent parameters" (with C.G. Takoudis and L.D. Schmidt). Chem. Eng. Sci. 36, 1795–1802 (1981).
"The notions of uniqueness and multiplicity of steady states in the development of chemical reactor analysis." In W.F. Furler (ed.), A Century of Chemical Engineering, (pp. 389–404). New York: Plenum Press, 1982.
"On the behaviour of two stirred tanks in series" (with S. Svoronos and G.S. Stephanopoulos). Chem. Eng. ScL 3, 357–366 (1982).
"Weakly coupled systems of nonlinear elliptic boundary value problems" (with K. Zygourakis). Nonlin. Anal 6, 555–569 (1982).
"The intangible tints of dawn." Thor. Quart. 1, 52–60 (1982).
"The mathematical theory of a countercurrent catalytic reactor" (with B.K. Cho and R.W. Carr). Proc. Roy. Soc. A383, 147–189 (1982).
"Chemical engineering at the University of Minnesota." Chem. Eng. Educ. 16, 50–54 (1982).
Continuous reaction gas chromatography: The dehydrogenation of cyclohexane over Pt/g-AI2O3 (with A.W. Wardwell and R.W. Carr, Jr.). American Chemical Society Symposium Series no. 196. Washington, D.C.: American Chemical Society (1982).
"Isothermal oscillations in surface reactions with coverage independent parameters" (with C.G. Takoudis and L.D. Schmidt). Chem. Eng. Sci. 37, 69–76 (1982).
"The scope of R.T.D. theory." In A. Pethos and R.D. Noble (eds.), Residence time distribution theory in chemical engineering, (pp. 1–21). Weinheim: Verlag Chemie GmbH, 1982.
"Residence time distribution with many reactions and in several environments." In A. Petho and R.D. Noble (eds.). Residence time distribution theory in chemical engineering, (pp. 24–40). Weinheim: Veriag Chemie GmbH, 1982.
Review of Insights into Chemical Engineering, by P.V. Danckwerts, Chem. Eng. Sci. 37, 1123 (1982).
"Some characteristic nonlinearities of reacting systems." In A. Bishop, D. Campbell, and B. Nicolaenko (eds.). Nonlinear Problems: Present and Future, North-Holland, Amsterdam, 1982.
"Effects of cell motility and chemotaxis on microbial population growth" (with D. Lauffenburger and K. Keller). Biophys. J. 40, 209–219 (1982).
"Multiple oxidation reactions and diffusion in the catalytic layer of monolith reactors" (with K. Zygourakis). Chem. Eng. Sci. 38, 733–744 (1983).
"Monotone iteration methods for solving coupled systems of nonlinear boundary value problems" (with K. Zygourakis). Comput. Chem. Eng. 7, 183–193 (1983).
"A two-layer model of the atmosphere indicating the effects of mixing between the surface layer and the air aloft" (with D.D. Reible and F.H. Shair). Atmos. Environ. 17, 25–33 (1983).
"The interpretation of sorption and diffusion data in porous solids." Ind. Eng. Chem. Fund. 22, 150–151 (1983).
"Theoretical and experimental aspects of catalyst impregnation" (with S.-Y. Lee). In G. Poncelet, P. Grange, and P.A. Jacobs (eds.), Studies in Surface Science and Catalysis, (Vol. 16; pp. 35–45). Amsterdam: Elsevier, 1983.
"Effectiveness of catalytic archipelagos–I: Regular arrays of regular islands" (with D.-Y. Kuan and H.T. Davis). Chem. Eng. Sci. 38, 719–732 (1983).
"Effectiveness of catalytic archipelagos–II: Random arrays of random islands" (with D.-Y. Kuan and H.T. Davis). Chem. Eng. Sci. 38, 1569–1579 (1983).
"On the dynamics of a stirred tank with consecutive reactions" (with D.V. Jorgensen). Chem. Eng. Sci. 38, 45–53 (1983).
Chemical reaction engineering as an intellectual discipline. American Chemical Society Symposium Series no. 226. Washington, D.C.: American Chemical Society 1983.
"R.H. Wilhelm's influence on the development of chemical reaction engineering (from the Wilhelm lectures at Princeton)." Chem. Eng. Educ, 17, 10–41 (1983).
"Discrete cell model of pore-mouth poisoning of fixed-bed reactors" (with B.K. Cho and L.L. Hegedus). A.I.Ch. E.J. 29, 289–297 (1983).
"The jail of shape." Chem. Eng. Commun. 2A, 167–181 (1983).
"The distribution of active ingredients in supported catalysts prepared by impregnation" (with S.-Y. Lee). Catal Rev. 21, 207–340 (1984).
"Traveling bands of chemotactic bacteria in the context of population growth" (with D. Lauffenburger and C.R. Kennedy). Bull. Math. Biol. 46, 19–40 (1984).
"Rate multiplicity and oscillations in single-species surface reactions" (with I. Kevrekidis and L.D. Schmidt). Surf. Sci. 137, 151–166 (1984).
"Problems in the dynamics of chemical reactors." Paper presented at the International Chemical Reaction Engineering Conference, Pune, 1984.
"Estimation of fin efficiencies of regular tubes arrayed in circumferential fins" (with D.-Y. Kuan and H.T. Davis). Int. J. Heat Mass Trans. 27, 148–151 (1984).
"Mathematical analysis for a chromatographic reactor" (with T. Petroulas and R.W. Carr). In R. Vichnevetsky and R.S. Stepleman (eds.). Advances in Computer Methods for Partial Differential Equations–V: Proceedings of the Fifth IMACS International Symposium on Computer Methods for Partial Differential Equations, New Brunswick: IMACS, Dept. of Computer Science, Rutgers University, 1984.
"On the dynamics of periodically forced chemical reactors" (with I.G. Kevrekidis and L.D. Schmidt). Chem. Eng. Commun. 30, 323–330 (1984).
"More on the dynamics of a stirred tank with consecutive reactions" (with D.V. Jorgensen and W.W. Farr). Chem. Eng. Sci. 39, 1741–1752 (1984).
"Thermodynamic limitations on the dynamic behaviour of heterogeneous reacting systems" (with I. Kevrekidis and L.D. Schmidt). Inst. Chem. Eng. Symp. Ser. 87, 109–115 (1984).
"Analysis of the counter-current moving-bed chromatographic reactor" (with T. Petroulas and R.W. Carr). Comp. Maths. Appls. 11, 5–34 (1985).
"Numerical computation of invariant circles of maps" (with I.G. Kevrekidis, L.D. Schmidt, and S. Pelikan). Physica 16D, 243–251 (1985).
"On the permeability of membranes with parallel, but interconnected, pathways." Math. Biosci. (Bellman Memorial Issue) 11, 5–16 (1985).
"Analysis and performance of a countercurrent moving-bed chromatographic reactor" (with T. Petroulas and R.W. Carr, Jr.). Chem. Eng. Set 40, 2233–2240 (1985). 235a.
"Some common features of periodically forced reacting systems" (with I.G. Kevrekidis and L.D. Schmidt). Chem. Eng. Sci. 41, 1263–1276 (1986).
"The stirred tank forced" (with I.G. Kevrekidis and L.D. Schmidt). Chem. Eng. Sci. 41, 1549–1560 (1986).
"Yet who would have thought the old man to have had so much blood in him?'‚ÄîReflections on the multiplicity of steady states of the stirred tank reactor" (with W.W. Farr). Chem. Eng. Sci. 41, 1385–1402 (1986).
"Resonance in periodically forced processes" (with I.G. Kevrekidis and L.D. Schmidt). Chem. Eng. Sci. 41, 905–911 (1986).
"Entrainment regions for periodically forced oscillators" (with D.G. Aronson, R.P. McGehee, and I.G. Kevrekidis). Phys. Rev. A, 33, 2190–2192 (1986).
"On a problem in hindered diffusion (Serrin festschrift)." Arch. Rat. Mech. Anal. 95, 83–91 (1986).
"The mathematical background of chemical reactor analysis–I. Preliminaries, batch reactors."Physica D20, 82–90 (1986).
"The mathematical background of chemical reactor analysis–II. The stirred tank reactor." In G.S.S. Ludford (ed.). Reacting Flows: Combustion and Chemical Reactors, Lectures in Applied Mathematics, (Vol. 24, pp. 75–107). Providence: American Mathematical Society, 1986.
"The continuous countercurrent moving bed chromatographic reactor" (with B. Fish and R.W. Carr). Chem. Eng. Sci. 41, 661 (1986).
"An analysis of the counter-current adsorber" (with D. Altshuller, G. Vazquez, and R.W. Carr). Chem. Eng. Commun. 52, 311 (1987).
"Anisotropic membrane transport" (with K.-M. Jem and E.L. Cussler). Chem. Eng. Commun. 55, 5–17 (1987).
"On apparent second-order kinetics" (with T.C. Ho). A.I.Ch.E.J. 33, 1050–1051 (1987).
"A global study of Kondepudi's pitchfork" (with X.-H. Song). Sadhana 10, 1–12 (1987).
"Degenerate Hopf bifurcations in the CSTR with reactions A → B → C" (with W.W. Farr). Can. Math. Soc. Conf. Proc. 8, 397–418 (1987).
"A general theory of anisotropic membranes" (with E.L. Cussler) Chem. Eng. Commun. 58, 3–16 (1987).
"Nonlinear dynamics and strange attractors" (with K.S. Chang). Kor. J. Chem. Eng. 4, 95–104 (1987).
"A sequence of scripts." Scribe 41, 7–12 (1987).
"Ann Hechle's 'In the Beginning.' Call. Revs 4, 41–46 (1987).
"Painting with words: The art of Donald Jackson." Call. Revs 6, 18–28 (1987).
"Forced oscillations of chemical reactors." In P. Gray et al. (eds.), Spatial Inhomogeneities and Transient Behaviour in Chemical Kinetics., Manchester: Manchester University Press, 1990.
"Autonomous bifurcations of a simple bimolecular surface-reaction model" (with M.A. McKamin and L.D. Schmidt). Proc. Roy. Soc. A415, 363–387 (1988).
"Barrier membranes" (with E.L. Cussler, S.E. Hughes, and W.J. Ward, III). /. Membrane Sci. 38, 161–174 (1988).
"Forced oscillations of a self-oscillating bimolecular surface reaction model" (with M.A. McKamin and L.D. Schmidt). Proc. Roy. Soc. A417, 363–388 (1988).
"Response of nonlinear oscillators to forced oscillations: Three chemical reaction case studies" (with M.A. McKamin and L.D. Schmidt). Chem. Eng. Sci. 43, 2833–2844 (1988).
"Chaotic behaviour of two counter-currently cooled reactors" (with K.S. Chang). Lat. Am. Appl. Res. 18, 1 (1988).
"Modelling cubic autocatalysis by successive bimolecular steps" (with P. Gray and S.K. Scott). Chem. Eng. Sci. 43, 207–211 (1988).
"Ut Simulacrum, Poesis." New Literary Hist. 20, 323–340 (1988–1989).
"Theories of precipitation induced by dissolution" (with J. Kopinsky and E.L. Cussler) A.I.Ch.E.J. 34, 2005–2010 (1988).
"Effects of velocity on homogeneous-heterogeneous ignition and extinction" (with R.J. Olsen and L.D. Schmidt). Combust. Sci. Tech., 99 (1995).
"On reactions in continuous mixtures." A.I.Ch.E.J. 35, 539–548 (1989).
"On the limits of facilitated diffusion" (with E.L. Cussler and A. Bhown). /. Memb. Sci. 43, 149–164 (1989).
"Continuous lumping of nonlinear chemical kinetics" (with G. Astarita). Chem. Eng. Proc. 26, 63–69 (1989).
"On aliases of differential equations" (with G. Astanta). Rend. Lincei Ser. VIII 83, 7–ll (1989).
"Military technology and garrison organization: Some observations on Anglo-Saxon military thinking in light of the burghal hidage" (with B.S. Bachrach). Tech. CuL 31, 1–17 (1990).
"Forced oscillations of chemical reactors with multiple steady states" (with G.A. Cordonier and L.D. Schmidt). Chem. Eng. Sci. 45, 1659–1675 (1990).
"The simulated countercurrent moving bed chromatographic reactor" (with A.J. Ray, A.L. Tonkovich, and R.W. Carr). Chem. Eng. Sci. 45, 2431–2437 (1990).
"The effects of phase transitions, surface diffusions, and defects on surface catalyzed reactions: Fluctuations and oscillations" (with D.G. Vlachos and L.D. Schmidt). J. Chem. Phys. 93, 8306–8313 (1990).
"Manners Makyth Modellers (Fifth Danckwerts Lecture)." Trans. I.Ch.E. 69, 165–174 (1991).
"The ignition criteria for stagnation-point flow: Semenov–Frank–Kamenetski or van't Hoff' (with X¬ª Song and L.D. Schmidt). Combust. Set Tech. 75, 311–331 (1991).
"Chemical engineering and the liberal education today." Paper presented at the 25th Phillips Lecture, Oklahoma State University, 26 April 1991.
"Steady states and oscillations in homogeneousheterogeneous reaction systems" (with X. Song and L.D. Schmidt). Chem. Eng. ScL 46, 1203–1215 (1991).
"Diffusion and reaction in a Mandelbrot lung." Chaos, Sol. Fract. 1, 583–593 (1991).
"Multiple indices, simple lumps and duplicitous kinetics." In A.V. Sapre and F.J. Krambeck (eds.), Chemical Reactions in Complex Mixtures, (pp. 25–41). New York: Van Nostrand Reinhold, 1991.
"The mathematics of continuous mixtures." In G. Astarita and S.F. Sandler (eds.). Kinetics and Thermodynamic Lumping of Multicomponent Mixtures. Elsevier, 1991.
"The effect of phase transitions, surface diffusion, and defects on heterogeneous reactions: Multiplicities and fluctuations" (with D.G. Vlachos and L.D. Schmidt). Surface Sci. 249, 248–264 (1991).
"Buoyancy-driven flows of a radiatively participating fluid in a vertical cylinder heated from below" (with A.G. Salinger, S. Brandon, and J.J. Derby). Proc. Roy. Soc. A442, 313–341 (1992).
"Dynamics of homogeneous-heterogeneous reactors" (with R.J. Olsen, W.R. Williams, and L.D. Schmidt). Chem. Eng. Sci. 47 2505–2510 (1992).
"Kinetics of facet formation during growth and etching of crystals" (with D.G. Vlachos and L.D. Schmidt). In K.S. Liang, M.P. Anderson, R.F. Bruisma, and G.G. Scoles (eds.). Interface Dynamics and Growth, (Vol. 237, pp. 145150). 1992.
"Structures of small metal clusters–II: Phase transitions and isomerization" (with D.G. Vlachos and L.D. Schmidt). /. Chem. Phys. 96, 6891–6901 (1992).
"Structure of small catalyst particles" (with D.G. Vlachos and L.D. Schmidt). Chem. Eng. Sci. 47, 2769–2774 (1992).
"Structures of small metal clusters–I: Low temperature behavior" (with D.G. Vlachos and L.D. Schmidt). /. Chem. Phys. 96, 6880–6890 (1992).
"Bifurcation and global stability in surface catalyzed reactions using the Monte Carlo method" (with D.G. Vlachos and L.D. Schmidt). In H. Swinney, R. Aris, and D. Aronson (eds.). Patterns and Dynamics in Reactive Media, (Vol. 37, pp. 187–206). New York: Springer-Verlag, 1991.
"Spatial and temporal patterns in catalytic oscillations" (with D.G. Vlachos, F. Smith, and L.D. Schmidt). Physica A 188, 302–321 (1992).
"Comments on mitigation of backmixing via catalyst dilution." Chem. Eng. Sci. 47, 507508 (1992).
"Modeling the spontaneous ignition of coal stockpiles" (with A.G. Salinger and J.J. Derby). A.I.Ch.E.J. 40, 991–1004 (1993).
"Ends and beginnings in the mathematical modelling of chemical engineering systems." Chem. Eng. ScL 48, 2507–2517 (1993).
"Products in methane combustion near surfaces" (with D.G. Vlachos and L.D. Schmidt). A.I.Ch.E.J. 40, 1018–1025 (1994).
"Ignition and extinction of flames near surfaces: Combustion of CH in air" (with D.G. Vlachos and L.D. Schmidt). A.I.Ch.E.J. 40, 1005–1007 (1994).
"Ignition and extinction of flames near surfaces: Combustion of H2 in air" (with D.G. Vlachos and L.D. Schmidt). Comb. Flame 95, 313–335 (1993).
"Kinetics of faceting of crystals in growth, etching, and equilibrium" (with D.G. Vlachos and L.D. Schmidt). Phys. Rev. 47, 4896–4909 (1993). 275. "Continuous reactions in a non-isothermal CSTR-I. Multiplicity of steady states" (with P. Cicarelli). Chem. Eng. ScL 49, 621–631 (1994).
" 'Almost discrete* F-distributed chemical species and reactions." Chem. Eng. Set 49, 581588 (1994).
"Two eyes are better than one: Some reflections on the importance of having more than one viewpoint in mathematical modelling and other disciplines." Mathl Comput Model 18, 95–115 (1993).
"Enhanced Q yields from methane oxidative coupling by means of a separative chemical reactor" (with A.L. Tonkovich and R.W. Carr). Science 262, 221–223 (1993).
"Design and performance of a simulated countercurrent moving-bed separator" (with B.B. Fish and R.W. Carr). AJ.Ch.E.J. 39, 17831790 (1993).
"Reaction of a continuous mixture in a bubbling fluidized bed" (with N.R. Amundson). Trans I. Chem. E 71, 611–617 (1993).
"De Motu Arietum" (with B.S. Bachrach). In A Festschrift for Professor Lawrence Marcus (pp. 1–13). New York: Marcel Dekker, 1993.
"The simulated countercurrent moving bed chromatographic reactor: A novel reactorseparator" (with A.K. Ray and R.W. Carr). Chem. Eng. Sci. 49, 69–480 (1994).
"An essay on contemporary criticism." New Lit. Hist. 25, 35–46 (1994).
"Complementary viewpoints: Some thoughts on binocular vision in mathematical modelling and Latin paleography." New Lit. Hist. 26, 395–417 (1995).
"Jean Mabillon." In H. Damico and J.B. Zavadil (eds.), Medieval Scholarship: Biographical Studies on the Formation of a Discipline. Vol. 1: History, New York: Garland Publishing, Inc., 1995.
"Chaos in a simple two-phase reactor" (with K. Alhumaizi). Chaos, Sol. Fracl 4, 1985 – 2014 (1994).
"Adsorption kinetics for the case of step and S-shaped isotherms" (with A.V. Kruglov). A.LCh.E.J. 41, 2393–2398 (1995).
"Mathematical models in catalyst design." In L.L. Hegedus (ed.), Catalyst Design: Progress and Perspectives, (pp. 213–244). New York: Wiley, 1987.
"Shooting method for bifurcation analysis of boundary value problems" (with X. Song and L.D. Schmidt). Chem. Eng. Commun. 84, 217–229 (1989).
"Bifurcation behavior in homogeneous-heterogeneous combustion: II. Computations for stagnation-point flow" (with X. Song, W.R. Williams, and L.D. Schmidt). Comb. Flame, 292–311 (1991).
"Ignition and extinction of homogeneous-heterogeneous combustion: CH4 and C3H8 on Pt" (with X. Song, W.R. Williams, and L.D. Schmidt). Paper presented at the 23rd International Combustion Institute, 1990.
"Reaction of a continuous mbcture in a bubbling fluidized bed" (with N.R. Amundson). In N.P. Cheremisinoff (ed.), Advances in Engineering Fluid Mechanics, (pp. 105–117). New York, 1996.
"Comparison of small metal clusters: Ni, Pd, Pt, Cu, Ag, and Au" (with D.G. Vlachos and L.D. Schmidt). Z. Phys. D. 26, 156–158 (1993).
"Dynamics of catalytic reactions on metal surfaces" (with L.D. Schmidt). In Unsteady State Processes in Catalysis, (pp. 203–216). Utrecht, Holland: VSP Press, 1990.
"Computer-aided experimentation in countercurrent reaction chromatography and simulated countercurrent chromatography" (with B.B. Fish and R.W. Carr). Chem. Eng. Sci. 43, 1867–1873 (1988).
"Determination of Arrhenius constants by linear and nonlinear fitting" (with N.H. Chen). A.LCh.E.J. 38, 626–628 (1992).
"Effect of pressure on the combustion of methane near inert surfaces" (with A. Balakrishna, D.G. Vlachos and L.D. Schmidt). Comb. Flame, (to be published).
"Forcing an entire bifurcation diagram: Case studies in chemical oscillators" (with I.G. Kevrekidis and L.D. Schmidt). Physica 23D, 391 (1986).
"Finite element formulations for large-scale, coupled flows in adjacent porous and open fluid domains" (with A.G. Salinger and J.J. Derby). Int. J. Num. Meth. Fluids 18, 1185–1209 (1994).
"Continuous countercurrent moving-bed separator" (with B.B. Fish and R.W. Carr). A.LCh.E.J. 35, 737–745 (1989).
"Optimization of the countercurrent movingbed chromatographic separator" (with B.B. Fish and R.W. Carr). A.LCh.E.J. 39, 1621–627 (1993).
" Autocatalytic continuous reactions in a stirred tank: I. Multiplicity of steady states" (with P. Cicarelli). Chem. Eng. Sci. 49, 5307–5313 (1994).
"Pt-catalyzed combustion of CH4-C3-C8 mixtures" (with A. Balakrishna and L.D. Schmidt). Chem. Eng. Sci. 49, 11–18 (1994).
"Steady-state flow transitions in the radiative Rayleigh–Benard problem: Visualizing a bifurcation diagram" (with A.G. Salinger and J.J. Derby). Vid. J. Eng. Res. 3, 97–109 (1993).
"Parallel Gray/Scott reactions in a stirred vessel (Peter Gray festschrift)." Faraday-Trans. 92, 2839–2842 (1996).
"Analysis of a continuous immobilization reactor." (Eli Ruckenstein festschrift) (with K. Roenigk). Ind. Eng. Chem. Res. 35, 2889–2899 (1996).
"Reflections on Keats' equation (J. Villadsen festschrift)." Chem. Eng. Sci. 52, 2447–2455 (1997).
"Mass transfer from small ascending bubbles" (M.M. Sharma festschrift). Chem. Eng. Sci. 52 (24), 4439–4446 (1997).
"Model reduction in a class of multi-phase systems" (H. Brenner festschrift). Chem. Eng. Comm. 148–150, 285–289 (1996).
"A fine flurry of fudge factors" (AIChE Institute Lecture, 1997) Chem. Eng. Prog.
"On some dynamical diagrams of chemical reaction engineering." Chaos 9 (3) (1999).
"Dissections, transgressions and perilous paths." Mathl Comp. Model. 28, 91–101 (1998).
"The beauty of self-adjoint symmetry" (with D. Ramkrishna). Ind. Eng. Chem. Res 38 (3), 845–850 (1999).
Bibliographies by writer
Bibliographies of American writers
Chemical engineering books
Science bibliographies | Rutherford Aris bibliography | Chemistry,Engineering | 13,946 |
27,962,727 | https://en.wikipedia.org/wiki/Restriction%20site%20associated%20DNA%20markers | Restriction site associated DNA (RAD) markers are a type of genetic marker which are useful for association mapping, QTL mapping, population genetics, ecological genetics, and evolutionary genetics. The use of RAD markers for genetic mapping is often called RAD mapping. An important aspect of RAD markers and mapping is the process of isolating RAD tags, which are the DNA sequences that immediately flank each instance of a particular restriction site of a restriction enzyme throughout the genome. Once RAD tags have been isolated, they can be used to identify and genotype DNA sequence polymorphisms, mainly in the form of single nucleotide polymorphisms (SNPs). Polymorphisms that are identified and genotyped by isolating and analyzing RAD tags are referred to as RAD markers. Although genotyping by sequencing presents an approach similar to the RAD-seq method, the two differ in some substantial ways.
Isolation of RAD tags
The use of the flanking DNA sequences around each restriction site is a key feature of RAD tags. The density of RAD tags in a genome depends on the restriction enzyme used during the isolation process. Other restriction-site marker techniques, such as restriction fragment length polymorphism (RFLP) and amplified fragment length polymorphism (AFLP), instead distinguish genetic polymorphisms through fragment length differences caused by variation at restriction sites. Because RAD tag techniques sequence only the DNA flanking restriction sites rather than the whole genome, they are referred to as reduced-representation methods.
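To make the idea concrete, here is a minimal in-silico sketch of RAD tag extraction: it scans a genome string for every occurrence of a restriction-site motif and keeps a fixed-length flanking sequence on each side. The enzyme motif, tag length, and random test genome are illustrative assumptions, not part of any published protocol.

```python
# Hypothetical in-silico RAD tag extraction around a restriction site.
SBFI_SITE = "CCTGCAGG"  # 8-bp recognition sequence of SbfI, a rare cutter
TAG_LEN = 50            # flanking sequence length kept on each side (assumed)

def rad_tags(genome, site=SBFI_SITE, tag_len=TAG_LEN):
    """Yield (position, upstream tag, downstream tag) for every site occurrence."""
    pos = genome.find(site)
    while pos != -1:
        upstream = genome[max(0, pos - tag_len):pos]
        downstream = genome[pos + len(site):pos + len(site) + tag_len]
        yield pos, upstream, downstream
        pos = genome.find(site, pos + 1)

if __name__ == "__main__":
    import random
    random.seed(0)
    genome = "".join(random.choice("ACGT") for _ in range(1_000_000))
    tags = list(rad_tags(genome))
    # An 8-cutter hits roughly once per 4**8 = 65,536 bp of random sequence,
    # while a 6-cutter (e.g. NsiI) hits roughly once per 4**6 = 4,096 bp;
    # this is why the choice of enzyme sets the marker density.
    print(f"{len(tags)} SbfI sites found; ~{1_000_000 // 4**8} expected")
```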
The initial procedure to isolate RAD tags involved digesting DNA with a particular restriction enzyme, ligating biotinylated adapters to the overhangs, randomly shearing the DNA into fragments much smaller than the average distance between restriction sites, and isolating the biotinylated fragments using streptavidin beads. This procedure was used initially to isolate RAD tags for microarray analysis. More recently, the RAD tag isolation procedure has been modified for use with high-throughput sequencing on the Illumina platform, which has the benefit of greatly reduced raw error rates and high throughput. The new procedure involves digesting DNA with a particular restriction enzyme (for example: SbfI, NsiI, …), ligating the first adapter, called P1, to the overhangs, randomly shearing the DNA into fragments much smaller than the average distance between restriction sites, converting the sheared ends to blunt ends and ligating the second adapter (P2), and using PCR to specifically amplify fragments that contain both adapters. Importantly, the first adapter contains a short DNA sequence barcode, called an MID (molecular identifier), which is used as a marker to identify the different DNA samples that are pooled together and sequenced in the same reaction. The use of high-throughput sequencing to analyze RAD tags can be classified as reduced-representation sequencing, which includes, among other things, RADSeq (RAD sequencing).
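Because the P1 adapter carries the MID barcode, pooled reads can later be assigned back to their samples in software. The sketch below is a hedged illustration of that demultiplexing step; the barcode table, barcode length, and the expected SbfI overhang remnant are invented for the example.

```python
# Hypothetical demultiplexing of pooled RAD-seq reads by their inline MID barcode.
from collections import defaultdict

BARCODES = {"ACGTA": "sample_1", "TGCAT": "sample_2"}  # assumed 5-bp MIDs
OVERHANG = "TGCAGG"  # SbfI site remnant assumed right after the barcode

def demultiplex(reads, barcodes=BARCODES, overhang=OVERHANG):
    """Assign each read to a sample by its leading barcode, discarding misfits."""
    mid_len = len(next(iter(barcodes)))
    by_sample = defaultdict(list)
    for read in reads:
        sample = barcodes.get(read[:mid_len])
        # Require the restriction-site remnant as a simple sanity check.
        if sample and read[mid_len:mid_len + len(overhang)] == overhang:
            by_sample[sample].append(read[mid_len:])  # trim the barcode off
    return by_sample

reads = ["ACGTA" + "TGCAGG" + "A" * 40, "TGCAT" + "TGCAGG" + "C" * 40]
print({s: len(r) for s, r in demultiplex(reads).items()})
```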
Detection and genotyping of RAD markers
Once RAD tags have been isolated, they can be used to identify and genotype DNA sequence polymorphisms such as single nucleotide polymorphisms (SNPs). These polymorphic sites are referred to as RAD markers. The most efficient way to detect RAD markers is by high-throughput sequencing of the RAD tags, called RAD tag sequencing, RAD sequencing, RAD-Seq, or RADSeq.
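As a rough illustration of how SNPs emerge from sequenced RAD tags, reads sharing a tag can be stacked and columns carrying two well-supported bases flagged as candidate polymorphisms. This is only a sketch under assumed thresholds and a toy read stack; real RAD-seq pipelines use statistical genotype-calling models rather than a simple count filter.

```python
# Minimal sketch: flag candidate SNP columns in a stack of same-locus reads.
from collections import Counter

def snp_positions(stack, min_minor=2):
    """Return (position, base counts) wherever a second allele has enough support."""
    sites = []
    for i, column in enumerate(zip(*stack)):  # columns of the aligned stack
        counts = Counter(column)
        if len(counts) > 1 and sorted(counts.values())[-2] >= min_minor:
            sites.append((i, dict(counts)))
    return sites

stack = ["ACGTACGT", "ACGTACGT", "ACCTACGT", "ACCTACGT"]
print(snp_positions(stack))  # [(2, {'G': 2, 'C': 2})] -- a G/C polymorphism
```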
Prior to the development of high-throughput sequencing technologies, RAD markers were identified by hybridizing RAD tags to microarrays. Due to the low sensitivity of microarrays, this approach can only detect either DNA sequence polymorphisms that disrupt restriction sites and lead to the absence of RAD tags or substantial DNA sequence polymorphisms that disrupt RAD tag hybridization. Therefore, the genetic marker density that can be achieved with microarrays is much lower than what is possible with high-throughput DNA-sequencing.
History
RAD markers were first implemented using microarrays and later adapted for next-generation sequencing (NGS). The method was developed jointly by Eric Johnson's and William Cresko's laboratories at the University of Oregon around 2006. They confirmed the utility of RAD markers by identifying recombination breakpoints in D. melanogaster and by detecting QTLs in threespine sticklebacks.
ddRADseq
In 2012 a modified RAD tagging method called double digest RADseq (ddRADseq) was introduced. By adding a second restriction enzyme that replaces the random shearing step, and by applying a tight DNA size selection, it is possible to perform low-cost population genotyping. This can be an especially powerful tool for whole-genome scans for selection and for studies of population differentiation or adaptation.
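The following sketch captures the ddRADseq selection logic under simplifying assumptions: both enzymes cut wherever their motif occurs, only fragments flanked by a different enzyme at each end are kept, and a tight size window stands in for the gel or bead size selection. The motifs and window are illustrative, not a recommended library design.

```python
# Illustrative double-digest fragment selection (ddRADseq-style).
ENZ_A = "CCTGCAGG"        # rare cutter motif (e.g. SbfI) -- assumed
ENZ_B = "GATC"            # frequent cutter motif (e.g. MboI) -- assumed
SIZE_WINDOW = (250, 400)  # tight size selection in bp -- assumed

def find_all(s, motif):
    """All start positions of motif in s, including overlapping hits."""
    out, i = [], s.find(motif)
    while i != -1:
        out.append(i)
        i = s.find(motif, i + 1)
    return out

def dd_fragments(genome):
    """Fragments between adjacent cuts made by *different* enzymes, size-selected."""
    cuts = [(p, "A") for p in find_all(genome, ENZ_A)]
    cuts += [(p, "B") for p in find_all(genome, ENZ_B)]
    cuts.sort()
    kept = []
    for (p1, e1), (p2, e2) in zip(cuts, cuts[1:]):
        if e1 != e2 and SIZE_WINDOW[0] <= p2 - p1 <= SIZE_WINDOW[1]:
            kept.append(genome[p1:p2])
    return kept

if __name__ == "__main__":
    import random
    random.seed(1)
    genome = "".join(random.choice("ACGT") for _ in range(1_000_000))
    print(f"{len(dd_fragments(genome))} fragments pass the A/B and size filters")
```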
hyRAD
A study in 2016 presented a novel method called hybridization RAD (hyRAD), in which biotinylated RAD fragments covering a random fraction of the genome are used as baits for capturing homologous fragments from genomic shotgun sequencing libraries. DNA fragments are first generated by applying a ddRADseq protocol to fresh samples and are then used as hybridization-capture probes to enrich shotgun libraries in the fragments of interest. This simple and cost-effective approach allows sequencing of orthologous loci even from highly degraded DNA samples, opening new avenues of research in the field of museomics. Another advantage of the method is that it does not rely on the presence of restriction sites, which improves among-sample locus coverage. The technique was first tested on museum and fresh samples of Oedaleus decorus, a Palearctic grasshopper species, and has since been applied to the regent honeyeater and to arthropods, among other taxa. A lab protocol has been developed to implement hyRAD in birds.
See also
Genotyping by sequencing
DNA sequencing | Restriction site associated DNA markers | Chemistry,Biology | 1,199 |