id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
4,186,736 | https://en.wikipedia.org/wiki/Stephen%20P.%20Hubbell | Stephen P. Hubbell (born February 17, 1942) is an American ecologist known for his work on tropical rainforests, theoretical ecology, and biodiversity. He is a professor emeritus at the University of Georgia and the University of California, Los Angeles.
Hubbell is the author of the unified neutral theory of biodiversity and biogeography, former chair of the National Council for Science and the Environment, co-founder of the CTFS Forest Global Earth Observatory, a researcher at the Smithsonian Tropical Research Institute, and a fellow of the American Academy of Arts and Sciences. In 2016, he was awarded the International Prize for Biology.
Life and career
Hubbell received a B.A. in biology from Carleton College in 1963 and a PhD in zoology from the University of California, Berkeley, in 1969.
He is the author of the unified neutral theory of biodiversity and biogeography (UNTB), which seeks to explain the diversity and relative abundance of species in ecological communities not by niche differences but by stochastic processes (random walks) among ecologically equivalent species. Hubbell is also a senior staff scientist at the Smithsonian Tropical Research Institute in Balboa, Panama. He is also well known for tropical forest studies. In 1980, he and Robin B. Foster of the Field Museum in Chicago launched the first of the 50-hectare forest dynamics studies on Barro Colorado Island in Panama. This plot became the flagship of a global network of large permanent forest dynamics plots, all following identical measurement protocols. This global network now has more than 70 plots in 28 countries, and these plots contain more than 12,000 tree species and 7 million individual trees that are tagged, mapped, and monitored long-term for growth, survival and recruitment. The Center for Tropical Forest Science coordinates research across the global network of plots through the Smithsonian Tropical Research Institute. The program has expanded into the temperate zone, and is now known as the Forest Global Earth Observatory Network, or ForestGEO.
In 1988, while a professor at Princeton University, he founded the Committee for the National Institutes of the Environment (CNIE), a non-profit organization in Washington, D.C., funded by his fellowship from the Pew Charitable Trusts. The goal of the CNIE was to promote the creation of a government agency called the National Institutes of the Environment (NIE), modeled on the National Institutes of Health. After a dozen years, the organization became the National Council for Science and the Environment, whose mission is "to improve the scientific basis of environmental decision-making."
Hubbell was born in Gainesville, Florida. He earned his doctorate in zoology at the University of California, Berkeley, in 1969. As a professor at the University of Michigan, he taught graduate courses for the Organization for Tropical Studies in Costa Rica. Later, at Princeton University, as a professor of ecology and evolutionary biology, he continued his study of the population biology of tropical trees.
In 2003, Hubbell became Distinguished Research Professor in the Department of Plant Biology at the University of Georgia.
As a Fellow at the Pew Institute for Ocean Science, Hubbell initiated the establishment of the National Council for Science and the Environment (NCSE), which works with the parties that create and use environmental knowledge to influence environmental decisions.
Hubbell is married to evolutionary ecologist Patricia Adair Gowaty, who is also a Distinguished Professor at the University of California, Los Angeles.
Honors and awards
American Association for the Advancement of Science, Fellow, 1980
Pew Fellows Program in Conservation and the Environment, Fellow, 1990
National Council for Science and the Environment, Chair, 1991–
American Academy of Arts and Sciences, Fellow, 2003
W.S. Cooper Award, Ecological Society of America, 2006
Eminent Ecologist Award, Ecological Society of America, 2009
International Prize for Biology, 2016
Publications
References
External links
Scientific American Interview with Steve Hubbell
American ecologists
Fellows of the American Academy of Arts and Sciences
Fellows of the American Association for the Advancement of Science
University of Georgia faculty
1942 births
Living people
Carleton College alumni
University of Michigan faculty
Fellows of the Ecological Society of America
Neutral theory | Stephen P. Hubbell | [
"Biology"
] | 815 | [
"Non-Darwinian evolution",
"Neutral theory",
"Biology theories"
] |
4,186,980 | https://en.wikipedia.org/wiki/Gravity%20%28alcoholic%20beverage%29 | Gravity, in the context of fermenting alcoholic beverages, refers to the specific gravity (abbreviated SG), or relative density compared to water, of the wort or must at various stages in the fermentation. The concept is used in the brewing and wine-making industries. Specific gravity is measured by a hydrometer, refractometer, pycnometer or oscillating U-tube electronic meter.
The density of a wort is largely dependent on the sugar content of the wort. During alcohol fermentation, yeast converts sugars into carbon dioxide and alcohol. By monitoring the decline in SG over time, the brewer obtains information about the health and progress of the fermentation and determines that it is complete when gravity stops declining. Once fermentation is finished, the measured specific gravity is called the final gravity (abbreviated FG). For example, for a typical strength beer, original gravity (abbreviated OG) could be 1.050 and FG could be 1.010.
Several different scales have been used for measuring the original gravity. For historical reasons, the brewing industry largely uses the Plato scale (°P), which is essentially the same as the Brix scale used by the wine industry. For example, OG 1.050 is roughly equivalent to 12°P.
By considering the original gravity, the brewer or vintner obtains an indication of the probable ultimate alcoholic content of their product. The OE (original extract) is often referred to as the "size" of the beer and is, in Europe, often printed on the label as Stammwürze or sometimes just as a percent. In the Czech Republic, for example, common descriptions are "10 degree beers" and "12 degree beers", which refers to the gravity in Plato of the wort before fermentation.
Low vs. high gravity beers
The difference between the original gravity of the wort and the final gravity of the beer is an indication of how much sugar has been turned into alcohol. The bigger the difference, the greater the amount of alcohol present and hence the stronger the beer. This is why strong beers are sometimes referred to as high gravity beers, and "session" or "small" beers are called low gravity beers, even though in theory the final gravity of a strong beer might be lower than that of a session beer because of the greater amount of alcohol present.
Terms related to gravity
Specific gravity
Specific gravity is the ratio of the density of a sample (of any substance) to the density of water. The ratio depends on the temperature and pressure of both the sample and water. The pressure is always considered (in brewing) to be one standard atmosphere, and the temperature is usually 20 °C for both sample and water, but in some parts of the world different temperatures may be used and there are hydrometers sold calibrated to, for example, 60 °F. It is important, where any conversion to °P is involved, that the proper pair of temperatures be used for the conversion table or formula being employed. The current ASBC table is (20 °C/20 °C), meaning that the density is measured at 20 °C and referenced to the density of water at 20 °C (i.e. 0.998203 g/cm³). Mathematically,

$$\mathrm{SG}_{20/20} = \frac{\rho_{\text{sample}}(20\ ^\circ\mathrm{C})}{\rho_{\text{water}}(20\ ^\circ\mathrm{C})}$$
This formula gives the true specific gravity, i.e. the one based on densities. Brewers cannot (unless using a U-tube meter) measure density directly and so must use a hydrometer, whose stem is bathed in air, or pycnometer weighings which are also done in air. Hydrometer readings and the ratio of pycnometer weights are influenced by the buoyancy of air (see the article Specific Gravity for details) and are called "apparent" readings. True readings are easily obtained from apparent readings by

$$\mathrm{SG}_{\text{true}} = \mathrm{SG}_{\text{apparent}}\,(1 - \varepsilon) + \varepsilon, \qquad \varepsilon = \frac{\rho_{\text{air}}}{\rho_{\text{water}}} \approx 0.0012$$
However, the ASBC table uses apparent specific gravities, so many electronic density meters will produce the correct °P numbers automatically.
Original gravity (OG); original extract (OE)
The original gravity is the specific gravity measured before fermentation. From it the analyst can compute the original extract, which is the mass (grams) of sugar in 100 grams of wort (°P), by use of the Plato scale. The symbol $p$ will denote OE in the formulas which follow.
Final gravity (FG); apparent extract (AE)
The final gravity is the specific gravity measured at the completion of fermentation. The apparent extract, denoted $m$, is the °P obtained by inserting the FG into the formulas or tables in the Plato scale article. The use of "apparent" here is not to be confused with the use of that term to describe specific gravity readings which have not been corrected for the effects of air.
True extract (TE)
The amount of extract which was not converted to yeast biomass, carbon dioxide or ethanol can be estimated by removing the alcohol from beer which has been degassed and clarified by filtration or other means. This is often done as part of a distillation in which the alcohol is collected for quantitative analysis, but it can also be done by evaporation in a water bath. If the residue is made back up to the original volume of beer which was subject to the evaporation process, and the specific gravity of that reconstituted beer is measured and converted to Plato using the tables and formulas in the Plato article, then the TE is the extract content of the reconstituted beer (writing $P(\cdot)$ for the function that converts SG to °P):

$$q = P(\mathrm{SG}_{\text{reconstituted}})$$

See the Plato article for details. TE is denoted by the symbol $q$. This is the number of grams of extract remaining in 100 grams of beer at the completion of fermentation.
Alcohol content
Knowing $p$, the amount of extract in 100 grams of wort before fermentation, and $q$, the number of grams of extract in 100 grams of beer at its completion, the amount of alcohol (in grams) formed during the fermentation can be determined. The formula, attributed to Balling, follows:

$$A_{bw} = \frac{p - q}{2.0665 - 0.010665\,p}$$

where $A_{bw}$ gives the number of grams of alcohol per 100 grams of beer, i.e. the ABW. Note that the alcohol content depends not only on the diminution of extract, $p - q$, but also on a multiplicative factor which depends on the OE. De Clerck tabulated Balling's values for this factor, but they can be calculated simply from $p$:

$$f(p) = \frac{1}{2.0665 - 0.010665\,p}$$
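The Balling computation is easy to mechanize. Below is a minimal Python sketch (the function and variable names are illustrative, not from the brewing literature):

```python
def balling_abw(oe, te):
    """Alcohol by weight (g ethanol per 100 g of beer) via the Balling
    formula, from original extract `oe` and true extract `te` in deg P."""
    return (oe - te) / (2.0665 - 0.010665 * oe)

print(balling_abw(12.0, 4.5))  # ~3.87 g ethanol per 100 g beer
```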
The Balling formula is fine for those who wish to go to the trouble to compute TE (whose real value lies in determining attenuation), but that is only a small fraction of brewers. Others want a simpler, quicker route to determining alcoholic strength. This lies in Tabarie's Principle, which states that the depression of specific gravity in beer to which ethanol is added is the same as the depression of water to which an equal amount of alcohol (on a w/w basis) has been added. Use of Tabarie's principle lets us calculate the true extract of a beer with apparent extract as

$$q = P\!\left(\mathrm{FG} + 1 - d(A_{bw})\right)$$

where $P(\cdot)$ is a function that converts SG to °P (see Plato), $P^{-1}(\cdot)$ (see Plato) its inverse, and $d(A_{bw})$ is the specific gravity of an aqueous ethanol solution of strength $A_{bw}$ by weight at 20 °C. Inserting this into the alcohol formula, the result, after rearrangement, is

$$A_{bw} = \frac{p - P\!\left(\mathrm{FG} + 1 - d(A_{bw})\right)}{2.0665 - 0.010665\,p}$$
This can be solved, albeit iteratively, for $A_{bw}$ as a function of OE and AE. It is again possible to come up with a relationship of the form

$$A_{bw} = f'(p)\,(p - m)$$

De Clerck also tabulates values for $f'(p)$.
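The iterative solution can be sketched as a simple fixed-point loop. In the Python sketch below, the cubic SG-to-°P fit from the "Brewer's points" section stands in for $P(\cdot)$, and a crude linear approximation is assumed for $d(A_{bw})$ in place of De Clerck's tables; both choices are illustrative only:

```python
def plato(sg):
    # cubic SG-to-degrees-Plato fit (see the "Brewer's points" section)
    return -616.868 + 1111.14*sg - 630.272*sg**2 + 135.997*sg**3

def abw_iterative(og, fg, d=lambda a: 1.0 - 0.0018 * a):
    """Fixed-point iteration for A_bw. `d(a)` approximates the SG of an
    a % w/w aqueous ethanol solution; the linear form used here is a
    rough illustrative stand-in for De Clerck's tables."""
    p = plato(og)
    a = 0.0
    for _ in range(100):  # the damped oscillation converges quickly
        a = (p - plato(fg + 1.0 - d(a))) / (2.0665 - 0.010665 * p)
    return a

print(abw_iterative(1.050, 1.010))  # ~4.1 g ethanol per 100 g beer
```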
Most brewers and consumers are used to having alcohol content reported by volume (ABV) rather than weight. Interconversion is simple, but the specific gravity of the beer must be known:

$$A_{bv} = A_{bw} \times \frac{\mathrm{FG}}{0.7907}$$

where 0.7907 is the 20 °C/20 °C specific gravity of ethanol. This is the number of cubic centimetres of ethanol in 100 cc of beer.
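A one-line Python sketch of the conversion (names are illustrative):

```python
def abv_from_abw(abw, fg):
    # ABV = ABW * (SG of beer / SG of ethanol);
    # 0.7907 is ethanol's 20 C/20 C specific gravity
    return abw * fg / 0.7907

print(abv_from_abw(4.1, 1.010))  # ~5.24 % by volume
```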
Because ABV depends on multiplicative factors (one of which depends on the original extract and one on the final) as well as the difference between OE and AE, it is impossible to come up with an exact formula of the form

$$A_{bv} = c\,(\mathrm{OG} - \mathrm{FG})$$

where $c$ is a simple constant. However, because of the near linear relationship between extract and (SG − 1) (see specific gravity), in particular $p \approx 250\,(\mathrm{SG} - 1)$, the ABV formula can be written in the form

$$A_{bv} \approx c(p, \mathrm{FG})\,(\mathrm{OG} - \mathrm{FG})$$

in which the coefficient $c(p, \mathrm{FG})$ collects the multiplicative factors. If the value of $f'$ given above, 0.4187, which corresponds to an OE of 12°P, is used and 1.010 is taken as a typical FG, then this simplifies to

$$A_{bv} \approx 132.75\,(\mathrm{OG} - \mathrm{FG})$$

With typical values of 1.050 and 1.010 for, respectively, OG and FG this simplified formula gives an ABV of 5.31% as opposed to 5.23% for the more accurate formula. Formulas for alcohol similar to this last simple one abound in the brewing literature and are very popular among home brewers. Formulas such as this one make it possible to mark hydrometers with "potential alcohol" scales based on the assumption that the FG will be close to 1, which is more likely to be the case in wine making than in brewing, and it is to vintners that these are usually sold.
Attenuation
The drop in extract during the fermentation divided by the OE represents the percentage of sugar which has been consumed. The real degree of fermentation (RDF) is based on TE:

$$\mathrm{RDF} = 100 \times \frac{p - q}{p}$$

and the apparent degree of fermentation (ADF) is based on AE:

$$\mathrm{ADF} = 100 \times \frac{p - m}{p}$$

Because of the near linear relationship between (SG − 1) and °P, gravity points can be used in place of °P in the ADF formula.
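Both degrees of fermentation are one-liners in code; a Python sketch using the definitions above (the points-based ADF relies on the near-linear approximation described in the next section):

```python
def rdf(oe, te):
    """Real degree of fermentation (%), from original and true extract in deg P."""
    return 100.0 * (oe - te) / oe

def adf_points(og, fg):
    """Apparent degree of fermentation (%) from gravities, using
    points = 1000*(SG - 1) as a stand-in for deg P."""
    return 100.0 * (og - fg) / (og - 1.0)

print(adf_points(1.050, 1.010))  # 80.0, as in the worked example below
```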
Brewer's points
The relationship between SG and °P can be roughly approximated using the rule-of-thumb conversion "brewer's points divided by four", where the "brewing" or "gravity" points are the thousandths of SG above 1:

$$\text{points} = 1000 \times (\mathrm{SG} - 1)$$

The amount of extract in degrees Plato is thus approximately given by the points divided by 4:

$${}^\circ\mathrm{P} \approx \frac{\text{points}}{4} = 250 \times (\mathrm{SG} - 1)$$
As an example, a wort of SG 1.050 would be said to have 1000(1.050 − 1) = 50 points, and contain 50/4 = 12.5 °P of extract. This is simply the linear approximation to the true relationship between SG and °P.
However, the above approximation has increasingly large error for increasing values of specific gravity, deviating e.g. by 0.67°P when SG = 1.080. A much more accurate (mean average error less than 0.02°P) conversion can be made using the following formula:

$${}^\circ\mathrm{P} = -616.868 + 1111.14\,\mathrm{SG} - 630.272\,\mathrm{SG}^2 + 135.997\,\mathrm{SG}^3$$

where the specific gravity is to be measured at a temperature of T = 20 °C. The equivalent relation giving SG at 20 °C for a given °P is:

$$\mathrm{SG} = 1 + \frac{{}^\circ\mathrm{P}}{258.6 - 227.1\,({}^\circ\mathrm{P}/258.2)}$$
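A Python sketch comparing the rule-of-thumb and cubic conversions quoted above (function names are illustrative):

```python
def plato_linear(sg):
    # rule of thumb: brewer's points divided by four
    return 1000.0 * (sg - 1.0) / 4.0

def plato_cubic(sg):
    # higher-accuracy cubic fit quoted above (SG at 20 C)
    return -616.868 + 1111.14*sg - 630.272*sg**2 + 135.997*sg**3

def sg_from_plato(p):
    # inverse relation quoted above, giving SG at 20 C
    return 1.0 + p / (258.6 - 227.1 * (p / 258.2))

print(plato_linear(1.080))  # 20.0
print(plato_cubic(1.080))   # ~19.33, the 0.67 deg P deviation noted above
print(sg_from_plato(12.0))  # ~1.048
```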
Points can be used in the ADF and RDF formulas. Thus a beer with OG 1.050 which fermented to 1.010 would be said to have attenuated 100 × (50 − 10)/50 = 80%. Points can also be used in the SG versions of the alcohol formulas. It is simply necessary to multiply by 1000 as points are 1000 times (SG − 1).
Software tools are available to brewers to convert between the various units of measurement and to adjust mash ingredients and schedules to meet target values. The resulting data can be exchanged via BeerXML to other brewers to facilitate accurate replication.
See also
Plato scale
References
Equations
Brewing | Gravity (alcoholic beverage) | [
"Mathematics"
] | 2,185 | [
"Mathematical objects",
"Equations"
] |
4,187,137 | https://en.wikipedia.org/wiki/Higman%27s%20lemma | In mathematics, Higman's lemma states that the set of finite sequences over a finite alphabet , as partially ordered by the subsequence relation, is well-quasi-ordered. That is, if is an infinite sequence of words over a finite alphabet , then there exist indices such that can be obtained from by deleting some (possibly none) symbols. More generally this remains true when is not necessarily finite, but is itself well-quasi-ordered, and the subsequence relation is generalized into an "embedding" relation that allows the replacement of symbols by earlier symbols in the well-quasi-ordering of . This is a special case of the later Kruskal's tree theorem. It is named after Graham Higman, who published it in 1952.
Proof
Let $\Sigma$ be a well-quasi-ordered alphabet of symbols (in particular, $\Sigma$ could be finite and ordered by the identity relation). Suppose for a contradiction that there exist infinite bad sequences, i.e. infinite sequences of words $w_1, w_2, w_3, \ldots$ such that no $w_i$ embeds into a later $w_j$. Then there exists an infinite bad sequence of words that is minimal in the following sense: $w_1$ is a word of minimum length from among all words that start infinite bad sequences; $w_2$ is a word of minimum length from among all infinite bad sequences that start with $w_1$; $w_3$ is a word of minimum length from among all infinite bad sequences that start with $w_1, w_2$; and so on. In general, $w_n$ is a word of minimum length from among all infinite bad sequences that start with $w_1, \ldots, w_{n-1}$.
Since no $w_i$ can be the empty word, we can write $w_i = a_i z_i$ for a symbol $a_i \in \Sigma$ and a word $z_i$. Since $\Sigma$ is well-quasi-ordered, the sequence of leading symbols $a_1, a_2, \ldots$ must contain an infinite increasing subsequence $a_{i_1} \le a_{i_2} \le \cdots$ with $i_1 < i_2 < \cdots$.
Now consider the sequence of words

$$w_1, w_2, \ldots, w_{i_1 - 1}, z_{i_1}, z_{i_2}, z_{i_3}, \ldots$$

Because $z_{i_1}$ is shorter than $w_{i_1}$, this sequence is "more minimal" than $(w_i)$, and so it must contain a word that embeds into a later word. But the two words cannot both be $w$'s, because then the original sequence would not be bad. Similarly, it cannot be that a $w_j$ embeds into a later $z_{i_k}$, because then $w_j$ would also embed into $w_{i_k} = a_{i_k} z_{i_k}$. And similarly, it cannot be that $z_{i_j}$ embeds into $z_{i_k}$ for $j < k$, because then $w_{i_j} = a_{i_j} z_{i_j}$ would embed into $w_{i_k} = a_{i_k} z_{i_k}$ (since $a_{i_j} \le a_{i_k}$). In every case we arrive at a contradiction.
Ordinal type
The ordinal type of $\Sigma^*$ is related to the ordinal type of $\Sigma$ as follows:

$$o(\Sigma^*) = \begin{cases} \omega^{\omega^{o(\Sigma)-1}}, & \text{if } o(\Sigma) \text{ is finite;} \\ \omega^{\omega^{o(\Sigma)+1}}, & \text{if } o(\Sigma) = \varepsilon_\alpha + n \text{ for some ordinal } \alpha \text{ and some finite } n; \\ \omega^{\omega^{o(\Sigma)}}, & \text{otherwise.} \end{cases}$$
Reverse-mathematical calibration
Higman's lemma has been reverse-mathematically calibrated (in terms of subsystems of second-order arithmetic) as equivalent to $\mathrm{ACA}_0$ over the base theory $\mathrm{RCA}_0$.
References
Citations
Wellfoundedness
Order theory
Lemmas | Higman's lemma | [
"Mathematics"
] | 513 | [
"Mathematical induction",
"Order theory",
"Wellfoundedness",
"Combinatorics",
"Combinatorics stubs",
"Mathematical problems",
"Mathematical theorems",
"Lemmas"
] |
4,187,401 | https://en.wikipedia.org/wiki/Lozani%C4%87%27s%20triangle | Lozanić's triangle (sometimes called Losanitsch's triangle) is a triangular array of binomial coefficients in a manner very similar to that of Pascal's triangle. It is named after the Serbian chemist Sima Lozanić, who researched it in his investigation into the symmetries exhibited by rows of paraffins (archaic term for alkanes) and isomer types and number of alkanes.
The first few lines of Lozanić's triangle are
1
1 1
1 1 1
1 2 2 1
1 2 4 2 1
1 3 6 6 3 1
1 3 9 10 9 3 1
1 4 12 19 19 12 4 1
1 4 16 28 38 28 16 4 1
1 5 20 44 66 66 44 20 5 1
1 5 25 60 110 126 110 60 25 5 1
1 6 30 85 170 236 236 170 85 30 6 1
1 6 36 110 255 396 472 396 255 110 36 6 1
1 7 42 146 365 651 868 868 651 365 146 42 7 1
1 7 49 182 511 1001 1519 1716 1519 1001 511 182 49 7 1
1 8 56 231 693 1512 2520 3235 3235 2520 1512 693 231 56 8 1
listed in the OEIS as sequence A034851.
Like Pascal's triangle, outer edge diagonals of Lozanić's triangle are all 1s, and most of the enclosed numbers are the sum of the two numbers above. But for numbers at odd positions k in even-numbered rows n (starting the numbering for both with 0), after adding the two numbers above, subtract the number at position (k − 1)/2 in row n/2 − 1 of Pascal's triangle.
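A short Python sketch of this rule, reproducing the rows listed above (function names are illustrative):

```python
from math import comb

def lozanic(rows):
    """Rows 0..rows-1 of Lozanic's triangle, built by the rule above:
    Pascal's recurrence, with a binomial coefficient subtracted at odd
    positions k in even-numbered rows n (both numbered from 0)."""
    tri = [[1]]
    for n in range(1, rows):
        prev = tri[-1]
        row = [1]
        for k in range(1, n):
            v = prev[k - 1] + prev[k]
            if n % 2 == 0 and k % 2 == 1:
                v -= comb(n // 2 - 1, (k - 1) // 2)
            row.append(v)
        row.append(1)
        tri.append(row)
    return tri

for row in lozanic(7):
    print(row)   # final row printed: [1, 3, 9, 10, 9, 3, 1]
```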
The diagonals next to the edge diagonals contain the positive integers in order, but with each integer stated twice.
Moving inwards, the next pair of diagonals contain the "quarter-squares" (OEIS A002620), or the square numbers and pronic numbers interleaved.
The next pair of diagonals contain the alkane numbers l(6, n) (). And the next pair of diagonals contain the alkane numbers l(7, n) (), while the next pair has the alkane numbers l(8, n) (), then alkane numbers l(9, n) (), then l(10, n) (), l(11, n) (), l(12, n) (), etc.
The sum of the nth row of Lozanić's triangle is $2^{n-1} + 2^{\lfloor (n-1)/2 \rfloor}$.
The sums of the diagonals of Lozanić's triangle intermix sequences related to the Fibonacci numbers (where Fx is the xth Fibonacci number).
As expected, laying Pascal's triangle over Lozanić's triangle and subtracting yields a triangle with the outer diagonals consisting of zeroes. This particular difference triangle has applications in the chemical study of catacondensed polygonal systems.
References
S. M. Losanitsch, Die Isomerie-Arten bei den Homologen der Paraffin-Reihe, Chem. Ber. 30 (1897), 1917–1926.
N. J. A. Sloane, Classic Sequences
Factorial and binomial topics
Triangles of numbers | Lozanić's triangle | [
"Mathematics"
] | 698 | [
"Factorial and binomial topics",
"Triangles of numbers",
"Combinatorics"
] |
4,187,554 | https://en.wikipedia.org/wiki/Chinese%20magic%20mirror | The Chinese magic mirror () traces back to at least the 5th century, although their existence during the Han dynasty (206 BC – 24 AD) has been claimed. The mirrors were made out of solid bronze. The front was polished and could be used as a mirror, while the back has a design cast in the bronze, or other decoration. When sunlight or other bright light shines onto the mirror, the mirror appears to become transparent. If that light is reflected from the mirror onto a wall, the pattern on the back of the mirror is then projected onto the wall.
Bronze mirrors were the standard in many Eurasian cultures, but most lacked this characteristic, as did most Chinese bronze mirrors.
Construction
Robert Temple describes their construction:
The basic mirror shape, with the design on the back, was cast flat, and the convexity of the surface produced afterwards by elaborate scraping and scratching. The surface was then polished to become shiny. The stresses set up by these processes caused the thinner parts of the surface to bulge outwards and become more convex than the thicker portions. Finally, a mercury amalgam was laid over the surface; this created further stresses and preferential buckling. The result was that imperfections of the mirror surface matched the patterns on the back, although they were too minute to be seen by the eye. But when the mirror reflected bright sunlight against a wall, with the resultant magnification of the whole image, the effect was to reproduce the patterns as if they were passing through the solid bronze by way of light beams.
History
China
In about 800 AD, during the Tang dynasty (618–907), a book entitled Record of Ancient Mirrors described the method of crafting solid bronze mirrors with decorations, written characters, or patterns on the reverse side that could cast these in a reflection on a nearby surface as light struck the front, polished side of the mirror; due to this seemingly transparent effect, they were called "light-penetration mirrors" by the Chinese.
This Tang-era book was lost over the centuries, but magic mirrors were described in the Dream Pool Essays by Shen Kuo (1031–1095), who owned three of them as family heirlooms. Perplexed as to how solid metal could be transparent, Shen guessed that some sort of quenching technique was used to produce tiny wrinkles on the face of the mirror too small to be observed by the eye. Although his explanation of different cooling rates was incorrect, he was right to suggest the surface contained minute variations which the naked eye could not detect; these mirrors also had no transparent quality at all, as discovered by the British scientist William Bragg in 1932. Bragg noted that "Only the magnifying effect of reflection makes them [the designs] plain".
Japan
As the manufacture of mirrors in China increased, it expanded to Korea and Japan. In fact, Emperor Cao Rui and the Wei Kingdom of China gave numerous bronze mirrors (known as Shinju-kyo in Japan) to Queen Himiko of Wa (Japan), where they were received as rare and mysterious objects. They were described as "sources of honesty" as they were said to reflect all good and evil without error. That is why Japan considers a sacred mirror called Yata-no-Kagami to be one of the three great imperial treasures.
Today, Yamamoto Akihisa is said to be the last manufacturer of magic mirrors in Japan. The Kyoto Journal interviewed the craftsman and he explained a small portion of the technique, that he learned from his father.
Western Europe
The first magic mirror to appear in Western Europe was owned by the director of the Paris Observatory, who, on his return from China, brought back several mirrors, one of them magic. The latter was presented as an unknown object to the French Academy of Sciences in 1844. In total, just four magic mirrors were brought from China to Europe, but in 1878 two engineering professors presented to the Royal Society of London several examples they had brought from Japan. The English called the artefacts "open mirrors" and for the first time made technical observations regarding their construction.
In 2022, the Cincinnati Art Museum discovered that they had a Chinese magic mirror in their collection. The curator, Hou-mei Sung, discovered that a mirror in their collection reflected an image of Amitabha, an important figure in Chinese Buddhism, his name being inscribed on the back of the mirror.
See also
TLV mirror
References
Chinese art
Optical illusions
Chinese inventions
Bronze mirrors | Chinese magic mirror | [
"Physics"
] | 901 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
4,187,559 | https://en.wikipedia.org/wiki/Dan%20Milisavljevic | Dan Milisavljevic (born January 31, 1980) is a Canadian astronomer and assistant professor of physics and astronomy at Purdue University.
Milisavljevic received his undergraduate education at McMaster University, where he was enrolled in the prestigious McMaster Arts and Science Programme. Upon graduation in 2004, he was awarded the Commonwealth Scholarship to study at the London School of Economics. There he pursued an MSc in the Philosophy and History of Science, and completed a dissertation on the interpretation of quantum mechanics. In June 2011, Milisavljevic obtained a PhD in physics and astronomy from Dartmouth College. Afterwards, he was a postdoctoral fellow at the Center for Astrophysics Harvard & Smithsonian before joining the faculty at Purdue.
Milisavljevic specializes in observational work in supernovae and supernova remnants. He is also known for aiding in the discovery of Uranus's moons Ferdinand, Trinculo, and Francisco; and Neptune's moons Halimede, Sao, Laomedeia and Neso.
References
External links
Personal Homepage of Dan Milisavljevic at Dartmouth College
Alumni of the London School of Economics
21st-century Canadian astronomers
Canadian expatriate academics in the United States
Dartmouth College alumni
McMaster University alumni
Living people
1980 births | Dan Milisavljevic | [
"Astronomy"
] | 255 | [
"Astronomers",
"Astronomer stubs",
"Astronomy stubs"
] |
4,187,653 | https://en.wikipedia.org/wiki/TVU%20Networks%20Corporation | TVU Networks Corporation is a privately held company headquartered in Mountain View, California, specializing in hardware and software products for the television broadcasting industry.
History
Early years
In April 2010, TVU Networks released its first product, TVU Player, a live streaming viewer client that provided free live television programming to computers with broadband connections. The service was discontinued on February 25, 2013.
Later in 2010, on September 11, TVU introduced its first internet protocol (IP)-based hardware device, the TVUPack TM8000, a mobile backpack transmitter. This allowed broadcasters to deliver a live high-definition (HD) signal over the internet with a latency of two seconds, even in low-bandwidth environments. Over the next two years, the TVUPack underwent significant improvements, resulting in a smaller and lighter design along with additional features. The aggregated cellular transmission technology used in the TVUPack and similar devices offered an alternative to traditional satellite trucks, revolutionizing on-location live reporting for television stations.
The technology behind backpack-style cellular transmitters is known in the broadcast industry as "bonded cellular" or "aggregated cellular." This term refers to the synchronization of multiple connections that work together to provide a reliable signal.
The company expanded its IP-based product line with the launch of TVU Anywhere, a mobile news-gathering app that works on iOS and Android devices, and TVU Grid, a cloud-based solution for point-to-multipoint live video distribution. Gray Television was the first national station group to deploy TVU Grid at launch.
Recent developments
In 2015, the company released TVU One, a portable transmitter, as the successor to the original TVU Packs. The transmitter is 90% smaller than the first-generation cellular packs. TVU also entered into a partnership with drone manufacturer DJI in the same year, with the two companies collaborating on integrating their products for drone applications.
Starting in 2020, TVU shifted its focus to the development of cloud-native applications to address the broadcast industry's transition from traditional SDI to IP.
On July 11, 2024, TVU Networks announced an alliance with the BBC to cover the 2024 UK elections. This partnership used TVU's technology to deliver 369 live feeds intended to enhance the BBC's ability to provide comprehensive election coverage across various platforms.
References
Television technology
Companies based in Mountain View, California
Software companies established in 2005
2005 establishments in California
Electronics companies established in 2005
American companies established in 2005 | TVU Networks Corporation | [
"Technology"
] | 515 | [
"Information and communications technology",
"Television technology"
] |
4,187,856 | https://en.wikipedia.org/wiki/Primary%20instrument | A primary instrument is a scientific instrument, which by its physical characteristics is accurate and is not calibrated against anything else. A primary instrument must be able to be exactly duplicated anywhere, anytime with identical results.
Example
Pressure: a U-tube filled with water is a primary instrument, as the water-column height differential follows directly from the applied pressure and the fixed properties of water, a basic physical substance. It is accurate due to its nature. Similarly, a liquid-in-glass thermometer is a primary instrument, as a temperature change causes an unchangeable, reproducible change in the height of the liquid column.
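For the U-tube example, the indicated pressure follows from the hydrostatic relation ΔP = ρgh alone, which is what makes the instrument primary; a Python sketch with illustrative values:

```python
RHO_WATER = 998.2   # kg/m^3 near 20 C (assumed ambient condition)
G = 9.80665         # standard gravity, m/s^2

def u_tube_pressure(height_m):
    """Differential pressure (Pa) indicated by a water-column height
    difference, from the hydrostatic relation dP = rho * g * h."""
    return RHO_WATER * G * height_m

print(u_tube_pressure(0.10))  # a 10 cm column indicates ~979 Pa
```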
Secondary instruments
Secondary instruments must be calibrated against a primary standard. For example:
a dial Bourdon-tube pressure gauge must be calibrated against a water or mercury U-tube to assure good accuracy.
Time: the Earth moving in its orbit is primary; clocks must be calibrated against it.
Measurement | Primary instrument | [
"Physics",
"Mathematics"
] | 179 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
4,187,880 | https://en.wikipedia.org/wiki/Interlocus%20contest%20evolution | Interlocus contest evolution (ICE) is a process of intergenomic conflict by which different loci within a single genome antagonistically coevolve. ICE supposes that the Red Queen process, which is characterized by a never-ending antagonistic evolutionary arms race, does not only apply to species but also to genes within the genome of a species.
Because sexual recombination allows different gene loci to evolve semi-autonomously, genes have the potential to coevolve antagonistically. ICE occurs when "an allelic substitution at one locus selects for a new allele at the interacting locus, and vice versa." As a result, ICE can lead to a chain reaction of perpetual gene substitution at antagonistically interacting loci, and no stable equilibrium can be achieved. The rate of evolution thus increases at that locus.
ICE is thought to be the dominant mode of evolution for genes controlling social behavior. The ICE process can explain many biological phenomena, including intersexual conflict, parent–offspring conflict, and interference competition.
Intersexual conflict
A fundamental conflict between the sexes lies in differences in investment: males generally invest predominantly in fertilization while females invest predominantly in offspring. This conflict manifests itself in many traits associated with sexual reproduction. Genes expressed in only one sex are selectively neutral in the other sex; male- and female-linked genes can therefore be acted upon separately by selection and will evolve semi-autonomously. Thus, one sex of a species may evolve to better itself rather than better the species as a whole, sometimes with negative results for the opposite sex: loci will antagonistically coevolve to enhance male reproductive success at females’ expense on the one hand, and to enhance female resistance to male coercion on the other. This is an example of intralocus sexual conflict, and is unlikely to be resolved fully throughout the genome. However, in some cases this conflict may be resolved by the restriction of the gene’s expression to only the sex that it benefits, resulting in sexual dimorphism.
The ICE theory can explain the differentiation of the human X- and Y-chromosomes. Semi-autonomous evolution may have promoted genes beneficial to females in the X-chromosome even when detrimental to males, and genes beneficial to males in the Y-chromosome, even when detrimental to females. As the X-chromosome is three times as prevalent as the Y-chromosome (X chromosomes make up 3/4 of the sex chromosomes in offspring, Y chromosomes only 1/4), the Y-chromosome has a reduced opportunity for rapid evolution. Thus, the Y-chromosome has "shed" its genes to leave only the essential ones (such as the SRY gene), which gives rise to the differences in the X- and Y-chromosomes.
Parent–offspring conflict
A father, mother and offspring may differ in the optimal resource allocation to the offspring. This co-evolutionary conflict can be considered in the context of ICE. Selection will favor genes in the male to maximize female investment in the current offspring, no matter the consequences to the female's reproduction later in life, while selection will favor genes in the female that increase her overall lifetime fitness. Genes expressed in the offspring will be selected to produce an intermediary level of resource allocation between the male-benefit and female-benefit loci. This three-way conflict again occurs when parents feed their offspring, as the optimum feeding rate and optimum point in time to discontinue feeding differ between father, mother and offspring.
Interference competition
ICE can also explain the theory of interference competition, which is most likely to be associated with opposing sets of genes that determine the outcome of competition between individuals. Different sets of genes may code for signal or receiver phenotypes, such as in the context of threat displays: when a competing male can win more contests by intimidation, rather than by fighting, selection will favor the accumulation of deceitful genes that may not be honest indicators of the male’s fighting capability.
For example, primitive male elephant seals may have used the lowest frequencies in the threat call of a rival as an indication of body size. The elephant seal's enormous nose may have evolved as a resonating device to amplify low frequencies, illustrating selection that favors the production of low-frequency threat vocalizations. However, this counter-selects for receptor systems that provide an increased threshold required for intimidation, which in turn selects for deeper threat vocalizations. The rapid divergence of threat displays among closely related species provides further evidence in support of the co-evolutionary arms race within the genome of a single species, driven by the ICE process.
References
Genetics
Evolution | Interlocus contest evolution | [
"Biology"
] | 952 | [
"Genetics"
] |
4,188,497 | https://en.wikipedia.org/wiki/St.%20Louis%20Union%20Station | St. Louis Union Station is a National Historic Landmark and former train station in St. Louis, Missouri, United States. At its 1894 opening, the station was the largest in the world. Traffic peaked at 100,000 people a day in the 1940s. The last Amtrak passenger train left the station in 1978.
In the 1980s, it was renovated as a hotel, shopping center, and entertainment complex. The 2010s and 2020s saw more renovation and expansion of entertainment and office capacity. The hotel portion of the station is currently a member of Historic Hotels of America, the official program of the National Trust for Historic Preservation.
An adjacent station serves the light-rail MetroLink Red and Blue Lines, which run under the station in the Union Station subway tunnel. The city's intercity train station sits to the south, serving MetroLink, Amtrak, and Greyhound Bus.
History
19th century
The station was opened on September 1, 1894, by the Terminal Railroad Association of St. Louis. The station was designed by Theodore Link and included three main areas: the Headhouse, the Midway, and the Train Shed, the last designed by civil engineer George H. Pegram. The headhouse originally housed a hotel, a restaurant, passenger waiting rooms and railroad ticketing offices. It featured a gold-leafed Grand Hall, Romanesque arches, a barrel-vaulted ceiling and stained-glass windows. The clock tower is 280 feet (85 m) high.
Union Station's headhouse and midway are constructed of Indiana limestone, and the station initially included 32 tracks under its vast trainshed, terminating in the stub-end terminal. Its Grand Hall, which cost around $6.5 million, was considered to be one of the most beautiful public lobbies.
At its opening, it was the world's largest and busiest railroad station and its trainshed was the largest roof span in the world.
20th century
In 1903, Union Station was expanded to accommodate visitors to the 1904 St. Louis World's Fair. In the 1920s, it remained the largest American railroad terminal.
At its height, the station combined the St. Louis passenger services of 22 railroads, the most of any single terminal in the world. In the 1940s, it handled 100,000 passengers a day. During World War II, German actor Til Kiwe was recaptured in the station's waiting room after escaping from a POW camp in Colorado.
The 1940s expansion added a new ticket counter designed as a half-circle, and a mural by Louis Grell atop the customer waiting area depicted the history of St. Louis with an old-fashioned steam engine, two large steamboats and the Eads Bridge in the background.
The famous photograph of Harry S. Truman holding aloft the erroneous Chicago Tribune headline, "Dewey Defeats Truman", was shot at the station as Truman headed back to Washington, D.C., from Independence, Missouri, after the 1948 Presidential election.
As airliners became the primary mode of long-distance travel and railroad passenger services declined in the 1950s and 1960s, the massive station became obsolete and too expensive to maintain for its original purpose. By 1961, several tracks had been paved over for parking. Amtrak took over passenger service in 1971 but abandoned Union Station on October 31, 1978. By then, Amtrak had cut back service to four routes per day–the State House, the Ann Rutledge, the National Limited (formerly the Spirit of St. Louis) and the Inter-American. The eight total trains were nowhere near enough to justify the use of such a large facility. The last train to leave Union Station was a Chicago-bound Inter-American. Passenger service shifted to a temporary-style "Amshack" two blocks east. Amtrak has since moved its St. Louis service to the Gateway Transportation Center, one block east of Union Station.
The station was designated a National Historic Landmark in 1970, as an important surviving example of large-scale railroad architecture from the late 19th century. It was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1981.
In August 1985, after a $150 million renovation designed by HOK, Union Station was reopened with a 539-room hotel, shopping mall, restaurants and food court. Federal historic rehabilitation tax credits were used to transform Union Station into one of the city's most visited attractions. The station rehabilitation by Conrad Schmitt Studios remains one of the largest adaptive re-use projects in the United States. The hotel is housed in the headhouse and part of the train shed, which also houses a lake and shopping, entertainment and dining establishments. Omni Hotels was the original hotel operator, followed by Hyatt Regency Hotel chain and Marriott Hotels.
21st century
In 2010–11, the station's Marriott Hotel in the main terminal building was expanded. It took over the station's Midway area; all stores were moved to the train shed shopping arcade. In 2012, Lodging Hospitality Management bought Union Station and rebranded the hotel as a DoubleTree. In August 2016, Lodging Hospitality Management announced plans to renovate Union Station once again, including plans for an aquarium. The Memories Museum features artifacts and displays about the history of St. Louis Union Station and rail travel in the United States. Located on the upper level of the train shed, the museum is a joint project of Union Station Associates and the National Museum of Transportation. The original architectural drawings and blueprints for Union Station and the original hotel are available to researchers at the Washington University Archives at Washington University in St. Louis. Some architectural elements from the building have been removed in renovations and taken to the Sauget, Illinois, storage site of the National Building Arts Center.
St. Louis Union Station was the venue for the FIRST Tech Challenge World Championship component of the FIRST Championship, hosted in St. Louis every April until 2017, after which it was moved to Detroit.
The station's train shed area features The St. Louis Wheel, a 200-foot-tall (61 m) observation wheel with 42 gondolas.
Inside the station is The St. Louis Rope Course, a three-story indoor ropes and zip line course.
Union Station has two light show features: one in the train shed area, and another inside Union Station Hotel's lobby.
In January 2020, Build-A-Bear Workshop, Inc. moved their global headquarters to downtown St. Louis inside the Grand Central Building inside the Union Station complex. The company also opened their new Build-A-Bear Workshop Union Station headquarters store and operates a Build-A-Bear Radio studio and other experiential elements at their new headquarters. Additionally, a Ferris wheel, an aquarium, and an abundance of restaurants were added to Union Station in 2020.
In 2020, the St. Louis Aquarium opened in the former shopping mall space in the building. The aquarium is home to more than 13,000 animals representing over 250 species.
Transportation
MetroLink
MetroLink, the St. Louis region's light rail system, serves Union Station via the Red and Blue lines. The station is located beneath the train shed in the historic Union Station Baggage Tunnel. This tunnel was originally constructed in the 1890s as a below grade transfer area for baggage between trains. It was converted and opened for MetroLink usage in 1993 and has seen several renovations over the years, most notably in 2010 and 2016. The tunnel is expected to see another major renovation in 2024.
It takes about 30 minutes to travel to either terminal at St. Louis Lambert International Airport via the Red Line.
Gateway Transportation Station
The city's major transportation hub, Gateway Multimodal Transportation Center, is located two blocks from Union Station. It also serves MetroLink in addition to local buses and national connections with Amtrak, Greyhound and other services.
Taxi and rideshare
St. Louis Union Station has 24-hour taxi service at its north entrance on Market Street.
Filming
In 1981, areas of the then disused station were used in the filming of John Carpenter's movie Escape from New York. A scene involving the captured President was shot in the station's train shed and the film's gladiatorial fight was staged in the Grand Hall.
Gallery
See also
List of railway stations
References
Further reading
The St. Louis Union Station: a monograph by the architect and officers of the Terminal Railroad Association of St. Louis.
External links
Transportation buildings and structures in St. Louis
St. Louis
Clock towers in Missouri
Saint Louis
Former railway stations in Missouri
Historic American Engineering Record in Missouri
Historic Civil Engineering Landmarks
History of St. Louis
Landmarks of St. Louis
Museums in St. Louis
National Historic Landmarks in Missouri
Rail in St. Louis
Railway hotels in the United States
Railway stations on the National Register of Historic Places in Missouri
Romanesque Revival architecture in Missouri
Shopping districts and streets in the United States
Former New York, Chicago and St. Louis Railroad stations
Towers in Missouri
Tourist attractions in St. Louis
Railroad-related National Historic Landmarks
Railroad museums in Missouri
National Register of Historic Places in St. Louis
Art Nouveau architecture in Missouri
Art Nouveau railway stations
Downtown West, St. Louis
1894 establishments in Missouri
Shopping malls in Missouri
Railway stations in the United States closed in 1978 | St. Louis Union Station | [
"Engineering"
] | 1,867 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
4,189,047 | https://en.wikipedia.org/wiki/224%20%28number%29 | 224 (two hundred [and] twenty-four) is the natural number following 223 and preceding 225.
In mathematics
224 is a practical number,
and a sum of two positive cubes, $224 = 2^3 + 6^3$. It is also $2^3 + 3^3 + 4^3 + 5^3$, making it one of the smallest numbers to be the sum of distinct positive cubes in more than one way.
224 is the smallest k with λ(k) = 24, where λ(k) is the Carmichael function.
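Both of these facts are easy to verify by brute force; a Python sketch (the function name is an illustrative choice):

```python
from math import gcd

def carmichael(n):
    """Carmichael function: smallest k >= 1 with a**k == 1 (mod n)
    for every a coprime to n (brute force; fine for small n)."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    k = 1
    while any(pow(a, k, n) != 1 for a in units):
        k += 1
    return k

assert carmichael(224) == 24
assert all(carmichael(m) != 24 for m in range(1, 224))   # smallest such k
assert 2**3 + 6**3 == 224 == 2**3 + 3**3 + 4**3 + 5**3   # two cube sums
```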
The mathematician and philosopher Alex Bellos suggested in 2014 that a candidate for the lowest uninteresting number would be 224 because it was, at the time, "the lowest number not to have its own page on [the English-language version of] Wikipedia". That distinction now belongs to 315.
In other areas
In the SHA-2 family of six cryptographic hash functions, the weakest is SHA-224, named because it produces 224-bit hash values. It was defined in this way so that the number of bits of security it provides (half of its output length, 112 bits) would match the key length of two-key Triple DES.
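The 224-bit output length can be seen directly with Python's standard hashlib module:

```python
import hashlib

digest = hashlib.sha224(b"example").hexdigest()
print(len(digest) * 4)  # 56 hex digits * 4 bits each = 224-bit hash value
```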
The ancient Phoenician shekel was a standardized measure of silver, equal to 224 grains, although other forms of the shekel employed in other ancient cultures (including the Babylonians and Hebrews) had different measures. Likely not coincidentally, as far as ancient Burma and Thailand, silver was measured in a unit called a tikal, equal to 224 grains.
See also
224 (disambiguation)
References
Integers | 224 (number) | [
"Mathematics"
] | 307 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
4,189,127 | https://en.wikipedia.org/wiki/River%20ecosystem | River ecosystems are flowing waters that drain the landscape, and include the biotic (living) interactions amongst plants, animals and micro-organisms, as well as abiotic (nonliving) physical and chemical interactions of its many parts. River ecosystems are part of larger watershed networks or catchments, where smaller headwater streams drain into mid-size streams, which progressively drain into larger river networks. The major zones in river ecosystems are determined by the river bed's gradient or by the velocity of the current. Faster moving turbulent water typically contains greater concentrations of dissolved oxygen, which supports greater biodiversity than the slow-moving water of pools. These distinctions form the basis for the division of rivers into upland and lowland rivers.
The food base of streams within riparian forests is mostly derived from the trees, but wider streams and those that lack a canopy derive the majority of their food base from algae. Anadromous fish are also an important source of nutrients. Environmental threats to rivers include loss of water, dams, chemical pollution and introduced species. A dam produces negative effects that continue down the watershed. The most important negative effects are the reduction of spring flooding, which damages wetlands, and the retention of sediment, which leads to the loss of deltaic wetlands.
River ecosystems are prime examples of lotic ecosystems. Lotic refers to flowing water, from the Latin , meaning washed. Lotic waters range from springs only a few centimeters wide to major rivers kilometers in width. Much of this article applies to lotic ecosystems in general, including related lotic systems such as streams and springs. Lotic ecosystems can be contrasted with lentic ecosystems, which involve relatively still terrestrial waters such as lakes, ponds, and wetlands. Together, these two ecosystems form the more general study area of freshwater or aquatic ecology.
The following unifying characteristics make the ecology of running waters unique among aquatic habitats: the flow is unidirectional, there is a state of continuous physical change, and there is a high degree of spatial and temporal heterogeneity at all scales (microhabitats), the variability between lotic systems is quite high and the biota is specialized to live with flow conditions.
Abiotic components (non-living)
The non-living components of an ecosystem, such as stone, air, and soil, are called abiotic components.
Water flow
Unidirectional water flow is the key factor in lotic systems influencing their ecology. Streamflow can be continuous or intermittent. Streamflow is the result of the summative inputs from groundwater, precipitation, and overland flow. Water flow can vary between systems, ranging from torrential rapids to slow backwaters that almost seem like lentic systems. The speed or velocity of the water flow of the water column can also vary within a system and is subject to chaotic turbulence, though water velocity tends to be highest in the middle part of the stream channel (known as the thalweg). This turbulence results in divergences of flow from the mean downslope flow vector as typified by eddy currents. The mean flow rate vector is based on the variability of friction with the bottom or sides of the channel, sinuosity, obstructions, and the incline gradient. In addition, the amount of water input into the system from direct precipitation, snowmelt, and/or groundwater can affect the flow rate. The amount of water in a stream is measured as discharge (volume per unit time). As water flows downstream, streams and rivers most often gain water volume, so at base flow (i.e., no storm input), smaller headwater streams have very low discharge, while larger rivers have much higher discharge. The "flow regime" of a river or stream includes the general patterns of discharge over annual or decadal time scales, and may capture seasonal changes in flow.
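Discharge as volume per unit time can be illustrated with a minimal calculation; the rectangular cross-section and the numbers below are simplifying assumptions for illustration only:

```python
def discharge(width_m, depth_m, velocity_m_s):
    """Discharge Q = A * v in m^3/s, idealizing the channel
    cross-section A as a width x depth rectangle."""
    return width_m * depth_m * velocity_m_s

print(discharge(4.0, 0.5, 0.3))    # small headwater stream: 0.6 m^3/s
print(discharge(200.0, 8.0, 1.2))  # large river: 1920 m^3/s
```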
While water flow is strongly determined by slope, flowing waters can alter the general shape or direction of the stream bed, a characteristic also known as geomorphology. The profile of the river water column is made up of three primary actions: erosion, transport, and deposition. Rivers have been described as "the gutters down which run the ruins of continents". Rivers are continuously eroding, transporting, and depositing substrate, sediment, and organic material. The continuous movement of water and entrained material creates a variety of habitats, including riffles, glides, and pools.
Light
Light is important to lotic systems, because it provides the energy necessary to drive primary production via photosynthesis, and can also provide refuge for prey species in the shadows it casts. The amount of light that a system receives can be related to a combination of internal and external stream variables. The area surrounding a small stream, for example, might be shaded by surrounding forests or by valley walls. Larger river systems tend to be wide, so the influence of external variables is minimized and the sun reaches the surface. These rivers also tend to be more turbulent, however, and particles in the water increasingly attenuate light as depth increases. Seasonal and diurnal factors might also play a role in light availability, because the angle of incidence (the angle at which light strikes the water) can lead to light lost from reflection: the shallower the angle, the more light is reflected. In addition, in accordance with Beer's law, the amount of solar radiation received declines exponentially with depth. Additional influences on light availability include cloud cover, altitude, and geographic position.
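A small Python sketch of Beer's law attenuation (the extinction coefficient used is an illustrative value):

```python
import math

def irradiance_at_depth(i0, k, z):
    """Beer's law: light declines exponentially with depth z (m),
    governed by the extinction coefficient k (1/m)."""
    return i0 * math.exp(-k * z)

# With an illustrative k of 2/m for turbid water, only ~13.5% of
# surface light remains at 1 m depth.
print(irradiance_at_depth(100.0, 2.0, 1.0))
```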
Temperature
Most lotic species are poikilotherms whose internal temperature varies with their environment, thus temperature is a key abiotic factor for them. Water can be heated or cooled through radiation at the surface and conduction to or from the air and surrounding substrate. Shallow streams are typically well mixed and maintain a relatively uniform temperature within an area. In deeper, slower moving water systems, however, a strong difference between the bottom and surface temperatures may develop. Spring fed systems have little variation as springs are typically from groundwater sources, which are often very close to ambient temperature. Many systems show strong diurnal fluctuations and seasonal variations are most extreme in arctic, desert and temperate systems. The amount of shading, climate and elevation can also influence the temperature of lotic systems.
Chemistry
Water chemistry in river ecosystems varies depending on which dissolved solutes and gases are present in the water column of the stream. Specifically river water can include, apart from the water itself,
dissolved inorganic matter and major ions (calcium, sodium, magnesium, potassium, bicarbonate, sulphide, chloride)
dissolved inorganic nutrients (nitrogen, phosphorus, silica)
suspended and dissolved organic matter
gases (nitrogen, nitrous oxide, carbon dioxide, oxygen)
trace metals and pollutants
Dissolved ions and nutrients
Dissolved stream solutes can be considered either reactive or conservative. Reactive solutes are readily biologically assimilated by the autotrophic and heterotrophic biota of the stream; examples can include inorganic nitrogen species such as nitrate or ammonium, some forms of phosphorus (e.g., soluble reactive phosphorus), and silica. Other solutes can be considered conservative, which indicates that the solute is not taken up and used biologically; chloride is often considered a conservative solute. Conservative solutes are often used as hydrologic tracers for water movement and transport. Both reactive and conservative stream water chemistry is foremost determined by inputs from the geology of its watershed, or catchment area. Stream water chemistry can also be influenced by precipitation, and the addition of pollutants from human sources. Large differences in chemistry do not usually exist within small lotic systems due to a high rate of mixing. In larger river systems, however, the concentrations of most nutrients, dissolved salts, and pH decrease as distance increases from the river's source.
Dissolved gases
In terms of dissolved gases, oxygen is likely the most important chemical constituent of lotic systems, as all aerobic organisms require it for survival. It enters the water mostly via diffusion at the water-air interface. Oxygen's solubility in water decreases as water pH and temperature increases. Fast, turbulent streams expose more of the water's surface area to the air and tend to have low temperatures and thus more oxygen than slow, backwaters. Oxygen is a byproduct of photosynthesis, so systems with a high abundance of aquatic algae and plants may also have high concentrations of oxygen during the day. These levels can decrease significantly during the night when primary producers switch to respiration. Oxygen can be limiting if circulation between the surface and deeper layers is poor, if the activity of lotic animals is very high, or if there is a large amount of organic decay occurring.
Suspended matter
Rivers can also transport suspended inorganic and organic matter. These materials can include sediment or terrestrially-derived organic matter that falls into the stream channel. Often, organic matter is processed within the stream via mechanical fragmentation, consumption and grazing by invertebrates, and microbial decomposition. This processing breaks leaves and woody debris down from recognizable coarse particulate organic matter (CPOM) into particulate organic matter (POM), and further down to fine particulate organic matter (FPOM). Woody and non-woody plants have different instream breakdown rates, with leafy plants or plant parts (e.g., flower petals) breaking down faster than woody logs or branches.
Substrate
The inorganic substrate of lotic systems is composed of the geologic material present in the catchment that is eroded, transported, sorted, and deposited by the current. Inorganic substrates are classified by size on the Wentworth scale, which ranges from boulders, to pebbles, to gravel, to sand, and to silt. Typically, substrate particle size decreases downstream with larger boulders and stones in more mountainous areas and sandy bottoms in lowland rivers. This is because the higher gradients of mountain streams facilitate a faster flow, moving smaller substrate materials further downstream for deposition. Substrate can also be organic and may include fine particles, autumn shed leaves, large woody debris such as submerged tree logs, moss, and semi-aquatic plants. Substrate deposition is not necessarily a permanent event, as it can be subject to large modifications during flooding events.
Biotic components (living)
The living components of an ecosystem are called the biotic components. Streams have numerous types of biotic organisms that live in them, including bacteria, primary producers, insects and other invertebrates, as well as fish and other vertebrates.
Microorganisms
Bacteria are present in large numbers in lotic waters. Free-living forms are associated with decomposing organic material, biofilm on the surfaces of rocks and vegetation, in between particles that compose the substrate, and suspended in the water column. Other forms are also associated with the guts of lotic organisms as parasites or in commensal relationships. Bacteria play a large role in energy recycling (see below).
Diatoms are one of the dominant groups of periphytic algae in lotic systems and have been widely used as efficient indicators of water quality, because they respond quickly to environmental change, especially organic pollution and eutrophication, and exhibit a broad spectrum of tolerances to conditions ranging from oligotrophic to eutrophic.
Biofilm
A biofilm is a combination of algae (diatoms etc.), fungi, bacteria, and other small microorganisms that exist in a film along the streambed or the benthos. Biofilm assemblages themselves are complex, and add to the complexity of a streambed.
The different biofilm components (algae and bacteria being the principal ones) are embedded in a matrix of extracellular polymeric substances (EPS); they act as net receptors of inorganic and organic material and remain subject to the influence of various environmental factors.
Biofilms are one of the main biological interphases in river ecosystems, and probably the most important in intermittent rivers, where the importance of the water column is reduced during extended low-activity periods of the hydrological cycle. Biofilms can be understood as microbial consortia of autotrophs and heterotrophs coexisting in a matrix of hydrated extracellular polymeric substances (EPS). These two main biological components are, respectively, mainly algae and cyanobacteria on one side, and bacteria and fungi on the other. Micro- and meiofauna also inhabit the biofilm, preying on its organisms and organic particles and contributing to its development and dispersal. Biofilms therefore form a highly active biological consortium, ready to use organic and inorganic materials from the water phase, and ready to use light or chemical energy sources. The EPS immobilizes the cells and keeps them in close proximity, allowing intense interactions, including cell-to-cell communication and the formation of synergistic consortia. The EPS is able to retain extracellular enzymes, allowing materials from the environment to be captured and transformed into dissolved nutrients for use by algae and bacteria. At the same time, the EPS protects the cells from desiccation as well as from other external hazards (e.g., biocides and UV radiation). On the other hand, the dense packing and the protective EPS layer limit the diffusion of gases and nutrients, especially for cells far from the biofilm surface, which limits their survival and creates strong gradients within the biofilm. Both the physical structure of the biofilm and the plasticity of the organisms that live within it ensure and support survival in harsh environments and under changing environmental conditions.
Primary producers
Algae, consisting of phytoplankton and periphyton, are the most significant sources of primary production in most streams and rivers. Phytoplankton float freely in the water column and thus are unable to maintain populations in fast flowing streams. They can, however, develop sizeable populations in slow moving rivers and backwaters. Periphyton are typically filamentous and tufted algae that can attach themselves to objects to avoid being washed away by fast currents. In places where flow rates are negligible or absent, periphyton may form a gelatinous, unanchored floating mat.
Plants exhibit limited adaptations to fast flow and are most successful in reduced currents. More primitive plants, such as mosses and liverworts, attach themselves to solid objects. This typically occurs in colder headwaters, where the mostly rocky substrate offers attachment sites. Some plants are free-floating at the water's surface in dense mats, like duckweed or water hyacinth. Others are rooted and may be classified as submerged or emergent. Rooted plants usually occur in areas of slackened current where fine-grained soils are found. These rooted plants are flexible, with elongated leaves that offer minimal resistance to current.
Living in flowing water can be beneficial to plants and algae because the current is usually well aerated and it provides a continuous supply of nutrients. These organisms are limited by flow, light, water chemistry, substrate, and grazing pressure. Algae and plants are important to lotic systems as sources of energy, for forming microhabitats that shelter other fauna from predators and the current, and as a food resource.
Insects and other invertebrates
Up to 90% of invertebrates in some lotic systems are insects. These species exhibit tremendous diversity and can be found occupying almost every available habitat, including the surfaces of stones, deep below the substratum in the hyporheic zone, adrift in the current, and in the surface film.
Insects have developed several strategies for living in the diverse flows of lotic systems. Some avoid high current areas, inhabiting the substratum or the sheltered side of rocks. Others have flat bodies to reduce the drag forces they experience from living in running water. Some insects, like the giant water bug (Belostomatidae), avoid flood events by leaving the stream when they sense rainfall. In addition to these behaviors and body shapes, insects have different life history adaptations to cope with the naturally-occurring physical harshness of stream environments. Some insects time their life events based on when floods and droughts occur. For example, some mayflies synchronize when they emerge as flying adults with when snowmelt flooding usually occurs in Colorado streams. Other insects do not have a flying stage and spend their entire life cycle in the river.
Like most of the primary consumers, lotic invertebrates often rely heavily on the current to bring them food and oxygen. Invertebrates are important as both consumers and prey items in lotic systems.
The common orders of insects found in river ecosystems include Ephemeroptera (mayflies), Trichoptera (caddisflies), Plecoptera (stoneflies), Diptera (true flies), some types of Coleoptera (beetles), Odonata (the group that includes dragonflies and damselflies), and some types of Hemiptera (true bugs).
Additional invertebrate taxa common to flowing waters include mollusks such as snails, limpets, clams, and mussels, as well as crustaceans such as crayfish, amphipods, and crabs.
Fish and other vertebrates
Fish are probably the best-known inhabitants of lotic systems. The ability of a fish species to live in flowing waters depends upon the speed at which it can swim and the duration for which that speed can be maintained. This ability varies greatly between species and is tied to the habitat in which a fish can survive. Continuous swimming expends a tremendous amount of energy, so fishes spend only short periods in full current. Instead, individuals remain close to the bottom or the banks, behind obstacles, and sheltered from the current, swimming in the current only to feed or change locations. Some species have adapted to living only on the system bottom, never venturing into the open water flow. These fishes are dorsoventrally flattened to reduce flow resistance and often have eyes on top of their heads to observe what is happening above them. Some also have sensory barbels positioned under the head to assist in probing the substratum.
Lotic systems typically connect to each other, forming a path to the ocean (spring → stream → river → ocean), and many fishes have life cycles that require stages in both fresh and salt water. Salmon, for example, are anadromous species that are born in freshwater but spend most of their adult life in the ocean, returning to fresh water only to spawn. Eels are catadromous species that do the opposite, living in freshwater as adults but migrating to the ocean to spawn.
Other vertebrate taxa that inhabit lotic systems include amphibians such as salamanders; reptiles (e.g., snakes, turtles, crocodiles, and alligators); various bird species; and mammals (e.g., otters, beavers, hippos, and river dolphins). With the exception of a few species, these vertebrates are not tied to water as fishes are, and spend part of their time in terrestrial habitats. Many fish species are important as consumers and as prey for the larger vertebrates mentioned above.
Trophic level dynamics
The concept of trophic levels is used in food webs to visualise the manner in which energy is transferred from one part of an ecosystem to another. Trophic levels can be assigned numbers indicating how far along the food chain an organism sits.
Level one: Producers, plant-like organisms that generate their own food using solar radiation, including algae, phytoplankton, mosses and lichens.
Level two: Consumers, animal-like organisms that get their energy from eating producers, such as zooplankton, small fish, and crustaceans.
Level three: Decomposers, organisms that break down the dead matter of consumers and producers and return the nutrients to the system. Examples are bacteria and fungi.
All energy transactions within an ecosystem derive from a single external source of energy, the sun. Some of this solar radiation is used by producers (plants) to turn inorganic substances into organic substances that can be used as food by consumers (animals). Plants release portions of this energy back into the ecosystem through catabolic processes. Animals then consume the potential energy stored by the producers. The cycle is completed by the death of the consumer organism, which returns nutrients to the ecosystem, allowing further plant growth. Breaking cycles down into levels makes it easier for ecologists to understand ecological succession when observing the transfer of energy within a system.
Top-down and bottom-up effects
A common issue in trophic level dynamics is how resources and production are regulated. The use of, and interaction between, resources have a large impact on the structure of food webs as a whole. Temperature plays a role in food web interactions, including top-down and bottom-up forces within ecological communities. Bottom-up regulation occurs when a resource available at the base of the food web increases productivity, which then moves up the chain and influences the biomass available to higher trophic levels. Top-down regulation occurs when a predator population increases, limiting the available prey population and thereby limiting the availability of energy for lower trophic levels within the food chain. Many biotic and abiotic factors can influence top-down and bottom-up interactions.
Trophic cascade
Another example of food web interaction is the trophic cascade. Understanding trophic cascades has allowed ecologists to better understand the structure and dynamics of food webs within an ecosystem. The phenomenon of trophic cascades allows keystone predators to structure an entire food web through the way they interact with their prey. Trophic cascades can cause drastic changes in the energy flow within a food web. For example, when a top or keystone predator consumes organisms below it in the food web, the density and behavior of the prey change. This, in turn, affects the abundance of organisms consumed further down the chain, resulting in a cascade down the trophic levels. Empirical evidence shows, however, that trophic cascades are much more prevalent in aquatic food webs than in terrestrial ones.
Food chain
A food chain is a linear system of links that is part of a food web and represents the order in which organisms are consumed from one trophic level to the next. Each link in a food chain is associated with a trophic level in the ecosystem. The number of steps it takes for energy to pass from the initial source at the bottom to the top of the food web is called the food chain length. While food chain lengths can fluctuate, aquatic ecosystems start with primary producers that are consumed by primary consumers, which are consumed by secondary consumers, which in turn can be consumed by tertiary consumers, and so on until the top of the food chain has been reached.
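Treating the food web as a graph makes this definition operational: food chain length is the longest path of feeding links from a basal resource to a top consumer. The toy web below is invented for illustration and is not taken from any particular river study.

```python
from functools import lru_cache

# A toy river food web: each organism maps to the set of things it eats;
# basal resources (producers and detritus) eat nothing.
FOOD_WEB = {
    "algae": set(),
    "detritus": set(),
    "grazer mayfly": {"algae"},
    "shredder caddisfly": {"detritus"},
    "predatory stonefly": {"grazer mayfly", "shredder caddisfly"},
    "trout": {"predatory stonefly", "grazer mayfly"},
}

@lru_cache(maxsize=None)
def chain_length(organism):
    """Feeding links on the longest path from this organism to a basal resource."""
    prey = FOOD_WEB[organism]
    if not prey:
        return 0  # basal resource
    return 1 + max(chain_length(p) for p in prey)

# The food chain length of the whole web is the longest such path.
print(max(chain_length(o) for o in FOOD_WEB))
# -> 3 (algae -> grazer mayfly -> predatory stonefly -> trout)
```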
Primary producers
Primary producers start every food chain. They produce energy and nutrients from sunlight through photosynthesis. Algae contribute much of the energy and nutrients at the base of the food chain, along with terrestrial litter-fall that enters the stream or river. The organic, carbon-based compounds that producers synthesize are what get transferred up the food chain. Primary producers are consumed by herbivorous invertebrates that act as the primary consumers. The productivity of these producers, and the function of the ecosystem as a whole, are influenced by the organisms above them in the food chain.
Primary consumers
Primary consumers are the invertebrates and macroinvertebrates that feed upon the primary producers. They play an important role in initiating the transfer of energy from the basal trophic level to the next. They are regulatory organisms that facilitate and control rates of nutrient cycling and the mixing of aquatic and terrestrial plant materials, and they transport and retain some of those nutrients and materials. These invertebrates fall into several functional feeding groups: grazers, which feed on the algal biofilm that collects on submerged objects; shredders, which feed on large leaves and detritus and help break down large material; filter feeders, macroinvertebrates that rely on stream flow to deliver fine particulate organic matter (FPOM) suspended in the water column; and gatherers, which feed on FPOM found on the substrate of the river or stream.
Secondary consumers
The secondary consumers in a river ecosystem are the predators of the primary consumers, mainly insectivorous fish. Their consumption of invertebrate insects and macroinvertebrates is another step of energy flow up the food chain. Depending on their abundance, these predatory consumers can shape an ecosystem through their effects on the trophic levels below them. When fish are abundant and eat many invertebrates, algal biomass and primary production in the stream are greater; when secondary consumers are absent, algal biomass may decrease due to the high abundance of primary consumers. The energy and nutrients that start with primary producers continue to make their way up the food chain and, depending on the ecosystem, may end with these predatory fish.
Food web complexity
Diversity, productivity, species richness, composition, and stability are all interconnected by a series of feedback loops. Communities can have a series of complex, direct and/or indirect responses to major changes in biodiversity. Food webs involve a wide array of variables; the three main variables ecologists examine are species richness, biomass or productivity, and stability (resistance to change). When a species is added to or removed from an ecosystem, it affects the remaining food web; the intensity of this effect is related to species connectedness and food web robustness. When a new species is added to a river ecosystem, the intensity of the effect is related to the robustness, or resistance to change, of the current food web. When a species is removed, the intensity of the effect is related to the connectedness of that species within the food web. An invasive species might be removed with little to no effect, but the removal of important native primary producers, prey, or predatory fish can trigger a negative trophic cascade. One highly variable component of river ecosystems is food supply (the biomass of primary producers), which changes with the seasons and among habitats within the river ecosystem. Another highly variable component is the input of nutrients from wetland and terrestrial detritus. Variability in food and nutrient supply is important for the succession, robustness, and connectedness of the organisms of a river ecosystem.
Trophic relationships
Energy inputs
Energy sources can be autochthonous or allochthonous.
Autochthonous (from the Greek "auto" = "self") energy sources are those derived from within the lotic system. During photosynthesis, for example, primary producers form organic carbon compounds out of carbon dioxide and inorganic matter. The energy they produce is important for the community because it may be transferred to higher trophic levels via consumption. Additionally, high rates of primary production can introduce dissolved organic matter (DOM) to the waters. Another form of autochthonous energy comes from the decomposition of dead organisms and feces that originate within the lotic system. In this case, bacteria decompose the detritus or coarse particulate organic matter (CPOM; >1 mm pieces) into fine particulate organic matter (FPOM; <1 mm pieces) and then further into inorganic compounds that are required for photosynthesis. This process is discussed in more detail below.
Allochthonous energy sources are those derived from outside the lotic system, that is, from the terrestrial environment. Leaves, twigs, fruits, and similar material are typical forms of terrestrial CPOM that enter the water by direct litter fall or lateral leaf blow. In addition, terrestrially derived animal materials, such as feces or carcasses added to the system, are examples of allochthonous CPOM. The CPOM undergoes a specific process of degradation. Allan gives the example of a leaf fallen into a stream. First, the soluble chemicals are dissolved and leached from the leaf upon its saturation with water, adding to the DOM load in the system. Next, microbes such as bacteria and fungi colonize the leaf, softening it as the mycelium of the fungi grows into it. The composition of the microbial community is influenced by the species of tree from which the leaves are shed (Rubbo and Kiesecker 2004). This combination of bacteria, fungi, and leaf is a food source for shredding invertebrates, which leave only FPOM after consumption. These fine particles may be colonized by microbes again or serve as a food source for animals that consume FPOM. Organic matter can also enter the lotic system already in the FPOM stage by wind, surface runoff, bank erosion, or groundwater. Similarly, DOM can be introduced through canopy drip from rain or from surface flows.
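In stream ecology, the leaf-breakdown process described above is conventionally summarized as first-order exponential decay, with mass remaining m(t) = m0·e^(−kt) and a breakdown coefficient k that differs among leaf species and conditions. The sketch below uses this standard model; the particular k values are placeholders, not measurements.

```python
import math

def mass_remaining(m0_grams, k_per_day, days):
    """First-order litter breakdown: m(t) = m0 * exp(-k * t)."""
    return m0_grams * math.exp(-k_per_day * days)

# Hypothetical breakdown coefficients: a fast-decaying soft leaf versus
# slowly decaying woody material.
for label, k in [("soft leaf", 0.02), ("woody twig", 0.002)]:
    left = mass_remaining(10.0, k, days=90)
    print(f"{label}: {left:.2f} g of 10 g remaining after 90 days")
```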
Invertebrates
Invertebrates can be organized into many feeding guilds in lotic systems. Some species are shredders, which use large and powerful mouthparts to feed on non-woody CPOM and its associated microorganisms. Others are suspension feeders, which use their setae, filtering apparatuses, nets, or even secretions to collect FPOM and microbes from the water. These species may be passive collectors, utilizing the natural flow of the system, or they may generate their own current to draw in water and, with it, FPOM (Allan). Members of the gatherer-collector guild actively search for FPOM under rocks and in other places where the stream flow has slackened enough to allow deposition. Grazing invertebrates use scraping, rasping, and browsing adaptations to feed on periphyton and detritus. Finally, several families are predatory, capturing and consuming animal prey. Both the number of species and the abundance of individuals within each guild are largely dependent upon food availability, and these values may therefore vary across both seasons and systems.
Fish
Fish can also be placed into feeding guilds. Planktivores pick plankton out of the water column. Herbivore-detritivores are bottom-feeding species that ingest both periphyton and detritus indiscriminately. Surface and water column feeders capture surface prey (mainly terrestrial and emerging insects) and drift (benthic invertebrates floating downstream). Benthic invertebrate feeders prey primarily on immature insects, but will also consume other benthic invertebrates. Top predators consume fishes and/or large invertebrates. Omnivores ingest a wide range of prey. These can be floral, faunal, and/or detrital in nature. Finally, parasites live off of host species, typically other fishes. Fish are flexible in their feeding roles, capturing different prey with regard to seasonal availability and their own developmental stage. Thus, they may occupy multiple feeding guilds in their lifetime. The number of species in each guild can vary greatly between systems, with temperate warm water streams having the most benthic invertebrate feeders, and tropical systems having large numbers of detritus feeders due to high rates of allochthonous input.
Community patterns and diversity
Local species richness
Large rivers have comparatively more species than small streams. Many relate this pattern to the greater area and volume of larger systems, as well as an increase in habitat diversity. Some systems, however, show a poor fit between system size and species richness. In these cases, a combination of factors such as historical rates of speciation and extinction, type of substrate, microhabitat availability, water chemistry, temperature, and disturbance such as flooding seem to be important.
Resource partitioning
Although many alternate theories have been postulated for the ability of guild-mates to coexist (see Morin 1999), resource partitioning has been well documented in lotic systems as a means of reducing competition. The three main types of resource partitioning include habitat, dietary, and temporal segregation.
Habitat segregation was found to be the most common type of resource partitioning in natural systems (Schoener, 1974). In lotic systems, microhabitats provide a level of physical complexity that can support a diverse array of organisms (Vinson and Hawkins, 1998). The separation of species by substrate preferences has been well documented for invertebrates. Ward (1992) divided substrate dwellers into six broad assemblages, those living in coarse substrate, gravel, sand, mud, or woody debris, and those associated with plants, showing one layer of segregation. On a smaller scale, further habitat partitioning can occur on or around a single substrate, such as a piece of gravel. Some invertebrates prefer the high-flow areas on the exposed top of the gravel, others reside in the crevices between one piece of gravel and the next, while still others live on the underside of the gravel piece.
Dietary segregation is the second-most common type of resource partitioning. High degrees of morphological specialization or behavioral differences allow organisms to use specific resources. The nets built by some species of invertebrate suspension feeders, for example, differ in size and so filter different particle sizes of FPOM from the water (Edington et al. 1984). Similarly, members of the grazing guild can specialize in harvesting algae or detritus, depending upon the morphology of their scraping apparatus. In addition, certain species show a preference for specific algal species.
Temporal segregation is a less common form of resource partitioning, but it is nonetheless an observed phenomenon. Typically, it accounts for coexistence by relating it to differences in life history patterns and in the timing of maximum growth among guild mates. Tropical fishes in Borneo, for example, have shifted to shorter life spans in response to the ecological niche reduction that accompanies increasing species richness in their ecosystem (Watson and Balon 1984).
Persistence and succession
Over long time scales, there is a tendency for species composition in pristine systems to remain in a stable state. This has been found for both invertebrate and fish species. On shorter time scales, however, flow variability and unusual precipitation patterns decrease habitat stability and can lead to declines in persistence levels. The ability to maintain persistence over long time scales is related to the ability of lotic systems to return to the original community configuration relatively quickly after a disturbance (Townsend et al. 1987). This is one example of temporal succession, a site-specific change in a community involving changes in species composition over time. Another form of temporal succession might occur when a new habitat is opened up for colonization. In these cases, an entirely new community that is well adapted to the conditions found in the new area can establish itself.
River continuum concept
The river continuum concept (RCC) was an attempt to construct a single framework describing the function of temperate lotic ecosystems from the headwaters to larger rivers, relating key characteristics to changes in the biotic community (Vannote et al. 1980). The physical basis of the RCC is size and location along the gradient from a small stream eventually linked to a large river. Stream order (see characteristics of streams) is used as the physical measure of position along the RCC.
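Stream order is usually assigned with the Strahler rule: a headwater channel has order 1, and the order increases by one only where two tributaries of equal order join. A minimal sketch over an invented tributary network follows (the network layout is an assumption for illustration):

```python
# Each channel maps to the tributaries joining just upstream of it;
# channels with no tributaries are first-order headwater streams.
NETWORK = {
    "outlet": ["A", "B"],
    "A": ["A1", "A2"],
    "B": ["B1", "B2"],
    "A1": [], "A2": [], "B1": [], "B2": [],
}

def strahler(channel):
    """Strahler order: 1 at headwaters; downstream order is the maximum
    tributary order, plus one if that maximum is shared by two or more."""
    tributaries = NETWORK[channel]
    if not tributaries:
        return 1
    orders = [strahler(t) for t in tributaries]
    top = max(orders)
    return top + 1 if orders.count(top) >= 2 else top

print(strahler("outlet"))  # -> 3 for this symmetric network
```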
According to the RCC, low ordered sites are small shaded streams where allochthonous inputs of CPOM are a necessary resource for consumers. As the river widens at mid-ordered sites, energy inputs should change. Ample sunlight should reach the bottom in these systems to support significant periphyton production. Additionally, the biological processing of CPOM (coarse particulate organic matter larger than 1 mm) inputs at upstream sites is expected to result in the transport of large amounts of FPOM (fine particulate organic matter smaller than 1 mm) to these downstream ecosystems. Plants should become more abundant at edges of the river with increasing river size, especially in lowland rivers where finer sediments have been deposited and facilitate rooting. The main channels likely have too much current and turbidity and a lack of substrate to support plants or periphyton. Phytoplankton should produce the only autochthonous inputs here, but photosynthetic rates will be limited due to turbidity and mixing. Thus, allochthonous inputs are expected to be the primary energy source for large rivers. This FPOM will come from both upstream sites via the decomposition process and through lateral inputs from floodplains.
Biota should change with this change in energy from the headwaters to the mouth of these systems. Namely, shredders should prosper in low-order systems and grazers in mid-order sites. Microbial decomposition should play the largest role in energy production in low-order sites and in large rivers, while photosynthesis, in addition to degraded allochthonous inputs from upstream, will be essential in mid-order systems. Because mid-order sites theoretically receive the largest variety of energy inputs, they might be expected to host the greatest biological diversity (Vannote et al. 1980).
Just how well the RCC actually reflects patterns in natural systems is uncertain, and its generality can be a handicap when it is applied to diverse and specific situations. The most noted criticisms of the RCC are: 1. It focuses mostly on macroinvertebrates, disregarding the fact that plankton and fish diversity are highest in high-order reaches; 2. It relies heavily on the assumption that low-order sites have high CPOM inputs, even though many streams lack riparian habitats; 3. It is based on pristine systems, which rarely exist today; and 4. It is centered on the functioning of temperate streams. Despite its shortcomings, the RCC remains a useful idea for describing how the patterns of ecological functions in a lotic system can vary from the source to the mouth.
Disturbances such as impoundment by dams, or natural events such as shore flooding, are not included in the RCC model. Various researchers have since expanded the model to account for such irregularities. For example, J.V. Ward and J.A. Stanford proposed the serial discontinuity concept in 1983, which addresses the impact of geomorphological disturbances such as impoundment and tributary inflows. The same authors presented the hyporheic corridor concept in 1993, in which the vertical (in depth) and lateral (from shore to shore) structural complexity of the river were connected. The flood pulse concept, developed by W. J. Junk in 1989 and further modified by P. B. Bayley in 1990 and K. Tockner in 2000, takes into account the large amount of nutrients and organic material that makes its way into a river from the sediment of surrounding flooded land.
Human impacts
Humans exert a geomorphic force that now rivals that of the natural Earth. The period of human dominance has been termed the Anthropocene, and several dates have been proposed for its onset. Many researchers have emphasised the dramatic changes associated with the Industrial Revolution in Europe after about 1750 CE (Common Era) and the Great Acceleration in technology at about 1950 CE.
However, a detectable human imprint on the environment extends back for thousands of years, and an emphasis on recent changes minimises the enormous landscape transformation caused by humans in antiquity. Important earlier human effects with significant environmental consequences include megafaunal extinctions between 14,000 and 10,500 cal yr BP; domestication of plants and animals close to the start of the Holocene at 11,700 cal yr BP; agricultural practices and deforestation at 10,000 to 5000 cal yr BP; and widespread generation of anthropogenic soils at about 2000 cal yr BP. Key evidence of early anthropogenic activity is encoded in early fluvial successions, long predating anthropogenic effects that have intensified over the past centuries and led to the modern worldwide river crisis.
Pollution
River pollution can include, but is not limited to: increased sediment export, excess nutrients from fertilizer or urban runoff, sewage and septic inputs, plastic pollution, nanoparticles, pharmaceuticals and personal care products, synthetic chemicals, road salt, inorganic contaminants (e.g., heavy metals), and even heat, via thermal pollution. The effects of pollution often depend on the context and the material, but they can reduce ecosystem functioning, limit ecosystem services, reduce stream biodiversity, and impact human health.
Pollutant sources of lotic systems are hard to control because they can originate, often in small amounts, over a very wide area and enter the system at many locations along its length. While direct pollution of lotic systems has been greatly reduced in the United States under the government's Clean Water Act, contaminants from diffuse non-point sources remain a large problem. Agricultural fields often deliver large quantities of sediments, nutrients, and chemicals to nearby streams and rivers. Urban and residential areas can add to this pollution when contaminants accumulate on impervious surfaces such as roads and parking lots and then drain into the system. Elevated nutrient concentrations, especially of nitrogen and phosphorus, which are key components of fertilizers, can increase periphyton growth, which can be particularly dangerous in slow-moving streams. Another pollutant, acid rain, forms from sulfur dioxide and nitrogen oxides emitted from factories and power stations. These substances readily dissolve in atmospheric moisture and enter lotic systems through precipitation. This can lower the pH of these sites, affecting all trophic levels from algae to vertebrates. Mean species richness and total species numbers within a system decrease with decreasing pH.
Flow modification
Flow modification can occur as a result of dams, water regulation and extraction, channel modification, and the destruction of the river floodplain and adjacent riparian zones.
Dams alter the flow, temperature, and sediment regime of lotic systems. Additionally, many rivers are dammed at multiple locations, amplifying the impact. Dams can cause enhanced clarity and reduced variability in stream flow, which in turn cause an increase in periphyton abundance. Invertebrates immediately below a dam can show reductions in species richness due to an overall reduction in habitat heterogeneity. Also, thermal changes can affect insect development, with abnormally warm winter temperatures obscuring cues to break egg diapause and overly cool summer temperatures leaving too few acceptable days to complete growth. Finally, dams fragment river systems, isolating previously continuous populations, and preventing the migrations of anadromous and catadromous species.
Invasive species
Invasive species have been introduced to lotic systems through both purposeful events (e.g. stocking game and food species) as well as unintentional events (e.g. hitchhikers on boats or fishing waders). These organisms can affect natives via competition for prey or habitat, predation, habitat alteration, hybridization, or the introduction of harmful diseases and parasites. Once established, these species can be difficult to control or eradicate, particularly because of the connectivity of lotic systems. Invasive species can be especially harmful in areas that have endangered biota, such as mussels in the Southeast United States, or those that have localized endemic species, like lotic systems west of the Rocky Mountains, where many species evolved in isolation.
See also
Betty's Brain software that "learns" about river ecosystems
Flood pulse concept
Lake ecosystem
Rheophile
Riparian zone
River continuum concept
River drainage system
RIVPACS
The Riverkeepers
Upland and lowland rivers
References
Further reading
Brown, A. L. 1987. Freshwater Ecology. Heinemann Educational Books, London. P. 163.
Carlisle, D. M. and M. D. Woodside. 2013. Ecological health in the nation's streams, United States Geological Survey. P. 6.
Edington, J. M., Edington, M. A., and J. A. Dorman. 1984. Habitat partitioning amongst hydropsychid larvae of a Malaysian stream. Entomologica 30: 123–129.
Hynes, H. B. N. 1970. Ecology of Running Waters. Originally published in Toronto by University of Toronto Press, 555 p.
Morin, P. J. 1999. Community Ecology. Blackwell Science, Oxford. P. 424.
Ward, J. V. 1992. Aquatic Insect Ecology: biology and habitat. Wiley, New York. P. 456.
External links
USGS real time stream flow data for gauged systems nationwide
Aquatic ecology
Ecosystems
Freshwater ecology
Limnology
Riparian zone
Rivers
Water streams | River ecosystem | [
"Biology",
"Environmental_science"
] | 9,102 | [
"Hydrology",
"Symbiosis",
"Aquatic ecology",
"Ecosystems",
"Riparian zone"
] |
4,189,252 | https://en.wikipedia.org/wiki/IBM%2037xx | IBM 37xx (or 37x5) is a family of IBM Systems Network Architecture (SNA) programmable communications controllers used mainly in mainframe environments.
All members of the family ran one of three IBM-supplied programs.
Emulation Program (EP) mimicked the operation of the older IBM 270x non-programmable controllers.
Network Control Program (NCP) supported Systems Network Architecture devices.
Partitioned Emulation Program (PEP) combined the functions of the two.
Models
370x series
3705 — the oldest of the family, introduced in 1972 to replace the non-programmable IBM 270x family. The 3705 could control up to 352 communications lines.
3704 was a smaller version, introduced in 1973. It supported up to 32 lines.
371x
The 3710 communications controller was introduced in 1984.
372x series
The 3725 and the 3720 systems were announced in 1983. The 3725 replaced the hardware line scanners used on previous 370x machines with multiple microcoded processors.
The 3725 was a large-scale node and front end processor.
The 3720 was a smaller version of the 3725, which was sometimes used as a remote concentrator.
The 3726 was an expansion unit for the 3725.
With the expansion unit, the 3725 could support up to 256 lines at data rates up to 256 kbit/s, and connect to up to eight mainframe channels.
Marketing of the 372x machines was discontinued in 1989.
IBM discontinued support for the 3705, 3720, and 3725 in 1999.
374x series
The 3745, announced in 1988, provides up to eight T1 circuits. At the time of the announcement, IBM was estimated to hold nearly 85% of the more than US$825 million market for communications controllers, ahead of rivals such as NCR Comten and Amdahl Corporation. The 3745 is no longer marketed, but still supported and used.
The 3746 "Nways Controller" model 900, unveiled in 1992, was an expansion unit for the 3745 supporting additional Token Ring and ESCON connections. A stand-alone model 950 appeared in 1995.
Successors
IBM no longer manufactures 37xx processors. The last models, the 3745/46, were withdrawn from marketing in 2002. Replacement software products are Communications Controller for Linux on System z and Enterprise Extender.
Clones
Several companies produced clones of 37xx controllers, including NCR Comten and Amdahl Corporation.
References
37xx
Computer networks
"Engineering"
] | 508 | [
"Computer networks engineering",
"Networking hardware"
] |
4,189,435 | https://en.wikipedia.org/wiki/Dill%20oil | Dill oil is an essential oil extracted from the seeds or leaves/stems (dillweed) of the Dill plant. It can be used with water to create dill water. Dill (Anethum graveolens) is an annual herb in the celery family Apiaceae. It is the sole species of the genus Anethum.
Origin
Also known as Indian dill, and originally from Southwest Asia, dill is an annual or biennial herb that grows up to 1 meter (3 feet). It has green feathery leaves and umbels of small yellow flowers, followed by tiny compressed seeds.
It was popular with the Egyptians, Greeks and Romans, who called it "Anethon" from which the botanical name was derived. The common name comes from the Anglo-Saxon dylle or dylla, which then changed to dill. The word means 'to lull' – referring to its soothing properties. In the Middle Ages it was used as a charm against witchcraft.
From 812 onwards, when Charlemagne, King of the Franks and Emperor of the Romans, ordered the extensive cultivation of this herb, it has been widely used, especially as a culinary herb.
Properties
Dill oil is known for its grass-like smell and its pale yellow color, with a watery viscosity.
Production
Dill oil is extracted by steam distillation, mainly from the seeds, or the whole herb, fresh or partly dried.
References
Essential oils | Dill oil | [
"Chemistry"
] | 301 | [
"Essential oils",
"Natural products"
] |
4,189,719 | https://en.wikipedia.org/wiki/Ethnopsychopharmacology | A growing body of research has begun to highlight differences in the way racial and ethnic groups respond to psychiatric medication.
Understanding the relationship between mental health and cultural associations is key to understanding more about how the brain works for people of different ethnic and cultural groups. Mental health can be attributed both to brain function and to environmental factors, which can have physiological effects.
It has been noted that there are "dramatic cross-ethnic and cross-national variations in the dosing practices and side-effect profiles in response to practically all classes of psychotropics."
Epidemiology
It is important to understand epidemiology briefly, since it is connected with ethnopsychopharmacology. Studying how culture affects the way disease spreads is important in order to fully understand the racial disparities that shape how Western medication is used and perceived.
Differences in drug metabolism
Drug metabolism is controlled by a number of specific enzymes, and the action of these enzymes varies among individuals. For example, most individuals show normal activity of the IID6 isoenzyme that is responsible for the metabolism of many tricyclic antidepressant medications and most antipsychotic drugs. However, studies have found that one-third of Asian Americans and African Americans have a genetic alteration that decreases the metabolic rate of the IID6 isoenzyme, leading to a greater risk of side effects and toxicity. The CYP2D6 enzyme, important for the way in which the liver clears many drugs from the body, varies greatly between individuals in ways that can be ethnically specific.
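The practical consequence of a slower-acting metabolic enzyme can be illustrated with the textbook one-compartment pharmacokinetic model, C(t) = (D/V)·e^(−k·t), where a smaller elimination rate constant k leaves more drug in circulation. All numbers below are arbitrary teaching values, not clinical parameters for any real drug or population.

```python
import math

def plasma_conc(dose_mg, vol_dist_l, k_elim_per_h, hours):
    """One-compartment model with first-order elimination."""
    return (dose_mg / vol_dist_l) * math.exp(-k_elim_per_h * hours)

# Arbitrary illustration: halving the elimination rate constant leaves
# roughly three times as much drug in plasma 24 h after the same dose,
# which is how reduced enzyme activity raises exposure and side effects.
for label, k in [("typical metabolizer", 0.10), ("slow metabolizer", 0.05)]:
    print(f"{label}: {plasma_conc(100, 50, k, 24):.2f} mg/L at 24 h")
```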
Though enzyme activity is genetically influenced, it can also be altered by cultural and environmental factors such as diet, the use of other medications, alcohol and disease states.
Differences in pharmacodynamics
If two individuals have the same blood level of a medication there may still be differences in the way that the body responds due to pharmacodynamic differences; pharmacodynamic responses may also be influenced by racial and cultural factors.
Cultural factors
In addition to biology and environment, culturally determined attitudes toward illness and its treatment may affect how an individual responds to psychiatric medication. Some cultures see suffering and illness as unavoidable and not amenable to medication, while others treat symptoms with polypharmacy, often mixing medications with herbal drugs. Cultural differences may have an effect on adherence to medication regimes as well as influence the placebo effect.
Further, the way an individual expresses and reacts to the symptoms of psychiatric illness, and the cultural expectations of the physician, may affect the diagnosis a patient receives. For example, bipolar disorder often is misdiagnosed as schizophrenia in people of color.
Recommendations for research and practice
The differential response of many ethnic minorities to certain psychiatric medications raises important concerns for both research and practice.
Include Ethnic Groups. Most studies of psychiatric medications have used white male subjects. Because there is often greater variation within racial and ethnic groups than between them, researchers must be certain they choose prototypical representatives of these groups, or use a larger random sample.
Further, broad racial and ethnic groups contain many different subgroups. In North American research, for example, it may not be enough to characterize individuals as Asian, Hispanic, Native American, or African American. Even within the same ethnic group, there are no reliable measures to determine important cultural differences.
"Start Low and Go Slow." Individuals who receive a higher dose of psychiatric medication than needed may discontinue treatment because of side effects, or they may develop toxic levels that lead to serious complications. A reasonable approach to prescribing medication to any psychiatric patient, regardless of race or culture, is to "start low and go slow".
Someday there may be a simple blood test to predict how an individual will respond to a specific class of drugs; research in these fields fall in the domain of pharmacogenomics and pharmacometabolomics.
See also
Pharmacognosy
Race and health
References
External links
Culture and Ethnicity, National Mental Health Information Center
Pharmacokinetics
Ethnobiology
Psychopharmacology
Race and health | Ethnopsychopharmacology | [
"Chemistry",
"Biology",
"Environmental_science"
] | 865 | [
"Psychopharmacology",
"Pharmacology",
"Pharmacokinetics",
"Environmental social science",
"Ethnobiology"
] |
4,189,740 | https://en.wikipedia.org/wiki/Plant%20defense%20against%20herbivory | Plant defense against herbivory or host-plant resistance is a range of adaptations evolved by plants which improve their survival and reproduction by reducing the impact of herbivores. Many plants produce secondary metabolites, known as allelochemicals, that influence the behavior, growth, or survival of herbivores. These chemical defenses can act as repellents or toxins to herbivores or reduce plant digestibility. Another defensive strategy of plants is changing their attractiveness. Plants can sense being touched, and they can respond with strategies to defend against herbivores. Plants alter their appearance by changing their size or quality in a way that prevents overconsumption by large herbivores, reducing the rate at which they are consumed.
Other defensive strategies used by plants include escaping or avoiding herbivores in time or in place, for example by growing in a location where plants are not easily found or accessed by herbivores, or by changing seasonal growth patterns. Another approach diverts herbivores toward eating non-essential parts or enhances the ability of a plant to recover from the damage caused by herbivory. Some plants support the presence of natural enemies of herbivores, which in turn protect the plant. Each type of defense can be either constitutive (always present in the plant) or induced (produced in reaction to damage or stress caused by herbivores).
Historically, insects have been the most significant herbivores, and the evolution of land plants is closely associated with the evolution of insects. While most plant defenses are directed against insects, other defenses have evolved that are aimed at vertebrate herbivores, such as birds and mammals. The study of plant defenses against herbivory is important from an evolutionary viewpoint; for the direct impact that these defenses have on agriculture, including human and livestock food sources; as beneficial 'biological control agents' in biological pest control programs; and in the search for plants of medical importance.
Evolution of defensive traits
The earliest land plants evolved from aquatic plants around 450 million years ago (Ma), in the Ordovician period. Many plants have adapted to an iodine-deficient terrestrial environment by removing iodine from their metabolism; in fact, iodine is essential only for animal cells. An important antiparasitic action is the blockade of iodide transport into animal cells through inhibition of the sodium-iodide symporter (NIS). Many plant pesticides are glycosides (such as the cardiac glycoside digitoxin) and cyanogenic glycosides that liberate cyanide, which, by blocking cytochrome c oxidase and NIS, is poisonous to a large share of parasites and herbivores but not to the plant cells, in which it seems useful in the seed dormancy phase. Iodide is not itself a pesticide but is oxidized by plant peroxidases to iodine, a strong oxidant able to kill bacteria, fungi, and protozoa.
The Cretaceous period saw the appearance of more plant defense mechanisms. The diversification of flowering plants (angiosperms) at that time is associated with the sudden burst of speciation in insects. This diversification of insects represented a major selective force in plant evolution and led to the selection of plants that had defensive adaptations. Early insect herbivores were mandibulate and bit or chewed vegetation, but the evolution of vascular plants led to the co-evolution of other forms of herbivory, such as sap-sucking, leaf mining, gall forming, and nectar-feeding.
The relative abundance of different species of plants in ecological communities including forests and grasslands may be determined in part by the level of defensive compounds in the different species. Since the cost of replacing damaged leaves is higher in conditions where resources are scarce, it may be that plants growing in areas where water and nutrients are scarce invest more resources into anti-herbivore defenses, resulting in slower plant growth.
Records of herbivores
Knowledge of herbivory in geological time comes from three sources: fossilized plants, which may preserve evidence of defense (such as spines) or herbivory-related damage; the observation of plant debris in fossilised animal feces; and the structure of herbivore mouthparts.
Long thought to be a Mesozoic phenomenon, herbivory is evident almost as soon as fossils can show it. As previously discussed, the first land plants emerged around 450 million years ago; however, herbivory, and therefore the need for plant defenses, undoubtedly evolved earlier among aquatic organisms in ancient lakes and oceans. Within 20 million years of the first fossils of sporangia and stems towards the close of the Silurian, around 420 million years ago, there is evidence that plants were being consumed. Animals fed on the spores of early Devonian plants, and the Rhynie chert provides evidence that organisms fed on plants using a "pierce and suck" technique.
During the ensuing 75 million years, plants evolved a range of more complex organs, from roots to seeds. There was a gap of 50 to 100 million years between each organ's evolution and its being eaten. Hole feeding and skeletonization are recorded in the early Permian, with surface fluid feeding evolving by the end of that period.
Co-evolution
Herbivores are dependent on plants for food and have evolved mechanisms to obtain this food despite the evolution of a diverse arsenal of plant defenses. Herbivore adaptations to plant defense have been likened to offensive traits and consist of adaptations that allow increased feeding and use of a host plant. Relationships between herbivores and their host plants often result in reciprocal evolutionary change, called co-evolution. When an herbivore eats a plant, it selects for plants that can mount a defensive response. In cases where this relationship demonstrates specificity (the evolution of each trait is due to the other) and reciprocity (both traits must evolve), the species are thought to have co-evolved.
The "escape and radiation" mechanism for co-evolution presents the idea that adaptations in herbivores and their host plants have been the driving force behind speciation and have played a role in the radiation of insect species during the age of angiosperms. Some herbivores have evolved ways to hijack plant defenses to their own benefit by sequestering these chemicals and using them to protect themselves from predators. Plant defenses against herbivores are generally not complete, so plants tend to evolve some tolerance to herbivory.
Types
Plant defenses can be classified as constitutive or induced. Constitutive defenses are always present, while induced defenses are produced or mobilized to the site where a plant is injured. There is wide variation in the composition and concentration of constitutive defenses; these range from mechanical defenses to digestibility reducers and toxins. Many external mechanical defenses and quantitative defenses are constitutive, as they require large amounts of resources to produce and are costly to mobilize. A variety of molecular and biochemical approaches are used to determine the mechanisms of constitutive and induced defensive responses.
Induced defenses include secondary metabolites and morphological and physiological changes. An advantage of inducible, as opposed to constitutive defenses, is that they are only produced when needed, and are therefore potentially less costly, especially when herbivory is variable. Modes of induced defence include systemic acquired resistance and plant-induced systemic resistance.
Chemical defenses
The evolution of chemical defenses in plants is linked to the emergence of chemical substances that are not involved in the essential photosynthetic and metabolic activities. These substances, secondary metabolites, are organic compounds that are not directly involved in the normal growth, development or reproduction of organisms, and often produced as by-products during the synthesis of primary metabolic products. Examples of these byproducts include phenolics, flavonoids, and tannins. Although these secondary metabolites have been thought to play a major role in defenses against herbivores, a meta-analysis of recent relevant studies has suggested that they have either a more minimal (when compared to other non-secondary metabolites, such as primary chemistry and physiology) or more complex involvement in defense.
Plants can communicate through the air. Pheromones and other released scents can be detected by leaves and regulate the plant immune response. In other words, plants produce volatile organic compounds (VOCs) to warn other plants of danger and change their physiological state to better respond to threats and survive. The warning signals produced by damaged neighboring trees allow undamaged trees to proactively activate the necessary defense mechanisms. A plant transmits nonvolatile warning signals within itself, as well as airborne signals to surrounding undamaged trees, to strengthen their defense and immune systems. For instance, poplar and sugar maple trees have been shown to increase leaf tannins in response to signals from nearby damaged trees. In sagebrush, damaged plants send airborne compounds, such as methyl jasmonate, to undamaged plants, which respond by increasing proteinase inhibitor production and resistance to herbivory.
The release of distinctive VOCs and of extrafloral nectar (EFN) allows plants to protect themselves against herbivores by attracting animals from the third trophic level. For example, caterpillar-damaged plants guide parasitic wasps to their victims through the release of chemical signals. The sources of these compounds are most likely glands in the leaves that are ruptured by the chewing of an herbivore. Injury by herbivores induces the release of linolenic acid and a series of enzymatic reactions in an octadecanoid cascade, leading to the synthesis of jasmonic acid, a hormone that plays a central role in regulating immune responses. Jasmonic acid induces the release of the VOCs and EFN that attract parasitic wasps and predatory mites, which detect and feed on the herbivores. These volatile organic compounds can also reach nearby plants, preparing them for potential threats. The volatiles emitted by plants are easily detected by third-trophic-level organisms because the signals are specific to herbivore damage. An experiment measuring VOCs from growing plants showed that the signals are released almost instantaneously upon herbivore damage and decline slowly after the damage stops. It was also observed that plants release the strongest signals at the time of day when herbivores tend to forage.
Since trees are sessile, they have established unique internal defense systems. For instance, when some trees experience herbivory, they release compounds that make their vegetation less palatable. The herbivore's saliva left on the leaves sends a chemical signal to the tree's cells. The cells respond by increasing production of salicylic acid, a phytohormone that is essential for regulating the plant immune system. This hormone then signals increased production of chemicals called tannins within the leaves.
Antiherbivory compounds
Plants have evolved many secondary metabolites involved in plant defense, which are collectively known as antiherbivory compounds and can be classified into three sub-groups: nitrogen compounds (including alkaloids, cyanogenic glycosides, glucosinolates and benzoxazinoids), terpenoids, and phenolics.
Alkaloids are derived from various amino acids. Over 3,000 alkaloids are known, including nicotine, caffeine, morphine, cocaine, colchicine, ergolines, strychnine, and quinine. Alkaloids have pharmacological effects on humans and other animals. Some alkaloids can inhibit or activate enzymes, or alter carbohydrate and fat storage by inhibiting the formation of the phosphodiester bonds involved in their breakdown. Certain alkaloids bind to nucleic acids and can inhibit the synthesis of proteins and affect DNA repair mechanisms. Alkaloids can also affect cell membranes and cytoskeletal structure, causing cells to weaken, collapse, or leak, and can affect nerve transmission. Although alkaloids act on a diversity of metabolic systems in humans and other animals, they almost uniformly invoke an aversively bitter taste.
Cyanogenic glycosides are stored in inactive forms in plant vacuoles. They become toxic when herbivores eat the plant and break cell membranes, allowing the glycosides to come into contact with enzymes in the cytoplasm that release hydrogen cyanide, which blocks cellular respiration. Glucosinolates are activated in much the same way as cyanogenic glycosides, and the products can cause gastroenteritis, salivation, diarrhea, and irritation of the mouth. Benzoxazinoids, such as DIMBOA, are secondary defence metabolites characteristic of certain grasses (Poaceae). Like cyanogenic glycosides, they are stored as inactive glucosides in the plant vacuole. Upon tissue disruption they come into contact with β-glucosidases from the chloroplasts, which enzymatically release the toxic aglucones. Whereas some benzoxazinoids are constitutively present, others are synthesized only following herbivore infestation and are thus considered inducible plant defenses against herbivory.
The terpenoids, sometimes referred to as isoprenoids, are organic chemicals similar to terpenes, derived from five-carbon isoprene units. There are over 10,000 known types of terpenoids. Most are multicyclic structures which differ from one another in both functional groups and basic carbon skeletons. Monoterpenoids, containing 2 isoprene units, are volatile essential oils such as citronella, limonene, menthol, camphor, and pinene. Diterpenoids, with 4 isoprene units, are widely distributed in latex and resins, and can be quite toxic. Diterpenes are responsible for making Rhododendron leaves poisonous. Plant steroids and sterols are also produced from terpenoid precursors, including vitamin D, glycosides (such as digitalis) and saponins (which lyse red blood cells of herbivores).
Phenolics, sometimes called phenols, consist of an aromatic 6-carbon ring bonded to a hydroxyl group. Some phenols have antiseptic properties, while others disrupt endocrine activity. Phenolics range from simple tannins to the more complex flavonoids that give plants much of their red, blue, yellow, and white pigments. Complex phenolics called polyphenols are capable of producing many different types of effects on humans, including antioxidant properties. Some examples of phenolics used for defense in plants are lignin, silymarin and cannabinoids. Condensed tannins, polymers composed of 2 to 50 (or more) flavonoid molecules, inhibit herbivore digestion by binding to consumed plant proteins, making them more difficult for animals to digest, and by interfering with protein absorption and digestive enzymes.
In addition, some plants use fatty acid derivatives, amino acids and even peptides as defenses. The cholinergic toxin cicutoxin of water hemlock is a polyyne derived from the fatty acid metabolism. Oxalyldiaminopropionic acid is a neurotoxic amino acid produced as a defensive metabolite in the grass pea (Lathyrus sativus). The synthesis of fluoroacetate in several plants is an example of the use of small molecules to disrupt the metabolism of herbivores, in this case the citric acid cycle.
Plants interact by producing allelochemicals which interfere with the growth of other plants (allelopathy). These have a role in plant defense and may be used to suppress competitors such as weeds of crops. A result may be larger plants better able to survive damage by herbivores.
Enzymes
Premier examples are substances activated by the enzyme myrosinase. This enzyme converts glucosinolates to various compounds that are toxic to herbivorous insects. One product of this enzyme is allyl isothiocyanate, the pungent ingredient in horseradish sauces.
The myrosinase is released only upon crushing the flesh of horseradish. Since allyl isothiocyanate is harmful to the plant as well as the insect, it is stored in the harmless form of the glucosinolate, separate from the myrosinase enzyme.
Mechanical defenses
See the review of mechanical defenses by Lucas et al. (2000), which remains relevant and well regarded in the field. Many plants have external structural defenses that discourage herbivory. Structural defenses can be described as morphological or physical traits that give the plant a fitness advantage by deterring herbivores from feeding. Depending on the herbivore's physical characteristics (i.e. size and defensive armor), plant structural defenses on stems and leaves can deter, injure, or kill the grazer. Some defensive compounds are produced internally but are released onto the plant's surface; for example, resins, lignins, silica, and wax cover the epidermis of terrestrial plants and alter the texture of the plant tissue. The leaves of holly plants, for instance, are very smooth and slippery, making feeding difficult. Some plants produce gummosis or sap that traps insects.
Spines and thorns
A plant's leaves and stem may be covered with sharp prickles, spines, thorns or trichomes – hairs on the leaf, often with barbs, sometimes containing irritants or poisons. Plant structural features such as spines, thorns and awns reduce feeding by large ungulate herbivores (e.g. kudu, impala, and goats) by restricting the herbivores' feeding rate, or by wearing down the molars. Trichomes are frequently associated with lower rates of plant tissue digestion by insect herbivores. Raphides are sharp needles of calcium oxalate or calcium carbonate in plant tissues that make ingestion painful, damage a herbivore's mouth and gullet, and deliver the plant's toxins more efficiently. The structure of a plant, its branching and leaf arrangement, may also have evolved to reduce herbivore impact. The shrubs of New Zealand have evolved special wide-branching adaptations believed to be a response to browsing birds such as moas. Similarly, African Acacia trees have long spines low in the canopy, but very short spines in the high canopy, which is comparatively safe from herbivores such as giraffes.
Trees such as palms protect their fruit by multiple layers of armor, needing efficient tools to break through to the seed contents. Some plants, notably the grasses, use indigestible silica (and many plants use other relatively indigestible materials such as lignin) to defend themselves against vertebrate and invertebrate herbivores. Plants take up silicon from the soil and deposit it in their tissues in the form of solid silica phytoliths. These mechanically reduce the digestibility of plant tissue, causing rapid wear to vertebrate teeth and to insect mandibles, and are effective against herbivores above and below ground. The mechanism may offer future sustainable pest-control strategies.
Thigmonastic movements
Thigmonastic movements, those that occur in response to touch, are used as a defense in some plants. The leaves of the sensitive plant, Mimosa pudica, close up rapidly in response to direct touch, vibration, or even electrical and thermal stimuli. The proximate cause of this mechanical response is an abrupt change in the turgor pressure in the pulvini at the base of leaves resulting from osmotic phenomena. This is then spread via both electrical and chemical means through the plant; only a single leaflet need be disturbed. This response lowers the surface area available to herbivores, which are presented with the underside of each leaflet, and results in a wilted appearance. It may also physically dislodge small herbivores, such as insects.
Carnivorous plants
Carnivory in plants has evolved at least six times independently. Some examples include the Venus flytrap, pitcher plant, and butterwort. Many of these plants have evolved in nutrient-poor soil, and must procure nutrients from other sources. They use insects and small birds as a way to gain the minerals they need through carnivory. Carnivorous plants do not use carnivory as defense, but to get the nutrients they need.
Mimicry and camouflage
Some plants make use of various forms of mimicry to reduce herbivory. One mechanism is to mimic the presence of insect eggs on their leaves, dissuading insect species from laying their eggs there. Because female butterflies are less likely to lay their eggs on plants that already have butterfly eggs, some species of neotropical vines of the genus Passiflora (passion flowers) make use of Gilbertian mimicry: they possess physical structures resembling the yellow eggs of Heliconius butterflies on their leaves, which discourage oviposition by butterflies. Other plants make use of Batesian mimicry, with structures that imitate thorns or other objects to dissuade herbivores directly. A further approach is camouflage; the vine Boquila trifoliolata mimics the leaves of its host plant, while the pebble plant Lithops makes itself hard to spot among the stones of the Southern African environment.
Indirect defenses
Another category of plant defenses comprises features that indirectly protect the plant by enhancing the probability of attracting the natural enemies of herbivores. Such an arrangement is known as mutualism, in this case of the "enemy of my enemy" variety. One such feature is semiochemicals, given off by plants. Semiochemicals are a group of volatile organic compounds involved in interactions between organisms. One group of semiochemicals are allelochemicals, consisting of allomones, which play a defensive role in interspecies communication, and kairomones, which are used by members of higher trophic levels to locate food sources. When a plant is attacked it releases allelochemicals containing an abnormal ratio of these volatiles, known as herbivore-induced plant volatiles (HIPVs). Predators sense these volatiles as food cues, attracting them to the damaged plant and to the feeding herbivores. The subsequent reduction in the number of herbivores confers a fitness benefit to the plant and demonstrates the indirect defensive capabilities of semiochemicals. Induced volatiles also have drawbacks, however; some studies have suggested that these volatiles attract herbivores. Crop domestication has increased yield, sometimes at the expense of HIPV production. Orre Gordon et al. (2013) tested several methods of artificially restoring the plant–predator partnership by combining companion planting and synthetic predator attractants, describing several strategies that work and several that do not.
Plants sometimes provide housing and food items for natural enemies of herbivores, known as "biotic" defense mechanisms, to maintain their presence. For example, trees from the genus Macaranga have adapted their thin stem walls to create ideal housing for ants (genus Crematogaster), which, in turn, protect the plant from herbivores. In addition to providing housing, the plant also provides the ant with an exclusive food source: the food bodies produced by the plant. Similarly, several Acacia tree species have developed stipular spines (direct defenses) that are swollen at the base, forming a hollow structure that provides housing for protective ants. These Acacia trees also produce nectar in extrafloral nectaries on their leaves as food for the ants.
Plant use of endophytic fungi in defense is common. Most plants have endophytes, microbial organisms that live within them. While some cause disease, others protect plants from herbivores and pathogenic microbes. Endophytes can help the plant by producing toxins harmful to other organisms that would attack the plant, such as alkaloid producing fungi which are common in grasses such as tall fescue (Festuca arundinacea), which is infected by Neotyphodium coenophialum.
Trees form alliances with trees of the same species and with other tree species to improve their survival rate. They communicate and form dependent relationships through connections below the soil called underground mycorrhizal networks, which allow them to share water and nutrients and to exchange signals warning of predatory attacks. Within a forest, trees under attack send distress signals that alert neighboring trees to alter their defensive behavior. Trees and fungi have a symbiotic relationship: fungi, intertwined with the trees' roots, support communication between trees and help locate nutrients; in return, the fungi receive some of the sugar that trees photosynthesize. Trees send out several forms of communication, including chemical, hormonal, and slowly pulsing electrical signals. Investigations of the electrical signals between trees, using voltage-based measurements, suggest a signaling system similar to an animal's nervous system, in which a tree facing distress releases a warning signal to surrounding trees.
Leaf shedding and color
There have been suggestions that leaf shedding may be a response that provides protection against diseases and certain kinds of pests such as leaf miners and gall forming insects. Other responses such as the change of leaf colors prior to fall have also been suggested as adaptations that may help undermine the camouflage of herbivores. Autumn leaf color has also been suggested to act as an honest warning signal of defensive commitment towards insect pests that migrate to the trees in autumn.
Costs and benefits
Defensive structures and chemicals are costly as they require resources that could otherwise be used by plants to maximize growth and reproduction. In some situations, plant growth slows down when most of the nutrients are being used for the generation of toxins or regeneration of plant parts. Many models have been proposed to explore how and why some plants make this investment in defenses against herbivores.
Optimal defense hypothesis
The optimal defense hypothesis attempts to explain how the kinds of defenses a particular plant might use reflect the threats each individual plant faces. This model considers three main factors: risk of attack, value of the plant part, and the cost of defense.
The first factor determining optimal defense is risk: how likely is it that a plant or certain plant parts will be attacked? This is also related to the plant apparency hypothesis, which states that a plant will invest heavily in broadly effective defenses when the plant is easily found by herbivores. Examples of apparent plants that produce generalized protections include long-living trees, shrubs, and perennial grasses. Unapparent plants, such as short-lived plants of early successional stages, on the other hand, preferentially invest in small amounts of qualitative toxins that are effective against all but the most specialized herbivores.
The second factor is the value of protection: would the plant be less able to survive and reproduce after removal of part of its structure by a herbivore? Not all plant parts are of equal evolutionary value, thus valuable parts contain more defenses. A plant's stage of development at the time of feeding also affects the resulting change in fitness. Experimentally, the fitness value of a plant structure is determined by removing that part of the plant and observing the effect. In general, reproductive parts are not as easily replaced as vegetative parts, terminal leaves have greater value than basal leaves, and the loss of plant parts mid-season has a greater negative effect on fitness than removal at the beginning or end of the season. Seeds in particular tend to be very well protected. For example, the seeds of many edible fruits and nuts contain cyanogenic glycosides such as amygdalin. This results from the need to balance the effort needed to make the fruit attractive to animal dispersers while ensuring that the seeds are not destroyed by the animal.
The final consideration is cost: how much will a particular defensive strategy cost a plant in energy and materials? This is particularly important, as energy spent on defense cannot be used for other functions, such as reproduction and growth. The optimal defense hypothesis predicts that plants will allocate more energy towards defense when the benefits of protection outweigh the costs, specifically in situations where there is high herbivore pressure.
Carbon:nutrient balance hypothesis
The carbon:nutrient balance hypothesis, also known as the environmental constraint hypothesis or Carbon Nutrient Balance Model (CNBM), states that the various types of plant defenses are responses to variations in the levels of nutrients in the environment. This hypothesis predicts that the carbon/nitrogen ratio in plants determines which secondary metabolites will be synthesized. For example, plants growing in nitrogen-poor soils will use carbon-based defenses (mostly digestibility reducers), while those growing in low-carbon environments (such as shady conditions) are more likely to produce nitrogen-based toxins. The hypothesis further predicts that plants can change their defenses in response to changes in nutrients. For example, if plants are grown in low-nitrogen conditions, then these plants will implement a defensive strategy composed of constitutive carbon-based defenses. If nutrient levels subsequently increase, for example by the addition of fertilizers, these carbon-based defenses will decrease.
Growth rate hypothesis
The growth rate hypothesis, also known as the resource availability hypothesis, states that defense strategies are determined by the inherent growth rate of the plant, which is in turn determined by the resources available to the plant. A major assumption is that available resources are the limiting factor in determining the maximum growth rate of a plant species. This model predicts that the level of defense investment will increase as growth potential decreases. Additionally, plants in resource-poor areas, with inherently slow growth rates, tend to have long-lived leaves and twigs, and the loss of plant appendages may result in a loss of scarce and valuable nutrients.
One test of this model involved reciprocal transplants of seedlings of 20 species of trees between clay soils (nutrient rich) and white sand (nutrient poor) to determine whether trade-offs between growth rate and defenses restrict species to one habitat. When planted in white sand and protected from herbivores, seedlings originating from clay outgrew those originating from the nutrient-poor sand, but in the presence of herbivores the seedlings originating from white sand performed better, likely due to their higher levels of constitutive carbon-based defenses. These findings suggest that defensive strategies limit the habitats of some plants.
Growth-differentiation balance hypothesis
The growth-differentiation balance hypothesis states that plant defenses are a result of a tradeoff between "growth-related processes" and "differentiation-related processes" in different environments. Differentiation-related processes are defined as "processes that enhance the structure or function of existing cells (i.e. maturation and specialization)." A plant will produce chemical defenses only when energy is available from photosynthesis, and plants with the highest concentrations of secondary metabolites are the ones with an intermediate level of available resources.
Synthesis tradeoffs
The vast majority of plant resistances to herbivores are either unrelated to each other or positively correlated. However, there are some negative correlations: in Pastinaca sativa, resistances to various biotypes of Depressaria pastinacella trade off against one another because the secondary metabolites involved are negatively correlated with each other; a similar tradeoff occurs among the resistances of Diplacus aurantiacus.
In Brassica rapa, resistance to Peronospora parasitica and growth rate are negatively correlated.
Mutualism and overcompensation of plants
Many plants do not have secondary metabolites, chemical processes, or mechanical defenses to help them fend off herbivores. Instead, these plants rely on overcompensation (which is regarded as a form of mutualism) when they are attacked by herbivores. Overcompensation is defined as having higher fitness when attacked by a herbivore. This is a mutual relationship: the herbivore gets a meal, while the plant quickly regrows the missing part. These plants have a higher chance of reproducing, and their fitness is increased.
Importance to humans
Agriculture
Crop plants can be bred for their ability to resist herbivory, thus protecting themselves from damage with reduced use of pesticides.
In addition, biological pest control sometimes makes use of plant defenses to reduce crop damage by herbivores. Techniques include polyculture, the planting together of two or more species such as a primary crop and a secondary plant. This can allow the secondary plant's defensive chemicals to protect the crop planted with it.
The variation of plant susceptibility to pests was probably known even in the early stages of agriculture. In historic times, the observation of such variations in susceptibility has provided solutions for major socio-economic problems. The hemipteran pest insect phylloxera was introduced from North America to France in 1860, and in 25 years it destroyed nearly a third (100,000 km²) of French vineyards. Charles Valentine Riley noted that the American species Vitis labrusca was resistant to phylloxera. Riley, with J. E. Planchon, helped save the French wine industry by suggesting the grafting of the susceptible but high-quality grapes onto Vitis labrusca rootstocks. The formal study of plant resistance to herbivory was first covered extensively in 1951 by Reginald Henry Painter, who is widely regarded as the founder of this area of research, in his book Plant Resistance to Insects. While this work pioneered further research in the US, the work of Chesnokov was the basis of further research in the USSR.
Fresh growth of grass is sometimes high in prussic acid content and can cause poisoning of grazing livestock. The production of cyanogenic chemicals in grasses is primarily a defense against herbivores.
The human innovation of cooking may have been particularly helpful in overcoming many of the defensive chemicals of plants. Many enzyme inhibitors in cereal grains and pulses, such as trypsin inhibitors prevalent in pulse crops, are denatured by cooking, making them digestible.
It has been known since the late 17th century that plants contain noxious chemicals which are avoided by insects. These chemicals have been used by man as early insecticides; in 1690 nicotine was extracted from tobacco and used as a contact insecticide. In 1773, insect infested plants were treated with nicotine fumigation by heating tobacco and blowing the smoke over the plants. The flowers of Chrysanthemum species contain pyrethrin which is a potent insecticide. In later years, the applications of plant resistance became an important area of research in agriculture and plant breeding, particularly because they can serve as a safe and low-cost alternative to the use of pesticides. The important role of secondary plant substances in plant defense was described in the late 1950s by Vincent Dethier and G.S. Fraenkel. The use of botanical pesticides is widespread, including azadirachtin from the neem (Azadirachta indica), d-Limonene from Citrus species, rotenone from Derris, capsaicin from chili pepper, and pyrethrum from Chrysanthemum.
The selective breeding of crop plants often involves selection against the plant's intrinsic resistance strategies. This makes crop plant varieties particularly susceptible to pests, unlike their wild relatives. In breeding for host-plant resistance, it is often the wild relatives that provide the source of resistance genes. These genes are incorporated using conventional approaches to plant breeding, but have been augmented by recombinant techniques, which allow the introduction of genes from completely unrelated organisms. The most famous transgenic approach is the introduction of genes from the bacterial species Bacillus thuringiensis into plants. The bacterium produces proteins that, when ingested, kill lepidopteran caterpillars. The gene encoding these highly toxic proteins, when introduced into the host plant genome so that the plant produces the same toxic proteins, confers resistance against caterpillars. This approach is controversial, however, due to the possibility of ecological and toxicological side effects.
Pharmaceutical
Many currently available pharmaceuticals are derived from the secondary metabolites plants use to protect themselves from herbivores, including opium, aspirin, cocaine, and atropine. These chemicals have evolved to affect the biochemistry of insects in very specific ways. However, many of these biochemical pathways are conserved in vertebrates, including humans, and the chemicals act on human biochemistry in ways similar to that of insects. It has therefore been suggested that the study of plant-insect interactions may help in bioprospecting.
There is evidence that humans began using plant alkaloids in medical preparations as early as 3000 B.C. Although the active components of most medicinal plants have been isolated only relatively recently (beginning in the early 19th century), these substances have been used as drugs throughout human history in potions, medicines, teas and poisons. For example, to combat herbivory by the larvae of some Lepidoptera species, Cinchona trees produce a variety of alkaloids, the most familiar of which is quinine, which is extremely bitter, making the bark of the tree quite unpalatable.
Throughout history mandrakes (Mandragora officinarum) have been highly sought after for their reputed aphrodisiac properties. However, the roots of the mandrake plant also contain large quantities of the alkaloid scopolamine, which, at high doses, acts as a central nervous system depressant, and makes the plant highly toxic to herbivores. Scopolamine was later found to be medicinally used for pain management prior to and during labor; in smaller doses it is used to prevent motion sickness. One of the best-known medicinally valuable terpenes is an anticancer drug, taxol, isolated from the bark of the Pacific yew, Taxus brevifolia, in the early 1960s.
See also
Anti-predator adaptation
Biopesticide
Chemical ecology
List of beneficial weeds
List of companion plants
List of pest-repelling plants
Plant disease resistance
Plant tolerance to herbivory
Plant communication
Tritrophic interactions in plant defense
References
Further reading
External links
Bruce A. Kimball Evolutionary Plant Defense Strategies Life Histories and Contributions to Future Generations
Plant Defense Systems & Medicinal Botany
Herbivore Defenses of Senecio viscosus L.
Sue Hartley Royal Institution Christmas Lectures 2009: The Animals Strike Back
Herbivory
Plant physiology
Biological pest control
Ecological restoration
Habitat management equipment and methods
Sustainable agriculture
Antipredator adaptations
Chemical ecology | Plant defense against herbivory | [
"Chemistry",
"Engineering",
"Biology"
] | 7,956 | [
"Plant physiology",
"Chemical ecology",
"Ecological restoration",
"Plants",
"Biological defense mechanisms",
"Herbivory",
"Antipredator adaptations",
"Environmental engineering",
"Biochemistry",
"Eating behaviors"
] |
4,189,764 | https://en.wikipedia.org/wiki/Richard%20Soley | Richard Mark Soley (born c. 1960, in Baltimore, Maryland, died 8 Nov., 2023, in Lexington, Massachusetts) was an American computer scientist and businessman, and chairman and CEO of the Object Management Group, Inc. (OMG). He was also the executive director of the Cloud Standards Customer Council, and executive director of the Industrial Internet Consortium, managed by the OMG.
Life and work
Soley studied Computer Science and Engineering at the Massachusetts Institute of Technology, where he obtained his S.B. in 1982, his S.M. in 1985 and his Ph.D. in 1989. He began his professional life at Honeywell, working on the Multics operating system.
Soley joined OMG as Technical Director in 1989, leading the development of OMG's standardization process and the original CORBA specification.
In 1996, he led the effort to move into vertical market standards (starting with healthcare, finance, telecommunications and manufacturing) and modeling. Those efforts made OMG a major early adopter of Unified Modeling Language (UML) and model-driven architecture (MDA).
Soley was co-founder, former chairman, and CEO of A.I. Architects, Inc., a firm which manufactured hardware and software for personal computers and workstations. He has also served as a consultant on matters relating to software investment opportunities for several corporations including IBM, Motorola, and Texas Instruments.
He also played a very significant role within the SEMAT initiative, launched in December 2009.
Selected publications
Soley, Richard Mark. Object Management Architecture Guide: Revision 2.0. Wiley, 1995.
References
External links
Richard Mark Soley personal website
Living people
Year of birth uncertain
American computer scientists
Businesspeople from Baltimore
Stanford University School of Engineering alumni
Multics people
Year of birth missing (living people) | Richard Soley | [
"Technology"
] | 370 | [
"Computing stubs",
"Computer specialist stubs"
] |
4,190,174 | https://en.wikipedia.org/wiki/Limitation%20of%20size | In the philosophy of mathematics, specifically the philosophical foundations of set theory, limitation of size is a concept developed by Philip Jourdain and/or Georg Cantor to avoid Cantor's paradox. It identifies certain "inconsistent multiplicities", in Cantor's terminology, that cannot be sets because they are "too large". In modern terminology these are called proper classes.
Use
The axiom of limitation of size is an axiom in some versions of von Neumann–Bernays–Gödel set theory or Morse–Kelley set theory. This axiom says that any class that is not "too large" is a set, and a set cannot be "too large". "Too large" is defined as being large enough that the class of all sets can be mapped one-to-one into it.
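As a rough formalization of this wording (a sketch following the article's description, not the axiom's canonical statement), writing V for the class of all sets, it can be rendered in LaTeX as:

% A sketch following the "one-to-one into" wording above; von Neumann's
% canonical formulation instead maps the class C onto V.
\forall C \,\bigl[\, \neg\exists D\,(C \in D) \;\leftrightarrow\; \exists F\,(F \colon V \to C \text{ is one-to-one})\,\bigr]

Here "C ∈ D for some D" expresses that C is a set. The more common statement uses a function mapping C onto V; the two readings agree in the presence of global choice, which itself follows from the axiom.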
References
Philosophy of mathematics
History of mathematics
Basic concepts in infinite set theory | Limitation of size | [
"Mathematics"
] | 180 | [
"Basic concepts in infinite set theory",
"Mathematical objects",
"Infinity",
"Basic concepts in set theory",
"nan"
] |
4,190,350 | https://en.wikipedia.org/wiki/Overdispersion | In statistics, overdispersion is the presence of greater variability (statistical dispersion) in a data set than would be expected based on a given statistical model.
A common task in applied statistics is choosing a parametric model to fit a given set of empirical observations. This necessitates an assessment of the fit of the chosen model. It is usually possible to choose the model parameters in such a way that the theoretical population mean of the model is approximately equal to the sample mean. However, especially for simple models with few parameters, theoretical predictions may not match empirical observations for higher moments. When the observed variance is higher than the variance of a theoretical model, overdispersion has occurred. Conversely, underdispersion means that there was less variation in the data than predicted. Overdispersion is a very common feature in applied data analysis because in practice, populations are frequently heterogeneous (non-uniform) contrary to the assumptions implicit within widely used simple parametric models.
Examples
Poisson
Overdispersion is often encountered when fitting very simple parametric models, such as those based on the Poisson distribution. The Poisson distribution has one free parameter and does not allow for the variance to be adjusted independently of the mean. The choice of a distribution from the Poisson family is often dictated by the nature of the empirical data. For example, Poisson regression analysis is commonly used to model count data. If overdispersion is a feature, an alternative model with additional free parameters may provide a better fit. In the case of count data, a Poisson mixture model like the negative binomial distribution can be proposed instead, in which the mean of the Poisson distribution can itself be thought of as a random variable drawn – in this case – from the gamma distribution thereby introducing an additional free parameter (note the resulting negative binomial distribution is completely characterized by two parameters).
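The mixture construction just described can be illustrated with a short simulation (a minimal sketch, not from the article; the parameter values are illustrative assumptions):

# Gamma-mixed Poisson (negative binomial) counts are overdispersed:
# their variance exceeds their mean, unlike plain Poisson counts.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

lam = 4.0
poisson_counts = rng.poisson(lam, size=n)          # mean = variance = 4

# lambda itself drawn from a gamma distribution (shape k, scale theta):
# the resulting counts have mean k*theta and variance k*theta*(1 + theta).
k, theta = 2.0, 2.0
mixed_counts = rng.poisson(rng.gamma(k, theta, size=n))

print("Poisson : mean %.2f, var %.2f" % (poisson_counts.mean(), poisson_counts.var()))
print("Mixture : mean %.2f, var %.2f" % (mixed_counts.mean(), mixed_counts.var()))
# Typical output: Poisson mean/var both near 4; mixture mean near 4, var near 12.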
Binomial
As a more concrete example, it has been observed that the number of boys born to families does not conform faithfully to a binomial distribution as might be expected. Instead, the sex ratios of families seem to skew toward either boys or girls (see, for example, the Trivers–Willard hypothesis for one possible explanation): there are more all-boy families, more all-girl families, and fewer families close to the population 51:49 boy-to-girl mean ratio than would be expected from a binomial distribution, and the resulting empirical variance is larger than specified by a binomial model.
In this case, the beta-binomial distribution is a popular and analytically tractable alternative to the binomial distribution, since it provides a better fit to the observed data. To capture the heterogeneity of the families, one can think of the probability parameter of the binomial model (say, the probability of being a boy) as itself a random variable (i.e. a random effects model) drawn for each family from a beta distribution as the mixing distribution. The resulting compound distribution (beta-binomial) has an additional free parameter.
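A parallel sketch for the beta-binomial case (again with illustrative, assumed parameters): drawing each family's probability from a beta distribution inflates the variance of family-level counts beyond the binomial prediction.

# Beta-binomial: per-family probability p ~ Beta(a, b), boys ~ Binomial(n, p).
import numpy as np

rng = np.random.default_rng(seed=1)
families, n_children = 200_000, 8

a, b = 5.0, 5.0                                # illustrative Beta parameters, mean p = 0.5
p = rng.beta(a, b, size=families)              # one probability per family
boys = rng.binomial(n_children, p)             # beta-binomial draws

p_bar = p.mean()
binom_var = n_children * p_bar * (1 - p_bar)   # variance if p were a fixed constant
print("observed var %.3f vs binomial var %.3f" % (boys.var(), binom_var))
# Observed variance exceeds the binomial value by the factor
# 1 + (n_children - 1) / (a + b + 1), about 1.64 here.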
Another common model for overdispersion—when some of the observations are not Bernoulli—arises from introducing a normal random variable into a logistic model. Software is widely available for fitting this type of multilevel model. In this case, if the variance of the normal variable is zero, the model reduces to the standard (undispersed) logistic regression. This model has an additional free parameter, namely the variance of the normal variable.
With respect to binomial random variables, the concept of overdispersion makes sense only if n>1 (i.e. overdispersion is nonsensical for Bernoulli random variables).
Normal distribution
As the normal distribution (Gaussian) has variance as a parameter, any data with finite variance (including any finite data) can be modeled with a normal distribution with the exact variance – the normal distribution is a two-parameter model, with mean and variance. Thus, in the absence of an underlying model, there is no notion of data being overdispersed relative to the normal model, though the fit may be poor in other respects (such as the higher moments of skew, kurtosis, etc.). However, in the case that the data is modeled by a normal distribution with an expected variation, it can be over- or under-dispersed relative to that prediction.
For example, in a statistical survey, the margin of error (determined by sample size) predicts the sampling error and hence the dispersion of results on repeated surveys. If one performs a meta-analysis of repeated surveys of a fixed population (say with a given sample size, so the margin of error is the same), one expects the results to fall on a normal distribution with standard deviation equal to the margin of error. However, in the presence of study heterogeneity, where studies have different sampling biases, the distribution is instead a compound distribution and will be overdispersed relative to the predicted distribution. For example, given repeated opinion polls all with a margin of error of 3%, if they are conducted by different polling organizations, one expects the results to have standard deviation greater than 3%, due to pollster bias from different methodologies.
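This compounding is easy to see in a simulation (a hedged sketch; the 1% per-pollster "house effect" below is an assumed illustrative value, not an empirical figure):

# Each poll result = true value + pollster bias + sampling noise.
import numpy as np

rng = np.random.default_rng(seed=2)
polls = 50_000
moe_sd = 0.03                               # article's example: SD equal to the 3% margin of error
house = rng.normal(0.0, 0.01, size=polls)   # assumed per-pollster bias, SD 1%
results = 0.5 + house + rng.normal(0.0, moe_sd, size=polls)

print("predicted SD %.4f, observed SD %.4f" % (moe_sd, results.std()))
# Observed SD ~ sqrt(0.03**2 + 0.01**2) ~ 0.0316 > 0.03: overdispersion.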
Differences in terminology among disciplines
Over- and underdispersion are terms which have been adopted in branches of the biological sciences. In parasitology, the term 'overdispersion' is generally used as defined here – meaning a distribution with a higher than expected variance.
In some areas of ecology, however, meanings have been transposed, so that overdispersion is actually taken to mean more even (lower variance) than expected. This confusion has caused some ecologists to suggest that the terms 'aggregated', or 'contagious', would be better used in ecology for 'overdispersed'. Such preferences are creeping into parasitology too. Generally this suggestion has not been heeded, and confusion persists in the literature.
Furthermore, in demography, overdispersion is often evident in the analysis of death count data, but demographers prefer the term 'unobserved heterogeneity'.
See also
Index of dispersion
Compound probability distribution
Quasi-likelihood
References
Probability distribution fitting
Point processes
Spatial analysis | Overdispersion | [
"Physics",
"Mathematics"
] | 1,298 | [
"Point (geometry)",
"Spatial analysis",
"Point processes",
"Space",
"Spacetime"
] |
7,282,499 | https://en.wikipedia.org/wiki/Surface-area-to-volume%20ratio | The surface-area-to-volume ratio or surface-to-volume ratio (denoted as SA:V, SA/V, or sa/vol) is the ratio between surface area and volume of an object or collection of objects.
SA:V is an important concept in science and engineering. It is used to explain the relation between structure and function in processes occurring through the surface and the volume. Good examples of such processes are processes governed by the heat equation, that is, diffusion and heat transfer by thermal conduction. SA:V is used to explain the diffusion of small molecules, like oxygen and carbon dioxide, between air, blood and cells, water loss by animals, bacterial morphogenesis, organisms' thermoregulation, the design of artificial bone tissue, artificial lungs and many more biological and biotechnological structures. For more examples see Glazier.
The relation between SA:V and the rate of diffusion or heat conduction is explained from the flux and surface perspective, focusing on the surface of a body as the place where diffusion or heat conduction takes place: the larger the SA:V, the more surface area per unit volume there is through which material can diffuse, and therefore the faster the diffusion or heat conduction will be. Similar explanations appear in the literature: "Small size implies a large ratio of surface area to volume, thereby helping to maximize the uptake of nutrients across the plasma membrane", and elsewhere.
For a given volume, the object with the smallest surface area (and therefore with the smallest SA:V) is a ball, a consequence of the isoperimetric inequality in 3 dimensions. By contrast, objects with acute-angled spikes will have very large surface area for a given volume.
For solid spheres
A solid sphere or ball is a three-dimensional object, being the solid figure bounded by a sphere. (In geometry, the term sphere properly refers only to the surface, so a sphere lacks volume in this context.)
For an ordinary three-dimensional ball, the SA:V can be calculated using the standard equations for the surface and volume, which are, respectively, $SA = 4\pi r^2$ and $V = \frac{4}{3}\pi r^3$. For the unit case in which r = 1 the SA:V is thus 3. For the general case, SA:V equals 3/r, in an inverse relationship with the radius: if the radius is doubled, the SA:V halves (see figure).
For n-dimensional balls
Balls exist in any dimension and are generically called n-balls or hyperballs, where n is the number of dimensions.
The same reasoning can be generalized to n-balls using the general equations for volume and surface area, which are:

$V_n(r) = \frac{\pi^{n/2}}{\Gamma(\frac{n}{2} + 1)} r^n, \qquad A_n(r) = \frac{n\,\pi^{n/2}}{\Gamma(\frac{n}{2} + 1)} r^{n-1}.$

So the ratio equals $A_n(r)/V_n(r) = n/r$. Thus, the same linear relationship between area and volume holds for any number of dimensions (see figure): doubling the radius always halves the ratio.
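This ratio can be checked numerically with a small sketch built on the standard n-ball formulas above (the function names are illustrative):

# SA:V of an n-ball equals n/r in every dimension n.
from math import pi, gamma

def n_ball_volume(n: int, r: float) -> float:
    """Volume of the n-dimensional ball of radius r."""
    return pi ** (n / 2) / gamma(n / 2 + 1) * r ** n

def n_ball_surface(n: int, r: float) -> float:
    """Surface area of the n-ball: the derivative of its volume with respect to r."""
    return n * pi ** (n / 2) / gamma(n / 2 + 1) * r ** (n - 1)

r = 2.0
for n in (1, 2, 3, 7):
    print(n, n_ball_surface(n, r) / n_ball_volume(n, r), n / r)  # both columns agree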
Dimension and units
The surface-area-to-volume ratio has physical dimension inverse length (L⁻¹) and is therefore expressed in units of inverse metre (m⁻¹) or its prefixed unit multiples and submultiples. As an example, a cube with sides of length 1 cm will have a surface area of 6 cm² and a volume of 1 cm³. The surface to volume ratio for this cube is thus

6 cm² / 1 cm³ = 6 cm⁻¹.
For a given shape, SA:V is inversely proportional to size. A cube 2 cm on a side has a ratio of 3 cm⁻¹, half that of a cube 1 cm on a side. Conversely, preserving SA:V as size increases requires changing to a less compact shape.
Applications
Physical chemistry
Materials with high surface area to volume ratio (e.g. very small diameter, very porous, or otherwise not compact) react at much faster rates than monolithic materials, because more surface is available to react. An example is grain dust: while grain is not typically flammable, grain dust is explosive. Finely ground salt dissolves much more quickly than coarse salt.
A high surface area to volume ratio provides a strong "driving force" to speed up thermodynamic processes that minimize free energy.
Biology
The ratio between the surface area and volume of cells and organisms has an enormous impact on their biology, including their physiology and behavior. For example, many aquatic microorganisms have increased surface area to increase their drag in the water. This reduces their rate of sinking and allows them to remain near the surface with less energy expenditure.
An increased surface area to volume ratio also means increased exposure to the environment. The finely-branched appendages of filter feeders such as krill provide a large surface area to sift the water for food.
Individual organs like the lung have numerous internal branchings that increase the surface area; in the case of the lung, the large surface supports gas exchange, bringing oxygen into the blood and releasing carbon dioxide from the blood. Similarly, the small intestine has a finely wrinkled internal surface, allowing the body to absorb nutrients efficiently.
Cells can achieve a high surface area to volume ratio with an elaborately convoluted surface, like the microvilli lining the small intestine.
Increased surface area can also lead to biological problems. More contact with the environment through the surface of a cell or an organ (relative to its volume) increases loss of water and dissolved substances. High surface area to volume ratios also present problems of temperature control in unfavorable environments.
The surface to volume ratios of organisms of different sizes also leads to some biological rules such as Allen's rule, Bergmann's rule and gigantothermy.
Fire spread
In the context of wildfires, the ratio of the surface area of a solid fuel to its volume is an important measurement. Fire spread behavior is frequently correlated to the surface-area-to-volume ratio of the fuel (e.g. leaves and branches). The higher its value, the faster a particle responds to changes in environmental conditions, such as temperature or moisture. Higher values are also correlated to shorter fuel ignition times, and hence faster fire spread rates.
Planetary cooling
A body of icy or rocky material in outer space may, if it can build and retain sufficient heat, develop a differentiated interior and alter its surface through volcanic or tectonic activity. The length of time through which a planetary body can maintain surface-altering activity depends on how well it retains heat, and this is governed by its surface-area-to-volume ratio. For Vesta (r = 263 km), the ratio is so high that astronomers were surprised to find that it did differentiate and have brief volcanic activity. The Moon, Mercury and Mars have radii in the low thousands of kilometers; all three retained heat well enough to be thoroughly differentiated, although after a billion years or so they became too cool to show anything more than very localized and infrequent volcanic activity. In April 2019, however, NASA announced the detection of a "marsquake", measured on April 6, 2019, by NASA's InSight lander. Venus and Earth (r > 6,000 km) have sufficiently low surface-area-to-volume ratios (roughly half that of Mars and much lower than those of all other known rocky bodies) that their heat loss is minimal.
Mathematical examples
See also
Compactness measure of a shape
Dust explosion
Square–cube law
Specific surface area
References
Specific
External links
Sizes of Organisms: The Surface Area:Volume Ratio
National Wildfire Coordinating Group: Surface Area to Volume Ratio
Previous link not working, references are in this document, PDF
Further reading
On Being the Right Size, J.B.S. Haldane
Chemical kinetics
Cell biology
Physiology
Ratios | Surface-area-to-volume ratio | [
"Chemistry",
"Mathematics",
"Biology"
] | 1,544 | [
"Chemical reaction engineering",
"Cell biology",
"Physiology",
"Arithmetic",
"Chemical kinetics",
"Ratios"
] |
7,282,654 | https://en.wikipedia.org/wiki/Trailing%20zero | In mathematics, trailing zeros are a sequence of 0 in the decimal representation (or more generally, in any positional representation) of a number, after which no other digits follow.
Trailing zeros to the right of a decimal point, as in 12.340, don't affect the value of a number and may be omitted if all that is of interest is its numerical value. This is true even if the zeros recur infinitely. For example, in pharmacy, trailing zeros are omitted from dose values to prevent misreading. However, trailing zeros may be useful for indicating the number of significant figures, for example in a measurement. In such a context, "simplifying" a number by removing trailing zeros would be incorrect.
The number of trailing zeros in a non-zero base-b integer n equals the exponent of the highest power of b that divides n. For example, 14000 has three trailing zeros and is therefore divisible by 1000 = 10³, but not by 10⁴. This property is useful when looking for small factors in integer factorization. Some computer architectures have a count trailing zeros operation in their instruction set for efficiently determining the number of trailing zero bits in a machine word.
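This divisibility characterization translates directly into a short routine (a minimal sketch; the function name is illustrative):

# Trailing zeros of n in base b = largest e such that b**e divides n.
def trailing_zeros(n: int, base: int = 10) -> int:
    if n == 0:
        raise ValueError("undefined for zero")
    count = 0
    while n % base == 0:   # strip one factor of the base per iteration
        n //= base
        count += 1
    return count

assert trailing_zeros(14000) == 3          # divisible by 10**3, not 10**4
assert trailing_zeros(0b1011000, 2) == 3   # three trailing zero bits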
Factorial
The number of trailing zeros in the decimal representation of n!, the factorial of a non-negative integer n, is simply the multiplicity of the prime factor 5 in n!. This can be determined with this special case of de Polignac's formula:

$f(n) = \sum_{i=1}^{k} \left\lfloor \frac{n}{5^i} \right\rfloor,$

where k must be chosen such that

$5^{k+1} > n,$

more precisely

$k = \left\lfloor \log_5 n \right\rfloor,$

and $\lfloor a \rfloor$ denotes the floor function applied to a. For n = 0, 1, 2, ... this is
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 6, ... .
For example, $5^3 > 32$, and therefore 32! = 263130836933693530167218012160000000 ends in

$\lfloor 32/5 \rfloor + \lfloor 32/25 \rfloor = 6 + 1 = 7$

zeros. If n < 5, the inequality is satisfied by k = 0; in that case the sum is empty, giving the answer 0.
The formula actually counts the number of factors 5 in n!, but since there are at least as many factors 2, this is equivalent to the number of factors 10, each of which gives one more trailing zero.
Defining

$q_i = \left\lfloor \frac{n}{5^i} \right\rfloor,$

the following recurrence relation holds:

$q_0 = n, \qquad q_{i+1} = \left\lfloor \frac{q_i}{5} \right\rfloor.$

This can be used to simplify the computation of the terms of the summation, which can be stopped as soon as $q_i$ reaches zero. The condition $5^{k+1} > n$ is equivalent to $q_{k+1} = 0$.
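The recurrence gives a compact routine for the trailing-zero count of n! (a minimal sketch; the function name is illustrative):

# Sum the quotients q_1, q_2, ... of repeated division by 5.
def factorial_trailing_zeros(n: int) -> int:
    count, q = 0, n
    while q:
        q //= 5           # q_{i+1} = floor(q_i / 5)
        count += q
    return count

assert factorial_trailing_zeros(32) == 7
assert [factorial_trailing_zeros(n) for n in range(10)] == [0] * 5 + [1] * 5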
See also
Leading zero
Trailing digit
References
External links
Why are trailing fractional zeros important? for some examples of when trailing zeros are significant
Number of trailing zeros for any factorial Python program to calculate the number of trailing zeros for any factorial
Elementary arithmetic
0 (number) | Trailing zero | [
"Mathematics"
] | 608 | [
"Elementary mathematics",
"Arithmetic",
"Elementary arithmetic"
] |
7,282,792 | https://en.wikipedia.org/wiki/Shapley%20Supercluster | The Shapley Supercluster or Shapley Concentration (SCl 124) is the largest concentration of galaxies in our nearby universe that forms a gravitationally interacting unit, thereby pulling itself together instead of expanding with the universe. It appears as a striking overdensity in the distribution of galaxies in the constellation of Centaurus. It is 650 million light-years away (z=0.046).
History
In 1930, Harlow Shapley and his colleagues at the Harvard College Observatory started a survey of galaxies in the southern sky, using photographic plates obtained at the 24-inch Bruce telescope at Bloemfontein, South Africa. By 1932, Shapley reported the discovery of 76,000 galaxies brighter than 18th apparent magnitude in a third of the southern sky, based on galaxy counts from his plates. Some of this data was later published as part of the Harvard galaxy counts, intended to map galactic obscuration and to find the space density of galaxies.
In this catalog, Shapley could see most of the 'Coma-Virgo cloud' (now known to be a superposition of the Coma Supercluster and the Virgo Supercluster), but found a 'cloud' in the constellation of Centaurus to be the most striking concentration of galaxies. He found it particularly interesting because of its "great linear dimension, the numerous population and distinctly elongated form". This can be identified with what we now know as the core of the Shapley Supercluster. Shapley estimated the distance to this cloud to be 14 times that to the Virgo Cluster, from the average diameters of the galaxies. This would place the Shapley Supercluster at a distance of 231 Mpc, based on the current estimate of the distance to Virgo.
In recent times, the Shapley Supercluster was named by Somak Raychaudhury, from a survey of galaxies from UK Schmidt Telescope Sky survey plates, using the Automated Plate Measuring Facility (APM) at the University of Cambridge in England. In this paper, the supercluster was named after Harlow Shapley, in recognition of his pioneering survey of galaxies in which this concentration of galaxies was first seen. Around the same time, Roberto Scaramella and co-workers had also noticed the Shapley Supercluster in the Abell catalogue of clusters of galaxies: they had named it the Alpha concentration.
Current interest
The Shapley Supercluster lies very close to the direction in which the Local Group of galaxies (including our galaxy) is moving with respect to the cosmic microwave background (CMB) frame of reference. This has led many to speculate that the Shapley Supercluster may be one of the major causes of our galaxy's peculiar motion (the Great Attractor may be another), prompting a surge of interest in this supercluster. It has been found that the Great Attractor and all the galaxies in our region of the universe (including our galaxy, the Milky Way) are moving toward the Shapley Supercluster.
In 2017 it was proposed that the movement towards attractors like the Shapley Attractor in the supercluster creates a relative movement away from underdense areas, that may be visualized as a virtual repeller. This approach enables new ways of understanding and modelling variations in galactic movements. The nearest large underdense area has been labelled the dipole repeller.
See also
References
External links
Shapley Supercluster at Atlas of the Universe
Harvard College Observatory (HCO)
Great Attractor
Centaurus
Galaxy superclusters | Shapley Supercluster | [
"Astronomy"
] | 750 | [
"Astronomical objects",
"Centaurus",
"Constellations",
"Galaxy superclusters"
] |
7,283,182 | https://en.wikipedia.org/wiki/History%20of%20IBM | International Business Machines Corporation (IBM) is a multinational corporation specializing in computer technology and information technology consulting. Headquartered in Armonk, New York, the company originated from the amalgamation of various enterprises dedicated to automating routine business transactions, notably pioneering punched card-based data tabulating machines and time clocks. In 1911, these entities were unified under the umbrella of the Computing-Tabulating-Recording Company (CTR).
Thomas J. Watson (1874–1956) assumed the role of general manager within the company in 1914 and ascended to the position of President in 1915. By 1924, the company rebranded as "International Business Machines". IBM diversified its offerings to include electric typewriters and other office equipment. Watson, a proficient salesman, aimed to cultivate a highly motivated, well-compensated sales force capable of devising solutions for clients unacquainted with the latest technological advancements.
In the 1940s and 1950s, IBM began its initial forays into computing, which constituted incremental improvements to the prevailing card-based system. A pivotal moment arrived in the 1960s with the introduction of the System/360 family of mainframe computers. IBM provided a comprehensive spectrum of hardware, software, and service agreements, fostering client loyalty and solidifying its moniker "Big Blue". The customized nature of end-user software, tailored by in-house programmers for a specific brand of computers, deterred brand switching due to its associated costs. Despite challenges posed by clone makers like Amdahl and legal confrontations, IBM leveraged its esteemed reputation, assuring clients with both hardware and system software solutions, earning acclaim as one of the esteemed American corporations during the 1970s and 1980s.
However, IBM encountered difficulties in the late 1980s and 1990s, marked by substantial losses surpassing $8 billion in 1993. The mainframe-centric corporation grappled with adapting swiftly to the burgeoning Unix open systems and personal computer revolutions. Desktop machines and Unix midrange computers emerged as cost-effective and easily manageable alternatives, overshadowing multi-million-dollar mainframes. IBM responded by introducing a Unix line and a range of personal computers. The competitive edge was gradually lost to clone manufacturers who offered cost-effective alternatives, while chip manufacturers like Intel and software corporations like Microsoft reaped significant profits.
Through a series of strategic reorganizations, IBM managed to sustain its status as one of the world's largest computer companies and systems integrators. As of 2014, the company boasted a workforce exceeding 400,000 employees globally and held the distinction of possessing the highest number of patents among U.S.-based technology firms. IBM maintained a robust presence with research laboratories dispersed across twelve locations worldwide. Its extensive network comprised scientists, engineers, consultants, and sales professionals spanning over 175 countries. IBM employees were recognized for their outstanding contributions with numerous accolades, including five Nobel Prizes, four Turing Awards, five National Medals of Technology, and five National Medals of Science.
Chronology
1880s–1924: The origin of IBM
IBM traces its roots to the 1880s through the consolidation of four predecessor companies:
Bundy Manufacturing Company:
Founded in 1889 by Harlow Bundy in Binghamton, New York, as the first manufacturer of time clocks.
Tabulating Machine Company:
Initiated by Herman Hollerith, who began building punch card-based data processing machines as early as 1884.
Founded the Tabulating Machine Company in 1896 in Washington, D.C.
International Time Recording Company:
Founded in 1900 by George Winthrop Fairchild in Jersey City, New Jersey, and reincorporated in 1901 in Binghamton, later relocating to Endicott, New York in 1906.
Computing Scale Company of America:
Established in 1901 in Dayton, Ohio.
The U.S. Census Bureau contracted to use Herman Hollerith's punched card tabulating technology on the 1890 United States census. That census was completed in six years and was estimated to have saved the government $5 million. The total population of 62,947,714, the family, or rough, count, was announced after only six weeks of processing (punched cards were not used for this tabulation). Hollerith's punched cards became the tabulating industry standard for input for the next 70 years, and were initially sold by The Tabulating Machine Company. In 1906, Hollerith made the first tabulator with an automatic card feed and control panel. Hollerith later expanded to private businesses in the United States and abroad. In 1911, due to declining health, Hollerith sold the business to financier Charles Flint for $2.3 million.
On June 16, 1911, Flint merged the four companies into a new holding company named the Computing-Tabulating-Recording Company (CTR), headquartered in Endicott. The consolidation aimed to diversify the company's revenue sources and mitigate risks associated with dependence on a single industry. The consolidated entity initially had 1,300 employees and offices/plants in several locations across the United States and Toronto, Ontario. The amalgamated companies started manufacturing, and selling or leasing machinery such as commercial scales, industrial time recorders, meat and cheese slicers, tabulators, and punched cards. The individual companies continued operating under their established names as subsidiaries of CTR until the holding company was dissolved in 1933.
To manage the diversified businesses of CTR, Flint sought assistance from Thomas J. Watson Sr., the former No. 2 executive at the National Cash Register Company (NCR). In 1914, Watson was made general manager of CTR. He arrived under the shadow of an antitrust conviction from his NCR days, but less than a year later the court verdict was set aside. A consent decree was drawn up, which Watson refused to sign, gambling that there would not be a retrial. He became president of the firm on Monday, March 15, 1915. Watson's managerial strategies and emphasis on customer service and large-scale tabulating solutions propelled revenue growth and expanded the company's operations globally.
In 1916, CTR started investing in its subsidiary's employees, creating an education program. Over the next two decades, the program expanded to include management education, volunteer study clubs, and the construction of the IBM Schoolhouse in 1933. In 1917, CTR expanded to Brazil, invited by the Brazilian Government to conduct the census. In 1920, the Tabulating Machine Co. made their printing tabulator. With prior tabulators the results were displayed and had to be copied by hand. In 1923, CTR acquired majority ownership of the German tabulating firm Deutsche Hollerith Maschinen Groupe (Dehomag).
Watson had never liked the hyphenated title of Computing-Tabulating-Recording Company and chose the new name of "International Business Machines Corporation" (IBM) both for its aspirations and to escape the confines of "office appliance". The new name was first used for the company's Canadian subsidiary in 1917, and was formally adopted on February 14, 1924. The subsidiaries' names did not change; there would be no IBM-labeled products until 1933 (see below), when the subsidiaries were merged into IBM. Under Watson's leadership, IBM established key initiatives that shaped its organizational culture, including hiring disabled workers, promoting employee education, and fostering a culture of thinking ("THINK" became a company slogan in 1915). His Open Door Policy and initiatives to support employees and their families became integral aspects of IBM's culture.
1925–1929: IBM's early growth
Thomas J. Watson, during his tenure at IBM, implemented strict guidelines for employees, encompassing a dress code stipulating dark suits, white shirts, and striped ties. The consumption of alcohol, whether during working hours or otherwise, was prohibited. Watson actively led singing sessions during meetings, featuring songs such as "Ever Onward" from the official IBM songbook. Additionally, the company initiated the publication of an employee newspaper named Business Machines, consolidating coverage of all IBM businesses into one publication.
Several employee recognition programs were introduced, including the Quarter Century Club to honor those with 25 years of service and the Hundred Percent Club to reward sales personnel meeting annual quotas. In 1928, IBM launched the Suggestion Plan program, providing cash rewards to employees for valuable ideas aimed at improving IBM products and procedures. For some 70 years, IBM and its predecessor companies also manufactured clocks and other time recording products, a business that ended with the 1958 sale of the IBM Time Equipment Division to Simplex Time Recorder Company. This division produced a range of equipment, including dial recorders, job recorders, recording door locks, time stamps, and traffic recorders.
IBM expanded its product line through innovative engineering, driven by notable inventors such as James W. Bryce, Clair Lake, Fred Carroll, and Royden Pierce. Significant product innovations followed, including the first complete school time control system and the first printing tabulator in 1920. In 1923, the company pioneered the first electric keypunch. The Carroll Rotary Press, introduced in 1924, revolutionized the production of punched cards by achieving record-setting speeds. In 1928, IBM introduced the 80-column punched card, known as the "IBM Card", effectively doubling the information capacity of a card. This format remained the industry standard into the 1970s.
Key events in IBM's history during this period include the first tabulator sold to Japan in 1925, through a partnership with Morimura Brothers. IBM established its presence in Italy by opening its first office in Milan in 1927, serving national insurance institutions and banks. A significant advancement in tabulator technology came in 1928 with the Hollerith Type IV tabulator, which was capable of subtraction. The new 80-column card, which superseded the prior 45-column format, also ended interchangeability with competitors' equipment.
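The 80-column card held one character per column, encoded as a combination of punched holes in twelve rows: the zone rows 12 and 11 plus the digit rows 0 through 9. As an illustration of the character code that became standard on these cards, the following is a minimal modern sketch in Python (the function name and sample record are invented for this example, not IBM source material):

```python
# Sketch of the classic Hollerith character code on 80-column cards:
# digits get one punch in rows 0-9; letters combine a zone punch
# (12, 11, or 0) with a digit punch.

def hollerith_punches(ch: str) -> set:
    """Return the set of punched rows for one character."""
    if ch.isdigit():
        return {int(ch)}                     # digits: single punch, rows 0-9
    if "A" <= ch <= "I":
        return {12, ord(ch) - ord("A") + 1}  # zone 12 + digits 1-9
    if "J" <= ch <= "R":
        return {11, ord(ch) - ord("J") + 1}  # zone 11 + digits 1-9
    if "S" <= ch <= "Z":
        return {0, ord(ch) - ord("S") + 2}   # zone 0 + digits 2-9
    if ch == " ":
        return set()                         # blank column: no punches
    raise ValueError(f"no punch code in this sketch for {ch!r}")

# One 80-character record fills one card, one character per column.
card = [hollerith_punches(c) for c in "IBM 1928".ljust(80)]
print(card[0])  # punches for 'I' -> {12, 9}
```

Each column is a fixed-width field, which helps explain why the 80-character record persisted in terminals and file formats long after the cards themselves.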
1930–1938: The Great Depression
The 1930s Great Depression posed an extraordinary economic test, yet IBM displayed resilience by maintaining investments in personnel, manufacturing, and technological advancements during this challenging period. Rather than downsizing its workforce, Watson opted to hire additional salesmen and engineers in alignment with President Franklin Roosevelt's National Recovery Administration plan.
During this era, IBM emerged as a pioneering corporation by instituting employee benefits such as group life insurance (1934), survivor benefits (1935), and paid vacations (1936). The company furthered its commitment to education and research by establishing the IBM Schoolhouse in Endicott and constructing a modern research laboratory at the same location. Watson's strategic decisions during this time represented IBM's initial 'Bet the Company' gamble, marked by substantial internal investments to secure the future.
To ease the strain of factories that had run at maximum capacity for six years without a market to absorb their output, IBM sold the struggling Dayton Scale Division (its food services equipment business) to Hobart Manufacturing in 1933. An opportune moment arrived with the enactment of the Social Security Act of 1935, hailed as "the biggest accounting operation of all time": IBM secured the exclusive bid because it could promptly provide the necessary equipment. This landmark government contract involved maintaining employment records for 26 million individuals, and it propelled IBM's success and paved the way for additional government orders. By the decade's end, IBM had not only weathered the Depression but had also ascended to a prominent position in the industry.
Watson's visionary focus on international expansion emerged as a pivotal aspect of IBM's 20th-century growth and success. Influenced by the devastating impact of World War I on society and businesses, he advocated for commerce as a deterrent to war, emphasizing the compatibility of business interests and peace. Watson's belief was so strong that he inscribed his slogan "World Peace Through World Trade" on the façade of IBM's new World Headquarters (1938) in New York City. The phrase became a fundamental IBM business tenet, and Watson actively campaigned for the idea with international business and government leaders. He served as an informal host to visiting world leaders in New York and received numerous awards from foreign governments in recognition of his efforts to improve international relations through business connections.
In 1936, following a loss at the U.S. Supreme Court, IBM agreed to a consent decree that created a separate market for punched cards and, in effect, for subsequent computer supplies such as magnetic tapes and disk packs.
Key events
1931
The first Hollerith punched card machine capable of multiplication is introduced, known as the Hollerith 600 Multiplying Punch.
The Alphabetic Tabulator Model B, the first Hollerith alphabetical accounting machine, is introduced, swiftly followed by the full-alphabet ATC.
The New York World newspaper coins the term "Super Computing Machine" to describe the Columbia Difference Tabulator, a specialized tabulator-based machine created for the Columbia Statistical Bureau. Exceptionally massive, it earns the nickname "Packard". Institutions such as the Carnegie Foundation, Yale University, Harvard University, and others become users.
1933
Subsidiary companies merge into IBM, leading to the disappearance of names like the Tabulating Machine Company.
IBM introduces removable control panels.
IBM implements a standard 40-hour work week for both manufacturing and office locations.
IBM purchases the Electromatic Typewriter Co., primarily to secure essential patents. Electric typewriters later become one of IBM's prominent products.
1934
IBM establishes a group life insurance plan for all employees with at least one year of service.
Watson Sr. transitions IBM's factory employees to a salary-based payment system, eliminating piece work and enhancing economic stability for employees and their families.
IBM introduces the IBM 801 Bank Proof machine, a new type of proof machine that improved the efficiency of the check clearing process.
1935
During the Great Depression, IBM maintains production of new machines, positioning the company to win a significant government contract related to the Social Security Act, termed "the biggest accounting operation of all time".
1936
IBM agrees to the 1936 consent decree following its loss at the U.S. Supreme Court (see above).
1937
IBM establishes a tabulating machine data center at Columbia University, known as the Thomas J. Watson Astronomical Computing Bureau, dedicated to scientific research.
IBM introduces the first collator, the IBM 077 Collator.
IBM produces five to ten million punched cards every day, employing 32 presses in Endicott, N.Y., for this purpose.
Rey Johnson of IBM designs the IBM 805 Test Scoring Machine, revolutionizing the test scoring process with innovative pencil-mark sensing technology.
Watson Sr., as president of the International Chamber of Commerce, presides over the ICC's 9th Congress in Berlin and receives a Merit Cross of the German Eagle with Star from the Nazi government, a medal he returned in 1940.
IBM announces a policy of paying employees for six annual holidays, marking one of the first instances of holiday pay in U.S. companies. Paid vacations also commenced.
Japan Watson Statistics Accounting Machinery Co., Ltd. (now IBM Japan) is established.
1938
IBM dedicates its new World Headquarters at 590 Madison Avenue, New York City, and by that time, the company had operations in 79 countries.
1939–1945: World War II
In the years preceding the commencement of World War II, the International Business Machines Corporation (IBM) had established operational presences across various nations that later became embroiled in the global conflict, aligning with either the Allies or the Axis powers. IBM maintained the financially significant subsidiary, DEHOMAG, in Germany, where it held a majority ownership stake (from 1922 to 1949), alongside operations in Poland, Switzerland, and several other European countries. In line with the fate of numerous enterprises under enemy ownership in Axis-controlled territories, these IBM subsidiaries were seized by the Nazi regime and other Axis-affiliated governments early in the war. Concurrently, the corporation's central headquarters in New York redirected its efforts towards supporting the American war endeavor.
IBM in America
During World War II, IBM underwent a significant transformation in its product line and operations to support the war effort. Originally known for its tabulating equipment and time recording devices, IBM shifted its focus to manufacturing various military ordnance items and essential products. The product line expanded to include Sperry and Norden bombsights, Browning Automatic Rifles, the M1 Carbine, and engine parts, comprising over three dozen major ordnance items and 70 products overall. Thomas J. Watson, the president of IBM at the time, set a nominal one percent profit on these war-related products. The profits generated were used to establish a fund dedicated to assisting the widows and orphans of IBM employee war casualties.
The contributions of IBM during this period were instrumental in aiding Allied military forces. The company's tabulating equipment found extensive use in mobile records units, ballistics, accounting, logistics, and other war-related purposes. Particularly notable was the use of IBM punched-card machines at the Los Alamos laboratory during the Manhattan Project to speed the calculations necessary for the development of the first atomic bombs.
IBM also played a vital role in technological advancements during the war. In collaboration with Harvard University, IBM built the Automatic Sequence Controlled Calculator, also known as the Harvard Mark I, the first large-scale electromechanical calculator in the United States.
In the early 1930s, IBM had acquired the rights to Radiotype, an electric typewriter attached to a radio transmitter. This technology proved to be crucial during the war, as Admiral Richard E. Byrd successfully sent a test Radiotype message over 11,000 miles from Antarctica to an IBM receiving station in Ridgewood, New Jersey in 1935. During the war, Radiotype installations were extensively used, processing up to 50,000,000 words a day, and were selected by the Signal Corps for war-related communications.
To meet the demands of wartime production, IBM significantly expanded its manufacturing capacity. New buildings were constructed at its Endicott, New York plant in 1941, and new facilities were established in Poughkeepsie, New York (1941), Washington, D.C. (1942), and San Jose, California (1943). The decision to establish a presence on the West Coast, particularly in San Jose, was strategic and capitalized on the burgeoning electronics research and high technology innovation base in the region, which later became known as Silicon Valley.
Additionally, IBM was subcontracted by the U.S. government to provide punched card equipment and services for the administration and management of the Japanese American internment camps.
IBM's punched card equipment also played a vital role in code breaking and cryptanalysis efforts by various U.S. Army and Navy organizations, including Arlington Hall, OP-20-G, Central Bureau, Far East Combined Bureau, and similar Allied organizations. These efforts were essential for intelligence and information decryption during the war.
IBM in Germany and Nazi-occupied Europe
During the 1930s and throughout World War II, the Nazi regime extensively utilized Hollerith punch-card equipment, a technology developed by IBM, for various administrative and discriminatory purposes. IBM's majority-owned German subsidiary, Deutsche Hollerith Maschinen GmbH (Dehomag), played a crucial role in supplying and maintaining this equipment for the Nazis. The machinery facilitated the categorization and identification of individuals in Germany and territories under Nazi control, aiding in the execution of oppressive policies, particularly the persecution and deportation of Jews and other targeted groups during the Holocaust, leading to their internment in Nazi concentration camps.
Dehomag, like numerous foreign-owned enterprises operating in Germany during that era, fell under Nazi control before and during World War II; the Germans appointed Hermann Fellinger as enemy-property custodian of the subsidiary. Edwin Black, a journalist and historian, contends in IBM and the Holocaust that the appearance of seizure was a deceptive maneuver: the company was not plundered, its leased machinery was not confiscated, and IBM continued to receive funds through its Geneva-based subsidiary. Black argues that IBM persisted in its business relations with the Nazi regime beyond the point where they should have ceased, maintaining and expanding services to the Third Reich until the United States declared war on Germany in December 1941, at which point Germany took control of Dehomag and installed Fellinger. IBM countered that these allegations rested on known facts and previously disclosed documents, that there were no new revelations, and that it had not withheld any relevant documentation. Notable historians have expressed varying views on IBM's complicity in, and awareness of, Nazi use of tabulating machines as asserted by Black.
In parallel to these events during World War II, key developments within IBM included initiatives beyond the geopolitical context of the war. Noteworthy events included IBM's launch of a program in 1942 to train and employ disabled individuals, beginning in Topeka, Kansas, and expanding to New York City the following year. Also in 1943, IBM appointed its first female vice president, marking a significant milestone. In the realm of technology, IBM introduced the world's first large-scale calculating computer, the Automatic Sequence Controlled Calculator (ASCC), in 1944, developed in collaboration with Harvard University. This electromechanical machine, also known as the Mark I, revolutionized calculation speed. Moreover, during 1944, IBM actively supported education through its involvement with the United Negro College Fund (UNCF). Following the war, in 1945, IBM established its first research facility, the Watson Scientific Computing Laboratory, a pivotal step in the evolution of the company's research endeavors. In 1961, IBM relocated its research headquarters to the T.J. Watson Research Center in Yorktown Heights, New York.
1946–1959: Postwar
IBM experienced significant growth in the aftermath of World War II. The company anticipated challenges from an expected decrease in military spending after the war. To address this concern, IBM initiated an ambitious international expansion, leading to the establishment of the World Trade Corporation in 1949 to manage and expand foreign operations. Under the leadership of Arthur K. 'Dick' Watson, the youngest son of Watson Sr., the World Trade Corporation came to contribute half of IBM's profits by the 1970s.
IBM introduced its first computer in 1951, closely following Remington Rand's UNIVAC. Remarkably, within five years IBM captured 85% of the computer market, prompting one UNIVAC executive to complain about the competitive advantage IBM had gained through effective sales strategies. The death of Thomas J. Watson Sr., the company's founding father, on June 19, 1956, marked a significant shift in IBM's leadership: his eldest son, Thomas J. Watson Jr., who had been president since 1952, took over as chief executive.
The new CEO faced formidable challenges, navigating a rapidly evolving technological landscape with emerging computer technologies like electronic computers, magnetic tape storage, disk drives, and programming, creating both competitors and market uncertainties. Internally, the company experienced substantial growth, leading to organizational and management complexities. The absence of Watson Sr.'s charismatic leadership raised concerns among senior executives about managing IBM effectively during this transformative period. In response, Watson Jr. undertook a radical restructuring of the organization, implementing a modern management structure to enhance oversight and efficiency.
Watson Jr. institutionalized IBM's well-known but unwritten practices and philosophies into formal corporate policies and programs, such as the Three Basic Beliefs, Open Door, and Speak Up! He notably introduced the company's first equal opportunity policy letter in 1953, preceding the U.S. Supreme Court decision in Brown v. Board of Education by a year and anticipating the Civil Rights Act of 1964 by 11 years.
Furthermore, Watson Jr. expanded the company's physical capabilities, establishing key research and development laboratories in various locations. Acknowledging the need to embrace transistor technology, he mandated a corporate policy in 1957, advocating the use of solid-state circuitry in all machine developments and discouraging the use of tube circuitry in new commercial machines or devices.
IBM continued its collaboration with the U.S. government, driving computational innovation, particularly during the Cold War. This collaboration was instrumental in projects like the SAGE interceptor early detection air defense system. Beginning in 1952, IBM collaborated with MIT's Lincoln Laboratory to design an air defense computer, and later became the primary computer hardware contractor for developing SAGE for the United States Air Force. This initiative enabled IBM to access groundbreaking research on real-time, digital computers and various technological advancements.
These strategic government partnerships, combined with pioneering computer technology research and successful commercial products, including the IBM 700 series of computer systems, IBM 650, IBM 305 RAMAC with disk drive memory, and IBM 1401, positioned IBM as the world's leading technology firm by the end of the 1950s. In the five years following Watson Sr.'s passing, IBM's size had more than doubled, its stock had quintupled, and a significant majority of computers in operation in the United States were IBM machines.
Key events
During the period from 1946 to 1959, International Business Machines Corporation (IBM) witnessed several significant events and developments that played a crucial role in shaping the company's trajectory and influence in the emerging computer and technology industry. These events are outlined below:
1946
IBM 603 Electronic Multiplier: IBM announces the IBM 603 Electronic Multiplier, marking the company's first commercial product to incorporate electronic arithmetic circuits.
Chinese Character Typewriter: IBM introduces an electric Chinese ideographic character typewriter, enabling users to type at a rate of 40 to 45 Chinese words per minute. The machine utilized a cylinder with engraved ideographic type faces, showcasing IBM's early forays into diverse language processing technologies.
First Black Salesman: IBM hires its first black salesman, demonstrating an early commitment to diversity and inclusion, occurring well before the enactment of the Civil Rights Act of 1964.
1948
IBM SSEC: IBM announces the Selective Sequence Electronic Calculator (SSEC), its first large-scale digital calculating machine. Employing vacuum tubes and electromechanical relays, the SSEC is the first computer capable of modifying a stored program, a landmark in computing technology.
1950s
IBM's Involvement in Space Exploration: IBM played a crucial role in space exploration endeavors, ranging from developing ballistics tables during World War II to designing intercontinental missiles and supporting satellite launching and tracking, marking a significant contribution to the aerospace industry.
1952
IBM 701 Commercial Computer: IBM enters the commercial computer market with the introduction of the IBM 701, its first large-scale electronic computer manufactured in quantity. The IBM 701 plays a pivotal role in establishing IBM's presence in the electronics industry.
Magnetic tape vacuum column: IBM introduces the magnetic tape drive vacuum column, revolutionizing data storage by making fragile magnetic tape a viable medium. This innovation sets the stage for the widespread adoption of magnetic storage technology.
First California Research Lab: IBM opens its first West Coast laboratory in San Jose, California, a significant step that eventually contributed to the development of Silicon Valley. Within a few years, this lab would play a pivotal role in inventing the hard disk drive.
1953
Equal Opportunity Policy Letter: IBM's president, Thomas J. Watson Jr., published the company's first written equal opportunity policy letter, showcasing an early commitment to promoting equality within the workplace.
IBM 650 Magnetic Drum Data-Processing Machine: IBM announces the IBM 650 Magnetic Drum Data-Processing Machine, an intermediate-sized electronic computer designed to handle both business and scientific computations. It becomes highly popular during the 1950s.
1954
Development of NORC: IBM develops and builds the Naval Ordnance Research Computer (NORC), the fastest and most powerful electronic computer of its time, for the U.S. Bureau of Ordnance.
1956
First Magnetic Hard Disk Drive: IBM introduces the world's first magnetic hard disk for data storage, the IBM 350 disk storage unit, which stores 5 million 6-bit characters (3.75 MB) on fifty 24-inch diameter disks. This innovation marks the beginning of an era of efficient data storage.
Consent Decree: The United States Justice Department enters into a consent decree with IBM, preventing the company from monopolizing the market for punched-card tabulating and electronic data-processing machines and establishing rules for IBM's operations in this domain.
Corporate Design Initiative: IBM initiates a formal Corporate Design Program under the guidance of design consultant Eliot Noyes, seeking to create a consistent, world-class look and feel for IBM products and structures. This marks a significant step towards branding and design standardization.
First European Research Lab: IBM expands its research capabilities by opening its first research lab outside the United States, in Zurich, Switzerland, further enhancing its global research and development footprint.
Leadership Transition and Williamsburg Conference: Thomas J. Watson Sr. retires, passing the leadership of IBM to his son, Watson Jr. The transition is marked by a significant organizational restructuring at the Williamsburg conference, paving the way for the second generation of IBM leadership.
Artificial intelligence: Arthur L. Samuel of IBM's Poughkeepsie, New York, laboratory demonstrates an early form of artificial intelligence by programming an IBM 704 to play checkers, showcasing the potential for machines to "learn" from their experiences.
1957
IBM introduces the FORTRAN programming language, contributing to numerical analysis and scientific computing.
1958
SAGE AN/FSQ-7 Computer: IBM is contracted to build the SAGE (Semi-Automatic Ground Environment) AN/FSQ-7 computer for MIT's Lincoln Laboratory, a critical component of the North American air defense system.
1959
IBM 1401: IBM introduces the IBM 1401, the first high-volume, stored-program, core-memory, transistorized computer. Its versatility in running enterprise applications makes it highly popular into the early 1960s.
IBM 1403 Chain Printer: IBM launches the 1403 chain printer, marking the advent of high-speed, high-volume impact printing, a significant advancement in the field of data output and document processing.
These events collectively reflect IBM's prominent role in the evolution of computing technology, its commitment to innovation, and its pioneering contributions to various aspects of the emerging computer industry during the late 1940s and 1950s.
1960–1969: The System/360 era; unbundling software and services
On April 7, 1964, IBM introduced the revolutionary System/360, the first large "family" of computers to use interchangeable software and peripheral equipment, a departure from IBM's existing product line of incompatible machines, each of which was designed to solve specific customer requirements. The idea of a general-purpose machine was considered a gamble at the time.
Within two years, the System/360 became the dominant mainframe computer in the marketplace and its architecture became a de facto industry standard. During this time, IBM transformed from a medium-sized maker of tabulating equipment and typewriters into the world's largest computer company.
In 1969 IBM "unbundled" software and services from hardware sales. Until this time customers did not pay for software or services separately from the high price for the hardware. Software was provided at no additional charge, generally in source code form. Services (systems engineering, education and training, system installation) were provided free of charge at the discretion of the IBM Branch office. This practice existed throughout the industry.
IBM's unbundling is widely credited with leading to the growth of the software industry. After the unbundling, IBM software was divided into two main categories: System Control Programming (SCP), which remained free to customers, and Program Products (PP), which were charged for. This transformed the customer's value proposition for computer solutions, giving a significant monetary value to something that had essentially been free. This helped enable the creation of the software industry. Similarly, IBM services were divided into two categories: general information, which remained free and provided at the discretion of IBM, and on-the-job assistance and training of customer personnel, which were subject to a separate charge and were open to non-IBM customers. This decision vastly expanded the market for independent computing services companies.
The company began four decades of Olympic sponsorship with the 1960 Winter Games in Squaw Valley, California. It became a recognized leader in corporate social responsibility, joining federal equal opportunity programs in 1962, opening an inner-city manufacturing plant in 1968, and creating a minority supplier program. It led efforts to improve data security and protect privacy. It set environmental air/water emissions standards that exceeded those dictated by law and brought all its facilities into compliance with those standards. It opened one of the world's most advanced research centers in Yorktown, New York. Its international operations produced more than half of IBM's revenues by the early 1970s. The resulting technology transfer shaped the way governments and businesses operated around the world. IBM personnel and technology played an integral role in the space program and landing the first humans on the Moon in 1969. In that same year, it changed the way it marketed its technology to customers, unbundling hardware from software and services, effectively starting today's software and services industry. See unbundling of software and services, below. IBM was massively profitable, with a nearly fivefold increase in revenues and earnings during the 1960s.
In 1967, Thomas John Watson Jr. announced that IBM would open a large-scale manufacturing plant at Boca Raton, Florida, to produce its System/360 Model 20 midsized computer. On March 16, 1967, a headline in the Boca Raton News announced "IBM to hire 400 by year's end." The plan was for IBM to lease facilities to start making computers until the new site could be developed. A few months later, hiring began for assembly and production control trainees. IBM's Juan Rianda moved from Poughkeepsie, New York, to become the first plant manager at IBM's new Boca operations. To design its new campus, IBM commissioned architect Marcel Breuer, who worked closely with American architect Robert Gatje. In September 1967, the Boca team shipped the first IBM System/360 Model 20 to the City of Clearwater – the first computer in its production run. A year later, IBM 1130 Computing Systems were being produced and shipped. By 1970, IBM's Boca workforce grew to around 1,300 in part due to a Systems Development Engineering Laboratory being added to the division's operations.
Key events
1961
IBM delivers its first 7030 Stretch supercomputer. Stretch falls short of its original design objectives, and is not a commercial success. But it is a product that pioneers numerous revolutionary computing technologies which are soon widely adopted by the computer industry.
IBM moves its research headquarters from Poughkeepsie, NY to Westchester County, NY, opening the Thomas J. Watson Research Center which remains IBM's largest research facility, centering on semiconductors, computer science, physical science, and mathematics. The lab which IBM established at Columbia University in 1945 was closed and moved to the Yorktown Heights laboratory in 1970.
IBM introduces the Selectric typewriter product line. Later Selectric models feature memory, giving rise to the concepts of word processing and desktop publishing. The machine won numerous awards for its design and functionality. Selectrics and their descendants eventually captured 75 percent of the United States market for electric typewriters used in business. IBM replaced the Selectric line with the IBM Wheelwriter in 1984 and transferred its typewriter business to the newly formed Lexmark in 1991.
IBM offers its Report Program Generator, an application that allows IBM 1401 users to produce reports. This capability was adopted throughout the industry, becoming a feature offered in subsequent generations of computers. It played a role in the introduction of computers into small businesses.
1962
Basic beliefs. Drawing on established IBM policies, Thomas J. Watson Jr. codifies three IBM basic beliefs: respect for the individual, customer service, and excellence.
SABRE. Two IBM 7090 mainframes formed the backbone of the SABRE reservation system for American Airlines. As the first airline reservation system to work live over phone lines, SABRE linked high-speed computers and data communications to handle seat inventory and passenger records.
1964
IBM System/360. IBM introduces the IBM System/360 which creates a "family" of small to large computers, incorporating IBM Solid Logic Technology (SLT) microelectronics and using the same programming instructions. The concept of a compatible "family" of computers transforms the industry.
Word processing. IBM introduces the IBM Magnetic Tape Selectric Typewriter, a product that pioneered the application of magnetic recording devices to typewriting, and gave rise to desktop word processing. Referred to then as "power typing", the feature of revising stored text improved office efficiency by allowing typists to type at "rough draft" speed without the pressure of worrying about mistakes.
New corporate headquarters. IBM moves its corporate headquarters from New York City to Armonk, New York.
1965
Gemini space flights. A 59-pound onboard IBM guidance computer is used on all Gemini space flights, including the first spaceship rendezvous. IBM scientists complete the most precise computation of the Moon's orbit and develop a fabrication technique to connect hundreds of circuits on a silicon wafer.
New York World's Fair. The IBM Pavilion at the New York World's Fair closes, having hosted more than 10 million visitors during its two-year existence.
1966
Dynamic Random-Access Memory (DRAM). IBM invents one-transistor DRAM cells which permit major increases in memory capacity. DRAM chips become the mainstay of modern computer memory systems.
IBM System/4 Pi. IBM ships its first System/4 Pi computer, designed to meet U.S. Department of Defense and NASA requirements. More than 9,000 units of the 4 Pi systems are delivered by the 1980s for use in the air, at sea, and in space.
IBM Information Management System (IMS). With Rockwell and Caterpillar, IBM begins designing the Information Management System (IMS) for the Apollo program, where it is used to inventory the very large bill of materials (BOM) for the Saturn V Moon rocket and Apollo space vehicle.
1967
Fractal geometry. IBM researcher Benoit Mandelbrot conceives fractal geometry – the concept that seemingly irregular shapes can have identical structure at all scales. This new geometry makes it possible to mathematically describe the kinds of irregularities existing in nature. The concept greatly impacts the fields of engineering, economics, metallurgy, art, health sciences, and computer graphics and animation.
1968
IBM Customer Information Control System (CICS). IBM introduces the CICS transaction monitor. CICS remains to this day the industry's most popular transaction monitor.
1969
Antitrust. The United States government launches what would become a 13-year-long antitrust suit against IBM. The suit is controversially dropped by the U.S. government in 1982.
Unbundling. IBM adopts a new marketing policy that charges separately for most systems engineering activities, future computer programs, and customer education courses. This "unbundling" gives rise to the software and services industry.
Magnetic stripe cards. The American National Standards Institute makes the IBM-developed magnetic stripe technology a national standard, making possible new business models such as the credit card industry. Two years later, the International Organization for Standardization adopts the IBM design, making it a world standard (see the parsing sketch after this list).
First Moon landing. IBM personnel and computers help NASA land the first men on the Moon.
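As an aside to the magnetic stripe entry above: Track 2 of a standard stripe, whose layout was later codified in ISO/IEC 7813, packs the account number, expiry date, and service code into a compact numeric record between start and end sentinels. The following minimal Python sketch splits such a record into its fields (the function name and the sample string are invented for illustration; consult the standard for the authoritative layout):

```python
# Sketch of splitting a Track 2 magnetic stripe record into its fields:
# ;PAN=YYMM SSS discretionary ?  (sentinels ';' and '?', separator '=')

def parse_track2(track: str) -> dict:
    """Split a Track 2 string into its standard fields."""
    if not (track.startswith(";") and track.endswith("?")):
        raise ValueError("missing start (;) or end (?) sentinel")
    body = track[1:-1]
    pan, _, rest = body.partition("=")  # '=' separates PAN from the rest
    return {
        "pan": pan,                 # primary account number, up to 19 digits
        "expiry": rest[:4],         # YYMM
        "service_code": rest[4:7],  # three-digit service code
        "discretionary": rest[7:],  # issuer-defined data
    }

print(parse_track2(";1234567890123456=29011011234567890?"))
```

The terse, fixed layout reflects the technology's origin: the stripe had to be readable by simple, inexpensive terminals with no parsing logic beyond fixed offsets.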
1970–1974: The challenges of success
The Golden Decade of the 1960s was a hard act to follow, and the 1970s got off to a troubling start when CEO Thomas J. Watson Jr. suffered a heart attack and retired in 1971. For the first time since 1914 – nearly six decades – IBM would not have a Watson at the helm. Moreover, after just one leadership change over those nearly 60 years, IBM would endure two in two years. T. Vincent Learson succeeded Watson as CEO, then quickly retired upon reaching the mandatory retirement age of 60 in 1973. Following Learson in the CEO office was Frank T. Cary, a 25-year IBMer who had run the data processing division in the 1960s.
Datamation in 1971 stated that "the perpetual, ominous force called IBM rolls on". The company's dominance let it keep prices high and rarely update products, all built with only IBM components. During Cary's tenure as CEO, the IBM System/370 was introduced in 1970 as IBM's new mainframe. The S/370 did not prove as technologically revolutionary as its predecessor, the System/360. From a revenue perspective, it more than sustained the cash cow status of the 360.
A less successful effort to replicate the 360 mainframe revolution was the Future Systems project. Between 1971 and 1975, IBM investigated the feasibility of a new revolutionary line of products designed to make obsolete all existing products in order to re-establish its technical supremacy. This effort was terminated by IBM's top management in 1975. By then it had consumed most of the high-level technical planning and design resources, thus jeopardizing progress of the existing product lines (although some elements of FS were later incorporated into actual products).
Other IBM innovations during the early 1970s included the IBM 3340 disk unit – introduced in 1973 and known as "Winchester" after IBM's internal project name – which was a storage technology which more than doubled the information density on disk surfaces. Winchester technology was adopted by the industry and used for the next two decades.
Some 1970s-era IBM technologies emerged to become facets of everyday life. IBM developed magnetic stripe technology in the 1960s, and it became a credit card industry standard in 1971. The IBM-invented floppy disk, also introduced in 1971, became the standard for storing personal computer data during the first decades of the PC era. IBM Research scientist Edgar 'Ted' Codd wrote a seminal paper describing the relational database, an invention that Forbes magazine described as one of the most important innovations of the 20th century. The IBM 5100, 50 lbs. and $9000 of personal mobility, was introduced in 1975 and presaged – at least in function if not size or price or units sold – the Personal Computer of the 1980s. IBM's 3660 supermarket checkout station, introduced in 1973, used holographic technology to scan product prices from UPC bar codes; the UPC itself, based on a 1952 IBM patent, became a grocery industry standard. Also in 1973, bank customers began making withdrawals, transfers and other account inquiries via the IBM 3614 Consumer Transaction Facility, an early form of today's automatic teller machines.
IBM had an innovator's role in pervasive technologies that were less visible as well. In 1974, IBM announced Systems Network Architecture (SNA), a networking protocol for computing systems. SNA is a uniform set of rules and procedures for computer communications to free computer users from the technical complexities of communicating through local, national, and international computer networks. SNA became the most widely used system for data processing until more open architecture standards were approved in the 1990s. In 1975, IBM researcher Benoit Mandelbrot conceived fractal geometry – a new geometrical concept that made it possible to describe mathematically the kinds of irregularities existing in nature. Fractals had a great impact on engineering, economics, metallurgy, art and health sciences, and are integral to the field of computer graphics and animation.
A less successful business endeavor for IBM was its entry into the office copier market in the 1970s, after turning down the opportunity to purchase the xerography technology. The company was immediately sued by Xerox Corporation for patent infringement. Although Xerox held the patents for the use of selenium as a photoconductor, IBM researchers perfected the use of organic photoconductors, which avoided the Xerox patents. The litigation lasted until the late 1970s and was ultimately settled. Even so, IBM never gained traction in the copier market and withdrew from the marketplace in the 1980s. Organic photoconductors are now widely used in copiers.
Throughout this period, IBM was litigating the antitrust suit filed by the Justice Department in 1969. But in a related bit of case law, the landmark Honeywell v. Sperry Rand U.S. federal court case was concluded in April 1973. The 1964 patent for the ENIAC, the world's first general-purpose electronic digital computer, was found both invalid and unenforceable for a variety of reasons, thus putting the invention of the electronic digital computer into the public domain. However, IBM was ruled to have created a monopoly via its 1956 patent-sharing agreement with Sperry Rand.
American antitrust laws did not directly affect IBM in Europe, where as of 1971 it had fewer competitors and more than 50% market share in almost every country. Customers preferred IBM because it was, as Datamation said, "the only truly international computer company", able to serve clients almost anywhere. Rivals such as ICL, CII, and Siemens began to cooperate to preserve a European computer industry.
Key events
1970
System/370. IBM announces System/370 as successor to System/360.
Relational databases. IBM researcher Edgar F. 'Ted' Codd introduces the relational model, which calls for information stored within a computer to be arranged in easy-to-interpret tables so that large amounts of data can be accessed and managed. Today, most database structures are based on the IBM concept of relational databases (a modern sketch follows this list).
Office copiers. IBM introduces the first of its three models of xerographic copiers. These machines mark the first commercial use of organic photoconductors, which have since become the dominant technology.
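To illustrate the relational idea in the entry above, here is a minimal modern sketch using Python's built-in sqlite3 module as a stand-in (IBM's own relational line ran through System R and later DB2; the table and data here are invented). Data lives in plain tables, and a declarative query names what is wanted rather than how to find it:

```python
# Minimal relational example: data in a table, retrieved declaratively.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE machines (model TEXT, year INTEGER)")
conn.executemany("INSERT INTO machines VALUES (?, ?)",
                 [("System/360", 1964), ("System/370", 1970)])

# The query states the condition; the engine decides how to satisfy it.
for (model,) in conn.execute("SELECT model FROM machines WHERE year >= 1970"):
    print(model)  # -> System/370
```

The separation between what is asked and how it is answered is what Codd's tables made possible, and it is a key reason the relational model displaced navigational databases.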
1971
Speech recognition. IBM achieves its first operational application of speech recognition, which enables engineers servicing equipment to talk to and receive spoken answers from a computer that can recognize about 5,000 words. Today, IBM's ViaVoice recognition technology has a vocabulary of 64,000 words and a 260,000-word back-up dictionary.
Floppy disk. IBM introduces the floppy disk. Convenient and portable, the floppy becomes a personal computer industry standard for storing data.
1973
Winchester storage technology. The IBM 3340 disk unit – known as "Winchester" after IBM's internal project name – is introduced, more than doubling the information density on disk surfaces. It featured a smaller, lighter read/write head that rode on an air film only 18 millionths of an inch thick. Winchester technology was adopted by the industry and used for the next two decades.
Nobel Prize. Dr. Leo Esaki, an IBM Fellow who joined the company in 1960, shares the 1973 Nobel Prize in physics for his 1958 discovery of the phenomenon of electron tunneling. His discovery of the semiconductor junction called the Esaki diode finds wide use in electronics applications. More importantly, his work in the field of semiconductors lays a foundation for further exploration in the electronic transport of solids.
1974
SNA. IBM announces Systems Network Architecture (SNA), a networking protocol for computing systems. SNA is a uniform set of rules and procedures for computer communications to free computer users from the technical complexities of communicating through local, national, and international computer networks. SNA becomes the most widely used system for data processing until more open architecture standards were approved in the 1990s.
1975–1992: Information revolution, rise of software and PC industries
President of IBM John R. Opel became CEO in 1981. IBM was one of the world's largest companies and had a 62% share of the mainframe computer market that year. While frequently relocated employees and families still joked that IBM stood for "I've Been Moved", and employees of acquired companies feared that IBM's formal culture would change their more casual offices, IBM no longer required white shirts for male employees, who still wore conservative suits when meeting customers. Former employees such as Gene Amdahl used their training to found and lead many competitors and suppliers.
Expecting Japanese competition, IBM in the late 1970s began investing in manufacturing to lower costs, offering volume discounts and lower prices to large customers, and introducing new products more frequently. The company also sometimes used non-IBM components in products, and sometimes resold others' products as its own. In 1980 it introduced its first computer terminal compatible with non-IBM equipment, and Displaywriter was the first new product less expensive than the competition. IBM's share of the overall computer market, however, declined from 60% in 1970 to 32% in 1980. Perhaps distracted by the long-running antitrust lawsuit, the "Colossus of Armonk" missed the fast-growing minicomputer market during the 1970s, and was behind rivals such as Wang, Hewlett-Packard (HP), Digital Equipment Corporation (DEC), Tandem Computers, and Control Data in other areas.
In 1979 BusinessWeek asked, "Is IBM just another stodgy, mature company?" By 1981 its stock price had declined by 22%. IBM's earnings for the first half of the year grew by 5.3% – one third of the inflation rate – while those of DEC grew by more than 35%. Although IBM began selling minicomputers, in January 1982 the Justice Department ended the antitrust suit, after IBM unbundled services and, as The New York Times reported, experts concluded that IBM no longer dominated the computer industry.
IBM wished to avoid the same outcome with the new personal computer industry. The company had studied the market for years and, as with UNIVAC, others like Apple Computer entered it first; IBM did not want a product with a rival's logo on corporate customers' desks. The company opened its first Product Center retail store in November 1980, and a team in the Boca Raton, Florida, office built the IBM PC using commercial off-the-shelf components. The new computer debuted on August 12, 1981, from the Entry Systems Division led by Don Estridge. IBM immediately became more of a presence in the consumer marketplace, thanks to the memorable Little Tramp advertising campaign. Though not a spectacular machine by technological standards of the day, the IBM PC brought together all of the most desirable features of a computer into one small machine. It had 128 kilobytes of memory (expandable to 256 kilobytes), one or two floppy disks and an optional color monitor. And it had the prestige of the IBM brand. Although not inexpensive, with a base price of US$1,565 it was affordable for businesses, and many businesses purchased PCs. Reassured by the IBM name, they began buying these microcomputers on their own budgets for numerous applications that corporate computer departments did not, and in many cases could not, accommodate. Typically, these purchases were not made by corporate computer departments, as the PC was not seen as a "proper" computer. They were often instigated by middle managers and senior staff who saw the potential of the killer app: first the revolutionary VisiCalc spreadsheet, later surpassed by a far more powerful and stable product, Lotus 1-2-3.
IBM's dominance of the mainframe market in Europe and the US encouraged existing customers to buy the PC, and vice versa; as sales of what had been an experiment in a new market became a substantial part of IBM's financials, the company found that customers also bought larger IBM computers. Unlike the BUNCH and other rivals, IBM quickly adjusted to the retail market, with its own sales force competing with outside retailers for the first time. By 1985 IBM was the world's most profitable industrial company, and its sales of personal computers were larger than those of its minicomputers, despite having been in the latter market since the early 1970s.
By 1983 industry analyst Gideon Gartner warned that IBM "is creating a dangerous situation for competitors in the marketplace". The company helped others by defining technical standards and creating large new software markets, but the new aggressiveness that began in the late 1970s helped it dominate areas like computer leasing and computer-aided design. Free from the antitrust case, IBM was present in every computer market other than supercomputers, and entered communications by purchasing Rolm – the first acquisition in 18 years – and 18% of MCI. The company was so important to component suppliers that it urged them to diversify. When IBM, which accounted for 61% of Miniscribe's revenue, abruptly reduced orders from Miniscribe, shares of not only Miniscribe but also of uninvolved companies that sold to IBM fell, as investors feared their vulnerability. IBM was itself vulnerable when suppliers could not fulfill orders, and customers and dealers also feared becoming overdependent; the PC was so popular in 1983 that dealers received only 60% or less of the inventory they wanted.
The IBM PC AT's 1984 debut startled the industry. Rivals admitted that they did not expect the low price of the sophisticated product. IBM's attack on every area of the computer industry and entry into communications caused competitors, analysts, and the press to speculate that it would again be sued for antitrust. Datamation and others said that the company's continued growth might hurt the United States, by suppressing startups with new technology. Gartner Group estimated in 1985 that of the 100 largest data-processing companies, IBM had 41% of all revenue and 69% of profit. Its computer revenue was about nine times that of second-place DEC, and larger than that of IBM's six largest Japanese competitors combined. The 22% profit margin was three times the 6.7% average for the other 99 companies. Competitors complained to Congress, ADAPSO discussed the company with the Justice Department, and European governments worried about IBM's influence but feared affecting its more than 100,000 employees there at 19 facilities.
However, the company soon lost its lead in both PC hardware and software, thanks in part to its unprecedented (for IBM) decision to contract PC components to outside companies like Microsoft and Intel. Up to this point in its history, IBM relied on a vertically integrated strategy, building most key components of its systems itself, including processors, operating systems, peripherals, databases and the like. In an attempt to accelerate the time-to-market for the PC, IBM chose not to build a proprietary operating system and microprocessor. Instead, it sourced these vital components from Microsoft and Intel respectively. Ironically, in a decade which marked the end of IBM's monopoly, it was this fateful decision by IBM that passed the sources of its monopolistic power (operating system and processor architecture) to Microsoft and Intel, paving the way for the rise of PC compatibles and the creation of hundreds of billions of dollars of market value outside of IBM.
John Akers became IBM's CEO in 1985. During the 1980s, IBM's investment in building its research organization produced four Nobel Prize winners in physics and yielded breakthroughs in mathematics, memory storage, telecommunications, and expanded computing capabilities. In 1980, IBM researcher John Cocke introduced Reduced Instruction Set Computing (RISC). Cocke received both the National Medal of Technology and the National Medal of Science for his innovation, but IBM itself failed to recognize the importance of RISC and lost the lead in RISC technology to Sun Microsystems.
In 1984 the company partnered with Sears to develop a pioneering online home banking and shopping service for home PCs that launched in 1988 as Prodigy. Despite a strong reputation and anticipating many of the features, functions, and technology that characterize the online experience of today, the venture was plagued by overly conservative management decisions, and was eventually sold in the mid-1990s.
The IBM token-ring local area network, introduced in 1985, permitted personal computer users to exchange information and share printers and files within a building or complex. In 1988, IBM partnered with the University of Michigan and MCI Communications to create the National Science Foundation Network (NSFNet), an important step in the creation of the Internet. But within five years the company backed away from this early lead in Internet protocols and router technologies in order to support its existing SNA revenue stream, thereby missing a boom market of the 1990s. Still, IBM investments and advances in microprocessors, disk drives, network technologies, software applications, and online commerce in the 1980s set the stage for the emergence of the connected world in the 1990s.
However, by the end of the decade, IBM was in trouble. It was a bloated organization of some 400,000 employees, heavily invested in too many low-margin, transactional, commodity businesses. IBM's positions in technologies it had invented and/or commercialized – DRAM, hard disk drives, the PC, electric typewriters – were starting to erode. The company had a massive international organization characterized by redundant processes and functions; its cost structure couldn't compete with smaller, less diversified competitors. Additionally, back-to-back revolutions – the PC and the client-server – combined to undermine IBM's core mainframe business. The PC revolution placed computers directly in the hands of millions of people. It was followed by the client/server revolution, which sought to link PCs (the "clients") with larger computers that labored in the background (the "servers" that served data and applications to client machines). Both revolutions transformed the way customers viewed, used and bought technology. And both fundamentally rocked IBM and its mainframe competitors. Businesses' purchasing decisions were put in the hands of individuals and departments, not the places where IBM had long-standing customer relationships. Piece-part technologies took precedence over integrated solutions. The focus was on the desktop and personal productivity, not on business applications across the enterprise. As a result, earnings, which had been at or above US$5 billion since the early 1980s, dropped by more than a third to US$3 billion in 1989. A brief spike in earnings in 1990 did not last as corporate spending continued to shift from high-profit-margin mainframes to lower-margin microprocessor-based systems. In addition, corporate downsizing was in full swing.
Radical changes were considered and implemented. As IBM assessed the situation, it was clear that competition and innovation in the computer industry were now taking place along segmented, versus vertically integrated lines, where computer industry leaders emerged in their respective domains. Examples included Intel in microprocessors, Microsoft in desktop software, Novell in networking, HP in printers, Seagate in disk drives and Oracle Corporation in database software. IBM's dominance in personal computers was challenged by the likes of Compaq and later Dell. Recognizing this trend, management, with the support of the Board of Directors, began to implement a plan to split IBM into increasingly autonomous business units (e.g. processors, storage, software, services, printers, etc.) to compete more effectively with competitors that were more focused and nimble and had lower cost structures.
IBM also began spinning off its many divisions into autonomous subsidiaries (so-called "Baby Blues") in an attempt to make the company more manageable and to streamline IBM by having other investors finance those companies. These included AdStar, dedicated to disk drives and other data storage products (on creation the largest data storage business in the world); IBM Application Business Systems, dedicated to mid-range computers; IBM Enterprise Systems, dedicated to mainframes; Pennant Systems, dedicated to mid-range and large printers; Lexmark, dedicated to small printers, keyboards, and typewriters (such as the Selectric); and more. Lexmark was acquired by Clayton & Dubilier in a leveraged buyout shortly after its formation.
In September 1992, IBM combined and spun off its various non-mainframe and non-midrange personal computer manufacturing divisions into an autonomous wholly owned subsidiary known as the IBM Personal Computer Company (IBM PC Co.). This corporate restructuring came after IBM reported a sharp drop in profit margins during the second quarter of fiscal year 1992; market analysts attributed the drop to a fierce price war in the personal computer market over the summer of 1992. The corporate restructuring was one of the largest and most expensive in history up to that point. By the summer of 1993, the IBM PC Co. had divided into multiple business units itself, including Ambra Computer Corporation and the IBM Power Personal Systems Group, the former an attempt to design and market "clone" computers of IBM's own architecture and the latter responsible for IBM's PowerPC-based workstations.
These efforts failed to halt the slide. A decade of steadily widening corporate acceptance of local area networking technology, a trend headed by Novell Inc. and other vendors, and its logical counterpart, the ensuing decline of mainframe sales, brought a wake-up call for IBM. After two consecutive years of reporting losses in excess of $1 billion, on January 19, 1993, IBM announced a US$4.97 billion loss for the 1992 financial year, which was then the largest single-year corporate loss in U.S. history. All told, between 1991 and 1993, the company posted net losses of nearly $16 billion. IBM's three-decade-long Golden Age, triggered by Watson Jr. in the 1950s, was over. The computer industry now viewed IBM as no longer relevant, an organizational dinosaur. And hundreds of thousands of IBMers lost their jobs, including CEO John Akers.
Key events
mid-1970s: IBM VNET. VNET was an international computer networking system deployed in the mid-1970s, providing email and file-transfer for IBM. By September 1979, the network had grown to include 285 mainframe nodes in Europe, Asia, and North America.
1975: Fractals. IBM researcher Benoit Mandelbrot conceives fractal geometry – the concept that seemingly irregular shapes can have identical structure at all scales. This new geometry makes it possible to describe mathematically the kinds of irregularities existing in nature. Fractals later make a great impact on engineering, economics, metallurgy, art, and health sciences, and are also applied in the field of computer graphics and animation.
1975: IBM 5100 Portable computer. IBM introduces the 5100 Portable Computer, a 50 lb. desktop machine that put computer capabilities at the fingertips of engineers, analysts, statisticians, and other problem-solvers. More "luggable" than portable, the 5100 can serve as a terminal for the System/370 and costs from $9000 to $20,000.
1976: Space Shuttle. The Enterprise, the first vehicle in the U.S. Space Shuttle program, makes its debut at Palmdale, California, carrying IBM AP-101 flight computers and special hardware built by IBM.
1976: Laser printer. The first IBM 3800 printer is installed. The 3800 is the first commercial printer to combine laser technology and electrophotography. The technology speeds the printing of bank statements, premium notices, and other high-volume documents, and remains a workhorse for billing and accounts receivable departments.
1977: Data Encryption Standard. IBM-developed Data Encryption Standard (DES), a cryptographic algorithm, is adopted by the U.S. National Bureau of Standards as a national standard.
1979: Retail checkout. IBM develops the Universal Product Code (UPC) in the 1970s as a method for embedding pricing and identification information on individual retail items. In 1979, IBM applies holographic scanner technology in IBM's supermarket checkout station to read the UPC stripes on merchandise, one of the first major commercial uses of holography. IBM's support of the UPC concept helps lead to its widespread acceptance by retail and other industries worldwide.
1979: Thin film recording heads. Instead of using hand-wound wire structures as coils for inductive elements, IBM researchers substitute thin film "wires" patterned by optical lithography. This leads to higher performance recording heads at a reduced cost and establishes IBM's leadership in "areal density": storing the most data in the least space. The result is higher-capacity and higher-performance disk drives.
1979: Overcoming barriers to technology use. Since 1946, with its announcement of Chinese and Arabic ideographic character typewriters, IBM has worked to overcome cultural and physical barriers to the use of technology. As part of these ongoing efforts, IBM introduces the 3270 Kanji Display Terminal; the System/34 Kanji System with an ideographic feature, which processes more than 11,000 Japanese and Chinese characters; and the Audio Typing Unit for sight-impaired typists.
1979: First multi-function copier/printer. IBM introduces the IBM 6670 Information Distributor, a communication-enabled laser printer and photocopier combination – the first multi-function (copier/printer) device for the office market.
1980: Thermal conduction modules. IBM introduces the 3081 processor, the company's most powerful to date, which features Thermal Conduction Modules. In 1990, the Institute of Electrical and Electronics Engineers, Inc., awards its 1990 Corporate Innovation Recognition to IBM for the development of the Multilayer Ceramic Thermal Conduction Module for high performance computers.
1980: Reduced instruction set computing (RISC) architecture. IBM successfully builds the first prototype computer employing IBM Fellow John Cocke's RISC architecture. RISC simplified the instructions given to computers, making them faster and more powerful. Today, RISC architecture is the basis of most workstations and is widely viewed as the dominant computing architecture.
1981: IBM PC. The IBM Personal Computer goes mass market and helps revolutionize the way the world does business. A year later, Time magazine names the personal computer its "Machine of the Year", in place of its usual Person of the Year.
1981: LASIK surgery. Three IBM scientists invent the excimer laser surgical procedure that later forms the basis of LASIK and PRK corrective eye surgeries.
1982: Antitrust suit. The United States antitrust suit against IBM, filed in 1969, is dropped by Assistant Attorney General William F. Baxter as "without merit". The reasons given were that the government was backing away from antitrust actions and that IBM had lost its dominance. As was later discovered, Baxter failed to disclose that he had been retained as a consultant defending IBM in private antitrust cases.
1982: Trellis-coded modulation. Trellis-coded modulation (TCM) is first used in voice-band modems to send data at higher rates over telephone channels. Today, TCM is applied in a large variety of terrestrial and satellite-based transmission systems as a key technique for achieving faster and more reliable digital transmission.
1983: IBM PCjr. IBM announces the widely anticipated PCjr., an attempt to enter the home computing marketplace. The product, however, fails to capture the fancy of consumers due to its lack of compatibility with IBM PC software, its price point, and its unfortunate 'chiclet' keyboard design. IBM terminates the product after 18 months of disappointing sales.
1984: IBM 3480 magnetic tape system. The industry's most advanced magnetic tape system, the IBM 3480, introduces a new generation of tape drives that replace the familiar reel of tape with an easy-to-handle cartridge. The 3480 was the industry's first tape system to use "thin-film" recording head technology.
1984: Non-discrimination policy. IBM adds sexual orientation to the company's non-discrimination policy, becoming one of the first major companies to make this change.
1984: ROLM partnership/acquisition. IBM acquires ROLM Corporation, based in Santa Clara, California, for $1.25 billion (building on an existing partnership), intending to develop digital telephone switches to compete directly with Northern Telecom and AT&T. Two of the most popular systems were the large-scale PABX known as the ROLM CBX and the smaller PABX known as the ROLM Redwood. ROLM is later acquired by Siemens AG in 1989–1992.
1985: MCI. IBM acquires 18% of MCI Communications, the second-largest long-distance carrier in the United States, in June 1985.
1985: RP3. Sparked in part by national concerns over losing its technology leadership in the early 1980s, IBM re-enters the supercomputing field with the RP3 (IBM Research Parallel Processor Prototype). IBM researchers worked with scientists from New York University's Courant Institute of Mathematical Sciences to design RP3, an experimental computer consisting of up to 512 processors, linked in parallel and connected to as many as two billion characters of main memory. Over the next five years, IBM provides more than $30 million in products and support to a supercomputer facility established at Cornell University in Ithaca, New York.
1985: Token Ring Network. IBM's Token Ring technology brings a new level of control to local area networks and quickly becomes an industry standard for networks that connect printers, workstations and servers.
1986: IBM Almaden Research Center. IBM Research dedicates the Almaden Research Center in California. Today, Almaden is IBM's second-largest laboratory focused on storage systems, technology and computer science.
1986: Nobel Prize: Scanning tunneling microscopy. IBM Fellows Gerd K. Binnig and Heinrich Rohrer of the IBM Zurich Research Laboratory win the 1986 Nobel Prize in physics for their work in scanning tunneling microscopy. Drs. Binnig and Rohrer are recognized for developing a powerful microscopy technique which makes images of surfaces where individual atoms may be seen.
1987: Nobel Prize: High-Temperature Superconductivity. J. Georg Bednorz and IBM Fellow Alex Müller of the IBM Zurich Research Laboratory receive the 1987 Nobel Prize for physics for their breakthrough discovery of high-temperature superconductivity in a new class of materials. They discover superconductivity in ceramic oxides that carry electricity without loss of energy at higher temperatures than any other superconductor.
1987: Antivirus tools. As personal computers become vulnerable to attack from viruses, a small research group at IBM develops a suite of antivirus tools. The effort leads to the establishment of the High Integrity Computing Laboratory (HICL) at IBM. HICL goes on to pioneer the science of theoretical and observational computer virus epidemiology.
1987: Special needs access. IBM Researchers demonstrate the feasibility for blind computer users to read information directly from computer screens with the aid of an experimental mouse. And in 1988 the IBM Personal System/2 Screen Reader is announced, permitting blind or visually impaired people to hear the text as it is displayed on the screen in the same way a sighted person would see it. This is the first in the IBM Independence Series of products for computer users with special needs.
1988: IBM AS/400. IBM introduces the IBM Application System/400, a new family of easy-to-use computers designed for small and intermediate-sized companies. As part of the introduction, IBM and IBM Business Partners worldwide announce the availability of more than 1,000 software packages resulting in the AS/400 becoming a popular business computing system.
1988: National Science Foundation Network (NSFNET). IBM collaborates with the Merit Network, MCI Communications, the State of Michigan, and the National Science Foundation to upgrade and expand the 56 kbit/s NSFNET to 1.5 Mbit/s (T1) and later 45 Mbit/s (T3). This partnership provides the network infrastructure and lays the groundwork for the explosive growth of the Internet in the 1990s. The NSFNET upgrade boosts network capacity and speed, allowing more intensive forms of data, such as graphics, to travel across the Internet.
1989: Silicon germanium transistors. Replacing expensive and exotic materials like gallium arsenide with silicon germanium (known as SiGe), championed by IBM Fellow Bernie Meyerson, creates faster chips at lower cost. Introducing germanium into the base layer of an otherwise all-silicon bipolar transistor allows for improvements in operating frequency, current, noise and power capabilities.
1990: System/390. IBM introduces the System/390 family. IBM incorporates complementary metal-oxide-semiconductor (CMOS) based processors into the System/390 Parallel Enterprise Server in 1995. In 1998 the System/390 G5 Parallel Enterprise Server 10-way Turbo model exceeded the 1,000 MIPS barrier.
1990: RISC System/6000. IBM announces the RISC System/6000, a family of nine workstations that are among the fastest and most powerful in the industry. The RISC System/6000 uses reduced instruction set computing (RISC) technology, a computer design pioneered by IBM that simplifies processing steps to speed the execution of commands.
1990: Moving individual atoms. Donald M. Eigler, a physicist and IBM Fellow at the IBM Almaden Research Center demonstrated the ability to manipulate individual atoms using a scanning tunneling microscope, writing I-B-M using 35 individual xenon atoms.
1990: Environmental programs. IBM joins 14 U.S. corporations to establish a worldwide program to achieve environmental, health and safety goals by continuously improving environmental management practices and performance. IBM has invested more than $1 billion since 1973 to provide environmental protection for the communities in which IBM facilities are located.
1991: Services business. IBM reenters the computer services business through the formation of the Integrated Systems Solution Corporation (ISSC). While remaining in compliance with the provisions of the 1956 Consent Decree, ISSC becomes the second-largest provider of computer services within four years. The new business becomes one of IBM's primary revenue streams.
1992: Personal computer division divestiture. IBM combines its various non-mainframe, non-midrange personal computer manufacturing divisions and spins them off into an autonomous wholly owned subsidiary known as the IBM Personal Computer Company (IBM PC Co.), following a fierce price war in the PC market that shrank IBM's profit margins. This restructuring is one of the largest and most expensive in history.
1993–2018: IBM's near disaster and rebirth
In April 1993, IBM hired Louis V. Gerstner Jr. as its new CEO. For the first time since 1914 IBM had recruited a leader from outside its ranks. Gerstner had been chairman and CEO of RJR Nabisco for four years, and had previously spent 11 years as a top executive at American Express. Gerstner brought with him a customer-oriented sensibility and the strategic-thinking expertise that he had honed through years as a management consultant at McKinsey & Co. Recognizing that his first priority was to stabilize the company, he adopted a triage mindset and took quick action. His early decisions included recommitting to the mainframe, selling the Federal Systems Division to Loral in order to replenish the company's cash coffers, continuing to shrink the workforce (reaching a low of 220,000 employees in 1994), and driving significant cost reductions within the company. Most importantly, Gerstner decided to reverse the move to spin off IBM business units into separate companies. He recognized that one of IBM's strengths was its ability to provide integrated solutions for customers – more than piece parts or components. Splitting the company would have destroyed that IBM advantage.
These initial steps worked. In 1994 IBM turned a profit of $3 billion. But stabilization was not Gerstner's endgame – the restoration of IBM's once-great reputation was. To do that, he needed a winning business strategy. Over the next decade, Gerstner shed commodity businesses and focused on high-margin opportunities, divesting low-margin operations such as DRAM, the IBM Network, personal printers, and hard drives.
By building upon the decision to keep the company whole, IBM built a global services business and a reputation as a technology integrator. IBM claimed that the services business was brand-agnostic, integrating whatever technologies the client required, even if they came from an IBM competitor. IBM augmented this services business with the 2002 acquisition of the consultancy division of PricewaterhouseCoopers for US$3.5 billion.
Another high margin opportunity IBM invested in was software. Starting in 1995 with its acquisition of Lotus Development Corp., IBM built its software portfolio from one brand, IBM DB2, to five: DB2, Lotus, WebSphere, Tivoli, and Rational. Content to leave the consumer applications business to other firms, IBM's software strategy focused on middleware – the vital software that connects operating systems to applications. The middleware business played to IBM's strengths, and its higher margins improved the company's bottom line significantly as the century came to an end.
Not all software that IBM developed was successful. While the operating system OS/2 was arguably technically superior to Microsoft Windows 95, OS/2 sales were largely concentrated in networked computing used by corporate professionals. OS/2 failed to develop much penetration in the consumer and stand-alone desktop PC segments. There were reports that it could not be installed properly on IBM's own Aptiva series of home PCs.
Microsoft made an offer in 1994 stipulating that if IBM ended development of OS/2 completely, it would receive the same terms as Compaq for a license of Windows 95. IBM refused and instead pursued an "IBM First" strategy of promoting OS/2 Warp and disparaging Windows, aiming to drive sales of its own software and hardware. Negotiations between IBM and Microsoft over Windows 95 licensing, already difficult, stalled in 1995 when IBM purchased Lotus Development, whose Lotus SmartSuite would have competed directly with Microsoft Office. As a result, IBM received its license later than its competitors, which hurt sales of IBM PCs. IBM officials later conceded that OS/2 would not have been a viable operating system to keep them in the PC business.
While IBM hardware and technologies were relatively de-emphasized in Gerstner's three-legged business model, they were not relegated to secondary status. The company brought its research organization to bear more closely on its existing product lines and development processes. While Internet applications and deep computing overtook client-server computing as key business technology priorities, mainframes returned to relevance. IBM reinvigorated its mainframe line with CMOS technologies, which made it among the most powerful and cost-efficient in the marketplace. Investments in microelectronics research and manufacturing made IBM a world leader in specialized, high-margin chip production – it developed 200 mm wafer processes in 1992, and 300 mm wafers within the decade. IBM-designed chips were used in the PlayStation 3, Xbox 360, and Wii game consoles. IBM also regained the lead in supercomputing with high-end machines based upon scalable parallel processor technology.
Equally significant in IBM's revival was its reentry into the popular mindset. On October 5, 1992, at the COMDEX computer expo, IBM announced the first ThinkPad laptop computer, the 700C. The ThinkPad, a premium machine that then cost US$4,350, included a 25 MHz Intel 80486SL processor, a 10.4-inch active matrix display, a removable 120 MB hard drive, 4 MB of RAM (expandable to 16 MB) and a TrackPoint II pointing device. The design by noted designer Richard Sapper made the ThinkPad successful with the digerati, and its cool factor brought back to the IBM brand some of the cachet lost in the PC wars of the 1980s. Instrumental to this popular resurgence was the 1997 chess match between IBM's chess-playing computer system Deep Blue and reigning world chess champion Garry Kasparov. Deep Blue's victory was a historic first for a computer over a reigning world champion. Also helping the company reclaim its position as a technology leader was its annual domination of supercomputer rankings and patent leadership statistics. Ironically, a contributor to reviving the company's reputation was the dot-com bubble collapse in 2000, in which many of the edgy technology high flyers of the 1990s failed to survive the downturn. These collapses discredited some of the more fashionable Internet-driven business models that IBM had previously been compared against.
Another factor was the company's revival of the IBM brand. The company's marketing during the economic downturn was chaotic, presenting different, sometimes discordant voices in the marketplace. This brand chaos was attributable in part to the company having 70 different advertising agencies in its employ. In 1994, IBM consolidated its advertising in one agency. The result was a coherent, consistent message to the marketplace.
As IBM recovered its financial footing, it sought to redefine the Internet age in ways that played to traditional IBM strengths, couching the discussion in business-centric terms with initiatives like e-commerce and On Demand. It supported open source initiatives, forming ventures with partners and competitors alike.
The company also revamped its philanthropic practices to bring focus on improving K-12 education. It ended its 40-year technology partnership with the International Olympic Committee after a successful engagement at the 2000 Olympic Games in Sydney, Australia. On the human resources front, IBM adopted and integrated diversity principles and practices ahead of the industry. It added sexual orientation to its non-discrimination practices in 1984, in 1995 created executive diversity task forces, and in 1996 offered domestic partner benefits to its employees. The company is listed as among the best places for employees, employees of color, and women to work. And in 1996, the Women in Technology International Hall of Fame inducted three IBM employees as part of its inaugural class of 10 women: Ruth Leach Amonette, the first woman to hold an executive position at IBM; Barbara Grant, PhD, first woman to be named an IBM site general manager; and Linda Sanford, the highest-placed technical woman in IBM. Fran Allen – a software pioneer for her innovative work in compilers over the decades – was inducted in 1997.
In 1998, IBM merged the enterprise-oriented Personal Systems Group of the IBM PC Co. into IBM's own Global Services personal computer consulting and customer service division. The resulting merged business units then became known simply as IBM Personal Systems Group. A year later, IBM stopped selling their computers at retail outlets after their market share in this sector had fallen considerably behind competitors Compaq and Dell. Immediately afterwards, the IBM PC Co. was dissolved and merged into IBM Personal Systems Group.
Gerstner retired at the end of 2002, and was replaced by long-time IBMer Samuel J. Palmisano.
In 2005, the company sold all of its personal computer business to Chinese technology company Lenovo and, in 2009, it acquired software company SPSS Inc. Later in 2009, IBM's Blue Gene supercomputing program was awarded the National Medal of Technology and Innovation by U.S. President Barack Obama. In 2011, IBM gained worldwide attention for its artificial intelligence program Watson, which was exhibited on Jeopardy! where it won against game-show champions Ken Jennings and Brad Rutter. The company also celebrated its 100th anniversary in the same year on June 16. In 2012, IBM announced it had agreed to buy Kenexa and Texas Memory Systems, and a year later it also acquired SoftLayer Technologies, a web hosting service, in a deal worth around $2 billion. Also that year, the company designed a video surveillance system for Davao City.
In 2014, IBM announced it would sell its x86 server division to Lenovo for $2.1 billion, while continuing to offer Power ISA-based servers.
Key events
1993
IBM misreads two significant trends in the computer industry – personal computers and client-server computing – and as a result loses more than $8 billion in 1993, its third straight year of billion-dollar losses. Since 1991, the company has lost $16 billion, and many feel IBM is no longer a viable player in the industry.
Louis V. Gerstner Jr. arrives as IBM's chairman and CEO on April 1, 1993. For the first time since the arrival of Thomas J. Watson Sr. in 1914, IBM has a leader pulled from outside its ranks. Gerstner had been chairman and CEO of RJR Nabisco for four years and had previously spent 11 years as a top executive at American Express.
IBM Scalable POWERparallel system. IBM introduces the Scalable POWERparallel System, the first in a family of microprocessor-based supercomputers using RISC System/6000 technology. IBM pioneers scalable parallel system technology of joining smaller, mass-produced computer processors rather than relying on one larger, custom-designed processor. Complex queries could then be broken down into a series of smaller jobs that are run concurrently ("in parallel") to speed their completion.
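The scatter-gather idea described above generalizes well beyond the SP hardware. The following is a minimal, hypothetical Python sketch of that general technique – splitting one large job into chunks, running them concurrently, and combining the partial results – not IBM's actual SP software or query engine.

```python
# Toy scatter-gather sketch: split a big job into chunks, run the chunks
# concurrently, and combine ("gather") the partial results.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker ("node") handles its own slice of the data.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Scatter: divide the data into roughly one chunk per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Gather: run chunks concurrently, then combine the partial sums.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))  # 499999500000
```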
1994
IBM reports a profit for the year, its first since 1990. Over the next few years, the company focuses less on its traditional strengths in hardware, and more on services, software, and its ability to craft technology solutions.
IBM RAMAC Array Storage Family. With features like highly parallel processing, multi-level cache, RAID 5, and redundant components, RAMAC advances information storage technology. The family, consisting of the RAMAC Array Direct Access Storage Device (DASD) and the RAMAC Array Subsystem, ships almost 2,000 systems to customers in its first three months of availability.
Speech recognition. IBM releases the IBM Personal Dictation System (IPDS), the first wave of speech recognition products for the personal computer. It is later renamed VoiceType, and its capabilities are expanded to include control of computer applications and desktops simply by talking to them, without touching a keyboard. In 1997 IBM announces ViaVoice Gold, software that provides a hands-free way to dictate text and navigate the desktop using natural, continuous speech.
1995
Lotus Development Corporation acquisition. IBM acquires the outstanding shares of the Lotus Development Corporation, whose Notes software improves collaboration across an enterprise and whose acquisition makes IBM the world's largest software company.
Glueball calculation. IBM scientists complete a two-year calculation – the largest single numerical calculation in the history of computing – to pin down the properties of an elusive elementary particle called a "glueball". The calculation was carried out on GF11, a massively parallel computer at the IBM Thomas J. Watson Research Center.
1996
IBM Austin Research Laboratory opens. Based in Austin, Texas, the lab is focused on advanced circuit design as well as new design techniques and tools for very high performance microprocessors.
Atlanta Olympics. IBM suffers a highly public embarrassment when its IT support of the Olympic Games in Atlanta experiences technical difficulties.
Domestic partner benefits. IBM announces Domestic Partner Benefits for gay and lesbian employees.
1997
Deep Blue. The 32-node IBM RS/6000 SP supercomputer, Deep Blue, defeats World Chess Champion Garry Kasparov in the first known instance of a computer beating a reigning world champion chess player in a tournament-style competition.
e-business. IBM coins the term and defines an enormous new industry by using the Internet as a medium for real business and institutional transformation. e-business becomes synonymous with doing business in the Internet age.
1998
CMOS Gigaprocessor. IBM unveils the first microprocessor that runs at 1 billion cycles per second. IBM scientists develop new silicon-on-insulator chips to be used in the construction of a mainstream processor. The breakthrough ushers in new circuit designs and product groups.
1999
Blue Gene. IBM Research starts a computer architecture cooperative project with the Lawrence Livermore National Laboratory, the United States Department of Energy (which is partially funding the project), and academia to build new supercomputers capable of more than one quadrillion operations per second (one petaflop). Nicknamed "Blue Gene", the new supercomputers perform 500 times faster than other powerful supercomputers and can simulate the folding of complex proteins.
2000
Quantum mirage nanotechnology. IBM scientists discover a way to transport information on the atomic scale using electrons instead of conventional wiring. This new phenomenon, called the quantum mirage effect, may enable data transfer within future nanoscale electronic circuits too small to use wires. The quantum mirage technique can send information through solid forms and could do away with wiring that connects nanocircuit components.
IBM ASCI White – Fastest supercomputer. IBM delivers the world's most powerful computer to the US Department of Energy, powerful enough to process an Internet transaction for every person on Earth in less than a minute. IBM built the supercomputer to test the safety and effectiveness of the nation's aging nuclear weapons stockpile. This computer is 1,000 times more powerful than Deep Blue, the supercomputer that beat Garry Kasparov in chess in 1997.
Flexible transistors. IBM creates flexible transistors, combining organic and inorganic materials as a medium for semiconductors. By eliminating the limitations of etching computer circuits in silicon, flexible transistors make it possible to create a new generation of inexpensive computer displays that can be embedded into curved plastic or other materials.
Sydney Olympics. After its successful engagement at the 2000 Olympic Games in Sydney, IBM ends its 40-year technology partnership with the International Olympic Committee.
2001
The book IBM and the Holocaust, written by Edwin Black, is released. The book accuses IBM of having knowingly assisted Nazi authorities in the perpetration of the Holocaust through the provision of tabulating products and services. Several lawsuits are later filed against IBM by Holocaust victims seeking restitution for their suffering and losses. All lawsuits related to this issue were eventually dropped without recovery.
Carbon nanotube transistors. IBM researchers build the world's first transistors out of carbon nanotubes – tiny cylinders of carbon atoms that are 500 times smaller than silicon-based transistors and 1,000 times stronger than steel. The breakthrough is thought to be an important step in finding materials that can be used to build computer chips when silicon-based chips can't be made smaller.
Low power initiative. IBM launches its low-power initiative to improve the energy efficiency of IT and accelerates the development of ultra-low power components and power-efficient servers, storage systems, personal computers and ThinkPad notebook computers.
Greater density & chip speeds. IBM is first to mass-produce computer hard disk drives using a revolutionary new type of magnetic coating – "pixie dust" – that eventually quadruples data density of current hard disk drive products. IBM also unveils "strained silicon", a breakthrough that alters silicon to boost chip speeds by up to 35 percent.
2002
IBM's hard disk drive business is sold to Hitachi.
2003
Blue Gene/L. The Blue Gene team unveils a prototype of its Blue Gene/L computer, roughly the size of a standard dishwasher, that ranks as the 73rd most powerful supercomputer in the world. This cubic-meter machine is a small-scale model of the full Blue Gene/L built for the Lawrence Livermore National Laboratory in California, which will be 128 times larger when it is unveiled two years later.
2005
Crusade Against Cancer. IBM joins forces with Memorial Sloan-Kettering Cancer Center (MSKCC), the Molecular Profiling Institute and the CHU Sainte-Justine Research Center to collaborate on cancer research by building state-of-the-art integrated information management systems.
Acquisition of the IBM PC business by Lenovo. The low-margin PC division (including ThinkPads) is sold to Chinese manufacturer Lenovo.
2006
Translation software. IBM delivers an advanced speech-to-speech translation system to U.S. forces in Iraq using bidirectional English to Arabic translation software that improves communication between military personnel and Iraqi forces and citizens. The software helps offset the shortage of military linguists.
2007
Renewable energy. IBM is recognized by the US EPA for its green power purchases in the US and for its support and participation in EPA's Fortune 500 Green Power Challenge. IBM ranked 12th on the EPA's list of Green Power Partners for 2007. IBM purchased enough renewable energy in 2007 to meet 4% of its US electricity use and 9% of its global electricity purchases. IBM's commitment to green power helps cut greenhouse gas emissions.
River watch using IBM Stream Computing. In a unique collaboration, The Beacon Institute and IBM created the first technology-based river monitoring network. The River and Estuary Observatory Network (REON) allows for minute-to-minute monitoring of New York's Hudson River via an integrated network of sensors, robotics and computational technology. This project is made possible by IBM's "Stream Computing", a new computer architecture that can examine thousands of information sources to help scientists better understand what is happening as it happens.
IBM has been granted more US patents than any other company. From 1993 to 2007, IBM was awarded over 38,000 US patents, and it has invested about $5 billion annually in research, development, and engineering since 1996. IBM's active portfolio – about 26,000 patents in the US and over 40,000 patents worldwide – is a direct result of that investment.
2008
IBM Roadrunner No.1 Supercomputer. For the ninth consecutive time, IBM takes the No.1 ranking of the world's most powerful supercomputers with its computer built for the Roadrunner project at Los Alamos National Laboratory. It is the first in the world to operate at speeds faster than one quadrillion calculations per second and remains the world speed champion for over a year. The Los Alamos system is twice as energy-efficient as the No. 2 computer at the time, using about half the electricity to maintain the same level of computing power.
Green power. IBM opens its "greenest" data center in Boulder, Colorado. The energy-efficient facility is part of a $350 million investment by IBM to help meet customer demand for reducing energy costs. The new data center includes high-density computing systems with virtualization technology that reduce data center carbon footprint.
2011
Watson. IBM's supercomputer Watson wins on the TV quiz show Jeopardy! against champions Ken Jennings and Brad Rutter.
June 16, 2011: IBM founded 100 years ago. Mark Krantz and Jon Swartz in USA Today state "It has remained at the forefront through the decades ... the fifth-most-valuable U.S. company [today] ... demonstrated a strength shared by most 100-year-old companies: the ability to change. ...survived not only the Depression and several recessions, but technological shifts and intense competition as well."
2015
April: IBM Watson Health division created. IBM Watson Health was created largely through a series of acquisitions with the intention of using Watson in healthcare. A 2021 post from the Association for Computing Machinery (ACM) titled "What Happened To Watson Health?" described the portfolio management challenges.
2018
October 28: Red Hat acquisition for $34 billion. IBM announces its intent to acquire Red Hat for US$34 billion, in one of its largest-ever acquisitions. Red Hat will operate within IBM's Hybrid Cloud division.
2019–present
The Red Hat acquisition, completed in 2019, enabled IBM to shift its focus to future platforms, according to IBM Chief Executive Arvind Krishna.
In October 2020, IBM announced that it was splitting itself into two public companies. IBM will focus on high-margin cloud computing and artificial intelligence, built on the foundation of the 2019 Red Hat acquisition. The legacy Managed Infrastructure Services unit will be spun off into a new public company, Kyndryl, which will manage clients' IT infrastructure, serving 4,600 clients in 115 countries with a backlog of $60 billion.
This was IBM's largest divestiture so far, and was welcomed by investors.
On January 21, 2022, IBM announced that it would sell Watson Health to the private equity firm Francisco Partners.
In July 2022, IBM announced the acquisition of Databand, a data observability software developer, for an undisclosed amount. Following the acquisition, Databand employees will join IBM's data and AI division.
In December 2022, it was announced IBM had acquired the Reston-headquartered digital transformation and IT modernization services provider, Octo Consulting from Arlington Capital Partners for an undisclosed price. IBM also signed a partnership with new Japanese 2 nm process manufacturing company Rapidus.
In August 2023, IBM announced that it would sell The Weather Company to private equity firm Francisco Partners.
Twentieth-century market power and antitrust
IBM dominated the electronic data processing market for most of the 20th century, initially controlling over 70 percent of the punch card and tabulating machine market and then achieving a similar share in the computer market. IBM asserted that its successes in achieving and maintaining such market share were due to its skill, industry and foresight; governments and competitors asserted that the maintenance of such large shares was at least in part due to anti-competitive acts such as unfair prices, terms and conditions, tying, product manipulations and creating FUD (Fear, Uncertainty and Doubt) related to its competitors, in the marketplace. IBM was thus the defendant in more than twenty government and private antitrust actions during the 20th century. IBM lost only one of these matters but did settle others in ways that profoundly shaped the industry as summarized below. By the end of the 20th century, IBM was no longer so dominant in the computer industry. Some observers suggest management's attention to the many antitrust lawsuits of the 1970s was at least in part responsible for its decline.
1936 Consent Decree
In 1932, U.S. Government prosecutors charged that IBM's practice of requiring customers who leased its tabulating equipment to purchase the punched cards used on such equipment constituted anti-competitive tying. IBM lost the lawsuit, and in the resulting 1936 consent decree agreed to no longer require the use of IBM cards only and to assist alternative suppliers of cards in starting production facilities that would compete with IBM's, thereby creating a separate market for punched cards and, in effect, for subsequent computer supplies such as magnetic tapes and disk packs.
1956 Consent Decree
On January 21, 1952, the U.S. Government filed a lawsuit which resulted in a consent decree entered as a final judgment on January 25, 1956. The government's goal to increase competition in the data processing industry was effected through several provisions in the decree:
IBM was required to sell equipment on terms that would not place purchasers at a disadvantage with respect to customers leasing the same equipment from IBM. Prior to this decree, IBM had only rented its equipment. This created markets both for used IBM equipment and enabled lease financing of IBM equipment by third parties (leasing companies).
IBM was required to provide parts and information to independent maintainers of purchased IBM equipment, enabling and creating a demand for such hardware maintenance services.
IBM was required to sell data processing services through a subsidiary that could be treated no differently than any company independent of IBM, enabling competition in the data processing services business.
IBM was required to grant non-exclusive, non-transferable, worldwide licenses for any and all patents at reasonable royalty rates to anyone, provided the licensee cross-licensed its patents to IBM on similar terms. This removed IBM patents as a barrier to competition in the data processing industry and enabled the emergence of manufacturers of equipment plug compatible to IBM equipment.
While the decree did little to limit IBM's future dominance of the then-nascent computer industry, it did enable competition in segments such as leasing, services, maintenance, and equipment attachable to IBM systems and reduced barriers to entry through mandatory reasonable patent cross-licensing.
The decree's terms remained in effect until 1996, when they began to be phased out over the following five years.
1968–1984 Multiple government and private antitrust complaints
In 1968 the first of a series of antitrust suits against IBM was filed by Control Data Corp (CDC). It was followed in 1969 by the US government's antitrust complaint, then by 19 private US antitrust complaints and one European complaint. In the end IBM settled a few of these matters but mainly won. The US government's case, sustained by four US presidents and their attorneys general, was dropped as "without merit" in 1982 by William Baxter, President Reagan's Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice.
1968–1973 Control Data Corp. v. IBM
CDC filed an antitrust lawsuit against IBM in Minnesota's federal court, alleging that IBM had monopolized the market for computers in violation of Section 2 of the Sherman Antitrust Act by, among other things, announcing products it could not deliver. A 1965 internal memo by an IBM attorney noted that Control Data had publicly blamed its declining earnings on IBM "and its frequent model and price changes. There was some sentiment that the charges were true." In 1973 IBM settled the CDC case for about $80 million in cash and the transfer of assets, including the IBM Service Bureau Corporation, to CDC.
1969–1982 U.S. v. IBM
On January 17, 1969, the United States of America filed a complaint in the United States District Court for the Southern District of New York, alleging that IBM violated Section 2 of the Sherman Antitrust Act by monopolizing or attempting to monopolize the general-purpose electronic digital computer system market, specifically computers designed primarily for business. Subsequently, the US government alleged IBM violated the antitrust laws in IBM's actions directed against leasing companies and plug-compatible peripheral manufacturers.
In June 1969 IBM unbundled its software and services, a move many observers believed was made in anticipation of, and as a direct result of, the 1969 US antitrust lawsuit. Overnight, a competitive software market was created.
Among the major violations asserted were:
Anticompetitive price discrimination, such as giving away software services.
Bundling of software with "related computer hardware equipment" for a single price.
Predatorily pricing and preannouncing specific hardware "fighting machines".
Developing and announcing specific hardware products primarily to discourage customers from acquiring competing products.
Announcing certain future products knowing that it was unlikely to be able to ship them within the announced time frame.
Engaging in below-cost and discount conduct in selected markets in order to injure peripheral manufacturers and leasing companies.
It was in some ways one of the great single-firm monopoly cases of all time. IBM produced 30 million pages of materials during discovery and submitted its executives to a series of pretrial depositions. Trial began six years after the complaint was filed, and the parties then battled in court for another six years. The trial transcript contains over 104,400 pages, with thousands of documents placed in the record. The case ended on January 8, 1982, when William Baxter, then the Assistant Attorney General in charge of the Antitrust Division of the Department of Justice, dropped it as "without merit".
1969–1981 Private antitrust lawsuits
The U.S. government's 1969 antitrust lawsuit was followed by about 18 private antitrust complaints, all but one of which IBM ultimately won. Some notable lawsuits include:
Greyhound Computer Corp.
Greyhound, a leasing company, filed a case under Illinois' state antitrust law in Illinois state court. This case went to trial in federal court in 1972 in Arizona with a directed verdict for IBM on the antitrust claims; however, the court of appeals in 1977 reversed the decision. Just before the retrial was to start in January 1981, IBM and Greyhound settled the case for $17.7 million.
Telex Corp.
Telex, a peripherals equipment manufacturer, filed suit on January 21, 1972, charging that IBM had monopolized and had attempted to monopolize the worldwide manufacture, distribution, sales, and leasing of electronic data processing equipment, including the relevant submarket of plug-compatible peripheral devices. After a non-jury trial in 1973, IBM was found guilty of "possessing and exercising monopoly power" over the "plug-compatible peripheral equipment market", and ordered to pay triple damages of $352.5 million and other relief, including disclosure of peripheral interface specifications. Separately, Telex was found guilty of misappropriating IBM trade secrets. The judgment against IBM was overturned on appeal, and on October 4, 1975, both parties announced they were terminating their actions against each other.
Other private lawsuits
Other private lawsuits ultimately won by IBM include California Computer Products Inc., Memorex Corp., Marshall Industries, Hudson General Corp., Transamerica Corporation and Forro Precision, Inc.
1980–1984 European Union
The European Economic Community Commission on Monopolies initiated proceedings against IBM under article 86 of the Treaty of Rome for exploiting its domination of the continent's computer business and abusing its dominant market position by engaging in business practices designed to protect its position against plug-compatible manufacturers. The case was settled in 1984 with IBM agreeing to change its business practices with regard to disclosure of device interface information.
Products and technologies
See List of IBM products
Evolution of IBM's operating systems
IBM operating systems have paralleled hardware development. On early systems, operating systems represented a relatively modest level of investment, and were essentially viewed as an adjunct to the hardware. By the time of the System/360, however, operating systems had assumed a much larger role, in terms of cost, complexity, importance, and risk.
High-level languages
Early IBM computer systems, like those from many other vendors, were programmed using assembly language. Computer science efforts through the 1950s and early 1960s led to the development of many new high-level languages (HLL) for programming. IBM played a complicated role in this process. Hardware vendors were naturally concerned about the implications of portable languages that would allow customers to pick and choose among vendors without compatibility problems. IBM, in particular, helped create barriers that tended to lock customers into a single platform.
Nevertheless, IBM had a significant role in the following major computer languages:
FORTRAN – for years, the dominant language for mathematics and scientific programming
PL/I – an attempt to create a "be all and end all" language
COBOL – eventually the ubiquitous, standard language for business applications
APL – an early interactive language with a mathematical notation
PL/S – an internal systems programming language proprietary to IBM
RPG – an acronym for 'Report Program Generator', developed on the IBM 1401 to produce reports from data files. General Systems Division enhanced the language to HLL status on its midrange systems to rival COBOL.
SQL – a relational query language developed for IBM's System R; now the standard RDBMS query language
Rexx – a macro and scripting language based on PL/I syntax originally developed for Conversational Monitor System (CMS) and authored by IBM Fellow Mike Cowlishaw
IBM and AIX/UNIX/Linux/SCO
IBM developed an inconsistent relationship with the UNIX and Linux worlds. The importance of IBM's large computer business placed pressures on all of IBM's attempts to develop other lines of business. All IBM projects faced the risk of being seen as competing against company priorities. This was because, for example, if a customer decided to build an application on an RS/6000 platform, this also meant that a decision had been made against the highly profitable and entrenched mainframe platform. So despite having some excellent technology, IBM often placed itself in a compromised position.
A case in point is IBM's GFIS products for infrastructure management and GIS applications. Despite long having a dominant position in such industries as electric, gas, and water utilities, IBM stumbled in the 1990s trying to build workstation-based solutions to replace its existing mainframe-based products. Some customers moved to new technologies from other vendors; many felt betrayed by IBM.
While IBM better embraced open source technologies in the 1990s, it later became embroiled in complex litigation with the SCO Group over intellectual property rights related to the UNIX and Linux platforms.
See also
Category IBM articles
History of IBM magnetic disk drives
Notes and references
Further reading
Commentary, general histories
For more recent IBM subject books see: IBM#Further reading
Boyett, Joseph H.; Schwartz, Stephen; Osterwise, Laurence; Bauer, Roy (1993) The Quality Journey: How Winning the Baldrige Sparked the Remaking of IBM, Dutton
Engelbourg, Saul (1954) International Business Machines: A Business History, 385pp (doctoral dissertation). Reprinted by Arno, 1976
Foy, Nancy (1975) The Sun Never Sets on IBM, William Morrow, 218pp (published in UK as The IBM World)
IBM (1936) Machine Methods of Accounting. This book is constructed from 18 pamphlets, the first of which (AM-01) is Development of International Business Machines Corporation – a 12-page 1936 IBM-written history of IBM.
Malik, R. (1975) And Tomorrow the World: Inside IBM, Millington, 496pp
Mills, D. Quinn (1988) The IBM Lesson: The Profitable Art of Full Employment, Times Books, 216pp
Richardson, F.L.W. Jr.; Walker, Charles R. (1948). Human Relations in an Expanding Company. Labor and Management Center Yale University. Reprinted by Arno, 1977.
– A paperback reprint of IBM: Colossus in Transition.
Technology
For Punched card history, technology, see: Unit record equipment#Further reading
For Herman Hollerith see: Herman Hollerith#Further reading
Baker, Stephen (2012) Final Jeopardy: The Story of Watson, the Computer That Will Transform Our World, Mariner Books
Baldwin, Carliss Y; Clark, Kim B. (2000) Design Rules: The Power of Modularity, vol.1, MIT. unique perspective on the 360 (Tedlow p. 305)
Bashe, Charles J.; Pugh, Emerson W.; Johnson, Lyle R.; Palmer, John H. (1986). IBM's Early Computers. MIT Press.
Chposky, James; Leonsis, Ted (1988). Blue Magic: The People, Power, and Politics Behind the IBM Personal Computer. Facts on File.
Dell, Deborah; Purdy, J. Gerry. ThinkPad: A Different Shade of Blue. Sams.
Hsu, Feng-hsiung (2002). Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. Princeton University Press.
Kelly, Brian W. (2004) Can the AS/400 Survive IBM?, Lets Go
Killen, Michael (1988) IBM: The Making of the Common View, Harcourt Brace Jovanovich
Mills, H.D.; O'Neill, D.; Linger, R.C.; Dyer, M.; Quinnan, R.E. (1980) The Management of Software Engineering, IBM Systems Journal (SJ), Vol. 19, No. 4, 1980, pp. 414–77 http://www.research.ibm.com/journal/sj/
Pugh, Emerson W. (1995). Building IBM: Shaping an Industry and Its Technology. MIT Press.
Pugh, Emerson W.; Johnson, Lyle R.; Palmer, John H. (1991). IBM's 360 and Early 370 Systems. MIT Press.
Soltis, Frank G. (2002) Fortress Rochester: The Inside Story of the IBM iSeries, 29th Street Press
Yost, Jeffrey R. (2011) The IBM Century: Creating the IT Revolution, IEEE Computer Society
Locations – plants, labs, divisions, countries
DeLoca, Cornelius E.; Kalow, Samuel J. (1991) The Romance Division ... A Different Side of IBM, D & K Book, 223pp (history, strategy, key people in Electric Typewriter and successor Office Products Div)
France, Boyd (1961) IBM in France, Washington National Planning Assoc
Harvey, John (2008) Transition The IBM Story, Switzer (IBM IT services in Australia)
Heide, Lars (2002) National Capital in the Emergence of a Challenger to IBM in France
Jardine, Diane (ed) (2002) IBM @ 70: Blue Beneath the Southern Cross: Celebrating 70 Years of IBM in Australia, Focus
Joseph, Allan (2010) Masked Intentions: Navigating a Computer Embargo on China, Trafford, 384pp
Meredith, Suzanne; Aswad, Ed (2005) IBM in Endicott, Arcadia, 128pp
Norberg, Arthur L.; Yost, Jeffrey R. (2006) IBM Rochester: A Half Century of Innovation, IBM
Robinson, William Louis (2008) IBM's Shadow Force: The Untold Story of Federal Systems, The Secretive Giant that Safeguarded America, Thomas Max, 224pp
Biographies, memoirs
For IBM's corporate biographies of former CEOs and many others see: IBM Archives Biographies Builders reference room
Amonette, Ruth Leach (1999). Among Equals, A Memoir: The Rise of IBM's First Woman Vice President. Creative Arts Book Company.
Beardsley, Max (2001) International Business Marionettes: An IBM Executive Struggles to Regain His Sanity after a Brutal Firing, Lucky Press
Birkenstock, James W. (1999). Pioneering: On the Frontier of Electronic Data Processing, A Personal Memoir, self-published, 72pp
Lewis M. Branscomb#Books by Lewis Branscomb
Drandell, Milton (1990) IBM: The Other Side, 101 Former Employees Look Back, Quail
Charles Ranlett Flint#Bibliography
Louis V. Gerstner Jr.#References
Gould, Heywood (1971). Corporation Freak, Tower, 174pp ("hired as an audio-visual consultant by the Advanced Systems Development Division")
Herman Hollerith#Further reading
Lamassonne, Luis A. (2001). My Life With IBM. Protea.
Maisonrouge, Jacques (1985). Inside IBM: A Personal Story. McGraw Hill.
William W. Simmons#Selected publications
Ulrich Steinhilper#IBM and later life
Thomas, Charles (1993) Black and Blue: Profiles of Blacks in IBM, Atlanta Aaron, 181pp
Thomas J. Watson#Further reading
Thomas Watson Jr.#Further reading
Williamson, Gordon R. (2009) Memoirs of My Years with IBM: 1951–1986, Xlibris, 768pp
External links
IBM Archives, History of IBM
IBM at 100 – IBM reviews and reflects on its first 100 years
THINK: Our History of Progress; 1890s to 2001. IBM
Oral History with James W. Birkenstock, Charles Babbage Institute, University of Minnesota. Birkenstock was an adviser to the president and subsequently Director of Product Planning and Market Analysis at IBM. In this oral history, Birkenstock discusses the metamorphosis of the company from leader of the tabulating machine industry to leader of the data processing industry. He describes his involvement with magnetic tape development in 1947, the involvement of IBM in the Korean War, the development of the IBM 701 computer (known internally as the Defense Calculator), and the emergence of magnetic-core memory from the SAGE project. He then recounts the entry of IBM into the commercial computer market with the IBM 702. The end of the interview concerns IBM's relationship with other early entrants in the international computer industry, including litigation with Sperry Rand, its cross-licensing agreements, and cooperation with Japanese electronics firms.
IBM Archives
Biographies
Builders reference room
History
History of computer companies
History of computing hardware
History of companies of the United States | History of IBM | [
"Technology"
] | 23,468 | [
"History of computer companies",
"History of computing hardware",
"History of computing"
] |
7,283,344 | https://en.wikipedia.org/wiki/Electrochemical%20window | The electrochemical window (EW) of a substance is the electrode electric potential range within which the substance is neither oxidized nor reduced. The EW is one of the most important characteristics to be identified for solvents and electrolytes used in electrochemical applications. The term EW is commonly used to refer both to the potential range itself and to its width, the potential difference, which is calculated by subtracting the reduction potential (cathodic limit) from the oxidation potential (anodic limit).
When the substance of interest is water, it is often referred to as the water window.
This range is important for the efficiency of an electrode. Outside this range, the electrodes react with the electrolyte instead of driving the intended electrochemical reaction.
In principle, ammonia has an extremely small electrochemical window, but thermodynamically favored reactions less than 1 V outside the window are very slow. Consequently, the electrochemical window for many practical reactions is much larger, comparable to water. Ionic liquids famously have a very large electrochemical window, about 4–5 V.
The importance of electrochemical window (EW) in organic batteries
The electrochemical window (EW) is an important concept in organic electrosynthesis and in the design of batteries, especially organic batteries. This is because at higher voltages (greater than 4.0 V) organic electrolytes decompose and interfere with the oxidation and reduction of the organic cathode/anode materials. For this reason, the best organic electrolytes are characterized by a wide electrochemical window, i.e., one greater than the working range of the battery cell voltage. For example, the electrochemical window of lithium bis(trifluoromethanesulfonyl)imide, commercially known as LiTFSI, is about 3.0 V because it can operate in the range of 1.9–4.9 V. On the other hand, electrolytes characterized by a narrow electrochemical window are prone to irreversible decomposition, which in turn causes the battery's capacity to decay during subsequent cycling.
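As a minimal worked example of the definition given at the top of this article, the window is the anodic limit minus the cathodic limit; plugging in the LiTFSI operating limits quoted above:

```latex
% Electrochemical window as the difference between the anodic (oxidation)
% and cathodic (reduction) limits, using the LiTFSI figures quoted above.
\[
  \mathrm{EW} = E_{\mathrm{anodic}} - E_{\mathrm{cathodic}}
              = 4.9\,\mathrm{V} - 1.9\,\mathrm{V} = 3.0\,\mathrm{V}
\]
```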
The electrochemical window of an organic electrolyte depends on many factors, including temperature and the molecular frontier orbitals – the LUMO (Lowest Unoccupied Molecular Orbital) and HOMO (Highest Occupied Molecular Orbital) – because the mechanisms of reduction (electron gain) and oxidation (electron loss) are governed by the gap between the HOMO and the LUMO. Solvation energy also plays an important role in defining the electrochemical window of the electrolyte.
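In the simplest frontier-orbital picture – a rough bound that deliberately ignores the solvation and interface effects this paragraph also mentions – the electrolyte's intrinsic stability window cannot exceed the HOMO–LUMO gap:

```latex
% Simplified frontier-orbital bound (e is the elementary charge); solvation
% and electrode-interface effects shift the practical limits in real cells.
\[
  e \cdot \mathrm{EW} \lesssim E_{\mathrm{LUMO}} - E_{\mathrm{HOMO}}
\]
```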
To safeguard thermodynamically stable working conditions for the electrode materials in a given electrolyte, the electrochemical potentials of the electrode materials (anode and cathode) must lie within the electrochemical stability window of the electrolyte. This condition is strict: the electrolyte may be oxidized when the electrochemical potential of the cathode material lies below the oxidation potential (HOMO) of the electrolyte, and when the electrochemical potential of the anode material lies above the reduction potential (LUMO) of the electrolyte, the electrolyte will be degraded through reduction.
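A minimal sketch of this stability condition, in the electron-energy (electrochemical potential, i.e. Fermi level) convention used above; the function name and the example energies are illustrative assumptions:

```python
# HOMO and LUMO bound the electrolyte's stability window on the energy scale.

def electrolyte_fate(mu_anode_ev: float, mu_cathode_ev: float,
                     homo_ev: float, lumo_ev: float) -> str:
    if mu_cathode_ev < homo_ev:
        # Cathode sits below the electrolyte HOMO: electrons are pulled out
        # of the electrolyte, which is oxidized.
        return "electrolyte oxidized at the cathode"
    if mu_anode_ev > lumo_ev:
        # Anode sits above the electrolyte LUMO: electrons are injected into
        # the electrolyte, which is reduced.
        return "electrolyte reduced at the anode"
    return "both electrodes lie within the stability window"

print(electrolyte_fate(mu_anode_ev=-2.0, mu_cathode_ev=-5.0,
                       homo_ev=-6.0, lumo_ev=-1.5))  # hypothetical energies, in eV
```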
Limitations of the electrochemical window
One shortcoming of the electrochemical window (EW) as a predictor of electrolyte stability towards anode or cathode materials is that it ignores the cell voltage and the ionic conductivity, which are also important.
References
Electrochemistry | Electrochemical window | [
"Chemistry"
] | 709 | [
"Electrochemistry"
] |
7,283,807 | https://en.wikipedia.org/wiki/Zhengma%20method | The Zhengma Input Method (Simplified Chinese: 郑码输入法, Traditional Chinese: 鄭碼輸入法) (also referred to as the Zheng code method) is a Chinese language input method. The primary goals of the Zhengma design are compatibility with different types of characters (the ability to input both simplified Chinese and traditional Chinese), scalability (it works well with extremely large sets of ideographs) and ease of use, especially for people who are experienced with how ideographs are formed. For these reasons this input method is used more by scholars of the Chinese language or people who need to use both traditional and simplified Chinese. It is one of two stroke-based input methods included with Microsoft Windows. (The other stroke-based method is Cangjie, which can also generate both simplified and traditional characters and which is extensively taught and used in Taiwan and Hong Kong.)
Zhengma is similar to the Wubi method, but has different stroke coding.
Under Linux, this input method is supported by the following IME packages:
fcitx
ibus
References
External links
CJK input methods
Input methods | Zhengma method | [
"Technology"
] | 228 | [
"Input methods",
"Natural language and computing"
] |
7,284,042 | https://en.wikipedia.org/wiki/Band%20cell | A band cell (also called band neutrophil, band form or stab cell) is a cell undergoing granulopoiesis, derived from a metamyelocyte, and leading to a mature granulocyte.
It is characterized by having a curved but not lobular nucleus.
The term "band cell" implies a granulocytic lineage (e.g., neutrophils).
Clinical significance
Band neutrophils are an intermediary step prior to the complete maturation of segmented neutrophils. Polymorphonuclear neutrophils are initially released from the bone marrow as band cells. As the immature neutrophils become activated or exposed to pathogens, their nucleus takes on a segmented appearance. An increase in the number of these immature neutrophils in circulation can be indicative of an infection that they are being called on to fight, or of some inflammatory process. An increase of band cells in the circulation is called bandemia and reflects a "left shift".
Blood reference ranges for neutrophilic band cells in adults are 3 to 5% of white blood cells, or up to 0.7 × 10⁹/L.
An excess may sometimes be referred to as bandemia.
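Since the differential reports bands as a percentage, the absolute band count is simply the total WBC count times the band fraction. A minimal sketch for illustration only (not clinical guidance); the helper names and the sample values are assumptions, and the limits are the reference range quoted above:

```python
REFERENCE_MAX_FRACTION = 0.05    # upper end of the 3-5% adult reference range
REFERENCE_MAX_ABSOLUTE = 0.7e9   # band cells per litre, from the text

def absolute_band_count(wbc_per_l: float, band_fraction: float) -> float:
    """Absolute band count = total WBC count x band fraction of the differential."""
    return wbc_per_l * band_fraction

def exceeds_reference(wbc_per_l: float, band_fraction: float) -> bool:
    return (band_fraction > REFERENCE_MAX_FRACTION
            or absolute_band_count(wbc_per_l, band_fraction) > REFERENCE_MAX_ABSOLUTE)

print(exceeds_reference(9.0e9, 0.10))  # True: 10% bands in a hypothetical sample
```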
See also
Pluripotential hemopoietic stem cell
Additional images
References
External links
- "Bone Marrow and Hemopoiesis: bone marrow smear, neutrophil series"
Histology at okstate.edu
Slide at hematologyatlas.com - "Neutrophil band" visible in second row
Interactive diagram at lycos.es
Histology | Band cell | [
"Chemistry"
] | 346 | [
"Histology",
"Microscopy"
] |
7,284,050 | https://en.wikipedia.org/wiki/Metamyelocyte | A metamyelocyte is a cell undergoing granulopoiesis, derived from a myelocyte, and leading to a band cell.
It is characterized by the appearance of a bent nucleus, cytoplasmic granules, and the absence of visible nucleoli. (If the nucleus is not yet bent, then it is likely a myelocyte.)
Additional images
See also
Pluripotential hemopoietic stem cell
External links
- "Bone Marrow and Hemopoiesis: bone marrow smear, neutrophilic metamyelocyte and mature PMN"
Interactive diagram at lycos.es
Slide at marist.edu
hematologyatlas.com
Histology
Leukocytes | Metamyelocyte | [
"Chemistry"
] | 153 | [
"Histology",
"Microscopy"
] |
7,284,218 | https://en.wikipedia.org/wiki/Integrated%20enterprise%20modeling | Integrated enterprise modeling (IEM) is an enterprise modeling method used for capturing and reengineering processes in manufacturing enterprises as well as in the public sector and at service providers. In integrated enterprise modeling, different aspects such as functions and data are described in one model. Furthermore, the method supports analyses of business processes independently of the existing organizational structure.
Integrated enterprise modeling was developed at the Fraunhofer Institute for Production Systems and Design Technology (German: IPK) in Berlin, Germany.
Integrated enterprise modeling topics
Base constructs
The integrated enterprise modeling (IEM) method uses an object-oriented approach and adapts it to the description of enterprises. The core of the method is an application-oriented division of all elements of an enterprise into the generic object classes "product", "order" and "resource".
Product
The object class "product" represents all objects whose production and sale are the aim of the looked-at-enterprise as well as all objects which flow into the end product. Raw materials, intermediate products, components and end products, as well as services and the describing data, are included.
Order
The object class "order" describes all types of commissioning in the enterprise. The objects of the class "order" represent the information that is relevant from the point of view of planning, control, and supervision of the enterprise processes. One understands by it what, when, at which objects, in whose responsibility and with which resources it will be executed.
Resource
The IEM class "resource" contains all necessary key players which are required in the enterprise for the execution or support of activities. Among other things, these are employees, business partner, all kinds of documents as well as information systems or operating supplies.
The classes "product", "order", and "resource" can gradually be given full particulars and specified. Through this it is possible to show both line of business typical and enterprise-specific product, order and resource subclasses. Structures (e.g. parts lists or organisation charts) can be shown as relational features of the classes with the help of being-part-of- and consists-of-relations between different subclasses.
Action
The activities which are necessary for the production of products and the provision of services can be described as follows: an activity is the purposeful change of objects. The aim orientation of the activities implies explicit or implicit planning and control. The execution of the activities is incumbent on capable key players. From these considerations, definitions can be derived for the following constructs:
An action is an object-neutral description of activities: a verbal description of a work task, an operation or a procedure;
A function describes the change of objects of a class from one defined state into another by means of an action; and
An activity specifies, for the state transformation of objects of a class described by a function, the controlling order and the resources necessary for executing this transformation in the enterprise, each represented by an object state description.
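The three constructs can likewise be sketched as data types; this is a hedged illustration (the names are ours, not IEM's notation):

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    description: str          # object-neutral verbal description of a work task

@dataclass
class Function:
    action: Action            # the action applied to the objects
    object_class: str         # e.g. "Product"
    state_before: str         # defined initial state of the objects
    state_after: str          # defined resulting state

@dataclass
class Activity:
    function: Function        # the state transformation
    order: str                # controlling order (object state description)
    resources: list = field(default_factory=list)  # resources needed to execute it
```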
Views
All modeled data of the enterprise under consideration are recorded in the model core of an Integrated Enterprise Modeling (IEM) model in two main views:
the "information model"; and
the "business process model".
All relevant objects of an enterprise, their qualities and relations are shown in the "information model". These take the form of class trees of the object classes "product", "order" and "resource". The "business process model" represents enterprise processes and their relations to each other. Activities are shown in their interaction with the objects.
Process modeling
The structuring of the enterprise processes in Integrated Enterprise Modeling (IEM) is achieved by hierarchical subdivision with the help of decomposition. Decomposition means the reduction of a system into partial systems, each of which contains components that belong together logically. Process modeling thus partitions processes into threads, each of which describes a self-contained task. The decomposition of single processes can be carried on until the threads are manageable, i.e. appropriately small. They should not be made too fine-grained, however, because a high number of detailed processes increases the complexity of a business process model. A process modeler therefore has to find a balance between the complexity of the model and the level of detail in the description of the enterprise processes. A model depth of at most three to four decomposition levels (model levels) is generally recommended.
On a model level, business process flows are represented with the aid of graphical combination elements. There are five basic types of combinations between activities (a toy encoding follows the list):
Sequential order: At a sequential order the activities are executed after each other.
Parallel branching: A parallel branching means that all branched activities have to be completed before the following activity can be started. The parallel activities need not be executed at the same time; they can also be deferred.
Case distinction: An either/or decision. The case distinction branches into alternative processes depending on defined conditions.
Uniting: The uniting marks the end of a parallel or alternative execution, or the merging of process chains.
Loop: A return (loop, cycle) is represented by means of a case distinction and a uniting. The activities included in the loop are executed as long as the continuation condition holds.
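The following toy encoding is an illustration we add here, not part of IEM or MO²GO; it represents the five combination elements as nested Python tuples and walks a small flow (all names are assumptions):

```python
def run(flow):
    kind = flow[0]
    if kind == "activity":                 # leaf: a single activity
        print("execute:", flow[1])
    elif kind == "sequence":               # sequential order
        for step in flow[1:]:
            run(step)
    elif kind == "parallel":               # parallel branching + uniting
        for branch in flow[1:]:
            run(branch)                    # branches need not run simultaneously
    elif kind == "choice":                 # case distinction + uniting
        condition, then_flow, else_flow = flow[1:]
        run(then_flow if condition() else else_flow)
    elif kind == "loop":                   # repeat while the condition holds
        condition, body = flow[1:]
        while condition():
            run(body)

attempts = iter([True, True, False])
run(("sequence",
     ("activity", "check order"),
     ("parallel", ("activity", "reserve parts"), ("activity", "schedule staff")),
     ("loop", lambda: next(attempts), ("activity", "rework"))))
```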
Modeling proceeding
The modeling procedure for representing business processes in IEM covers the following steps:
System delimitation;
Modeling;
Model evaluation and use; and
Model change.
The system delimitation is the basis of efficient modeling. Starting from a problem statement, the area of the real system to be modeled is selected and its interfaces to the environment are defined. In addition, the level of detail of the model is determined, i.e. the depth of the hierarchical decomposition relations in the "business process model" view.
The delimited real system is converted into an abstract model with the help of the IEM method. This means constructing the two main views, "information model" and "business process model". The "information model" is made by specifying the object classes "product", "order" and "resource" with their class structures as well as descriptive and relational features. The "business process model" is formed by identifying and describing functions and activities and combining them into processes. As a general rule, the "information model" is constructed first; here the modeler can draw on available reference class structures. Reference classes that do not correspond to the real system, or were not found to be relevant at the system delimitation, are deleted; missing relevant classes are inserted. Once the object base is fixed, the activities and functions are attached to the objects according to the "generic activity model" and joined into business processes with the help of combination elements. The result is a model that can be analysed and changed as required. It often happens that new relevant object classes are identified during the construction of the "business process model", so the class trees are completed step by step. The construction of the two views is therefore an iterative process.
Afterwards, weak points and improvement potentials can be identified in the course of the model evaluation. This can lead to model changes whose realization should remove the weak points and exploit the improvement potentials in the real system.
Modeling tool MO²GO
The software tool MO²GO (method for object-oriented business process optimization) supports the modeling process based on integrated enterprise modeling (IEM). Various analyses of a given model are available, for example for the planning and implementation of information systems. The MO²GO system is easily extensible and enables a rapid modeling approach.
The currently used MO²GO system consists of the following components:
MO²GO version 2.4: This component offers modeling functions for class structures and process chains, and analysis mechanisms for IEM.
MO²GO Macro editor version 2.1: The macro editor supports the creation of MO²GO macros for user-defined evaluation procedures.
MO²GO Viewer version 1.07: The Java-based and licence-free MO²GO Viewer is an easy-to-use interface for navigating the process chains of MO²GO models.
MO²GO XML converter version 1.0: IT implementation nowadays works mainly with UML diagrams. MO²GO provides a component that exports the model as an XML file which can be imported into UML tools.
MO²GO Web Publisher version 2.0: The Web Publisher is an analysis mechanism started directly from MO²GO 2.4. Its result is a process assistant generated by evaluating the model contents into a text- and hyperlink-based representation. To allow the process assistant to be adapted flexibly to user requirements, the Web Publisher contains a configuration component.
MO²GO process assistant
The IEM business process models contain much information that can be used not only by system analysts but can also be helpful to employees in their daily work. To provide this model information to the staff and to let employees share in the results of the modeling, a special tool was developed at the Fraunhofer IPK: a web-based process assistant whose contents are generated automatically from the IEM business process model of the enterprise. The process assistant provides all users with the information of the business process model in an HTML-based form over the enterprise intranet. Using it requires no special method or tool knowledge beyond basic computer and Internet experience.
The process assistant has been developed so that employees can find answers to questions quickly and precisely, e.g.:
What are the processes in the enterprise?
How are they structured?
Who is involved in a certain process, and with which responsibility?
Which documents and application systems are used?
Or also:
In which processes is a certain organisational unit involved?
Or in which processes is a certain document or application system used?
To turn the business process model into an informative process assistant, certain modeling rules must be followed. This means, for example, that the individual actions must be stored with their descriptions, the responsibilities of the organisational units must be indicated explicitly, and the paths to the documents must be entered in the class tree. Fulfilling these conditions means additional time expenditure during modeling; if they are met, all employees are able to "surf" online through an informative enterprise documentation on the intranet with the help of the process assistant. They can choose between a graphic view and a text-based description according to their preferences and methodical background. The graphic view is provided by the MO²GO Viewer, a viewer tool for MO²GO models. The process assistant and the MO²GO Viewer are connected so that the graphic representation of the process under consideration can be accessed context-sensitively from the process assistant.
Users can call up all templates, specifications and documents for a working sequence online, both from the process assistant and from the MO²GO Viewer. The process assistant can therefore be employed not only for tracing the modeling results but also in daily business, for training new employees and for supporting the execution of process steps. To improve its usability in the daily routine, the process assistant can be adapted flexibly to users' needs; this customization can concern both the layout and the content emphases of the process assistant.
Areas of application of the IEM
Knowledge is used in organisations as a resource for rendering services to customers. Services are rendered through actions that are described as processes or business processes. Analysing and improving how knowledge is handled presupposes a common understanding of this context. An explicit description of the processes is therefore required, because they provide the context for the respective knowledge contents. Process modeling is thus a powerful instrument for designing and implementing process-oriented knowledge management. The method of business process-oriented knowledge management (GPO-KM) developed at the Fraunhofer IPK draws on the "integrated enterprise modeling" (IEM) method, which makes it possible to represent, describe, analyse and design organisational processes. The IEM features few object classes and is easy to grasp and quick to apply. Furthermore, the object orientation of the IEM makes it possible to represent knowledge as an object class. For knowledge-oriented modeling of business processes according to the IEM method, the relevant knowledge contents have to be specified by knowledge domains and knowledge bearers and represented as resources in the business process model.
In further applications, IEM is used to create models across organisations (e.g. companies) to achieve a common understanding between the involved stakeholders and to derive services (create software and define the ASP). In this context the object-oriented basis of IEM has been used to create a common semantics across the single company models and to achieve compliant enterprise models (predefined classes – terminology, model templates, etc.). The reason is that the terminology used within a model has to be understandable independently of the modeling language; see also SDDEM.
See also
Business process modeling
References
Further reading
Bernus, P.; Mertins, K.; Schmidt, G. (2006). Handbook on Architectures of Information Systems (International Handbook on Information Systems). 2nd edition. Berlin: Springer.
Mertins, K.; Süssenguth, W.; Jochem, R. (1994). Modellierungsmethoden für rechnerintegrierte Produktionsprozesse. Carl Hanser Verlag, Germany.
Mertins, K.; Jochem, R. (1997). Qualitätsorientierte Gestaltung von Geschäftsprozessen. Beuth-Verlag, Berlin, Germany.
Mertins, K.; Jochem, R. (1998). MO²GO. In: Handbook on Architectures of Information Systems. Springer-Verlag, Berlin, Germany.
Mertins, K.; Jaekel, F.-W. (2006). MO²GO: User Oriented Enterprise Models for Organizational and IT Solutions. In: Bernus, P.; Mertins, K.; Schmidt, G. (eds.): Handbook on Architectures of Information Systems. 2nd edition. Springer-Verlag, Berlin.
Spur, G.; Mertins, K.; Jochem, R.; Warnecke, H. J. (1993). Integrierte Unternehmensmodellierung. Beuth Verlag GmbH, Germany.
Schwermer, M. (1998). Modellierungsvorgehen zur Planung von Geschäftsprozessen (Dissertation). FhG/IPK, Berlin, Germany.
External links
Fraunhofer Institute for Production Systems and Design Technology
Modeling tool MO²GO
Enterprise modelling
Systems engineering | Integrated enterprise modeling | [
"Engineering"
] | 3,102 | [
"Systems engineering",
"Enterprise modelling"
] |
7,284,936 | https://en.wikipedia.org/wiki/Hiroyuki%20Goto | Hiroyuki Goto recited pi from memory to 42,195 decimal places at the NHK Broadcasting Centre, Tokyo, on 18 February 1995. This set the world record at the time, which held for more than a decade until Lu Chao beat it in 2005.
He is a game designer at Namco. He is the creator of the word puzzle video game Kotoba no Puzzle Moji Pittan, which was released as an arcade game in 2001 and later became available for various console and portable gaming systems.
External links
Pi World ranking list
Moji Pittan interview (Japanese)
Pi-related people
Japanese video game designers
Keio University alumni
Designers from Tokyo
Living people
Year of birth missing (living people) | Hiroyuki Goto | [
"Mathematics"
] | 137 | [
"Pi-related people",
"Pi"
] |
7,285,133 | https://en.wikipedia.org/wiki/Lymphopoiesis | Lymphopoiesis (lĭm'fō-poi-ē'sĭs) (or lymphocytopoiesis) is the generation of lymphocytes, one of the five types of white blood cells (WBCs). It is more formally known as lymphoid hematopoiesis.
Disruption in lymphopoiesis can lead to a number of lymphoproliferative disorders, such as lymphomas and lymphoid leukemias.
Terminology
Lymphocytes are blood cells of lymphoid (rather than the myeloid or erythroid) lineage.
Lymphocytes are found in the bloodstream and originate in the bone marrow, however, they principally belong to the separate lymphatic system, which interacts with the blood circulation.
Lymphopoiesis is now usually used interchangeably with the term "lymphocytopoiesis" – the making of lymphocytes, but some sources distinguish between the two, stating that "lymphopoiesis" additionally refers to creating lymphatic tissue, while "lymphocytopoiesis" refers only to the creation of cells in that tissue. It is rare now for lymphopoiesis to refer to the creation of lymphatic tissues.
Myelopoiesis refers to the "generation of cells of the myeloid lineage" and erythropoiesis refers to the "generation of cells of the erythroid lineage", so parallel usage has evolved in which lymphopoiesis refers to the "generation of cells of the lymphoid lineage".
The two classes of WBCs in mice originate from cells with strong stem cell properties – myeloids from the common myeloid progenitor (CMP), and lymphoids from the common lymphoid progenitor (CLP). It was eventually found that these progenitors were not unique, and that the myeloid and lymphoid classes were not disjoint, but rather two partially interwoven family trees.
Function
Mature lymphocytes are a critical part of the immune system that, with the exception of memory B and T cells, have short lives measured in days or weeks and must be continuously generated throughout life by cell division and differentiation from cells such as common lymphoid progenitors (CLPs) in mice.
The set comprising CLP cells and similar progenitors are themselves descendants of the pluripotential hemopoietic stem cell (pHSC), which is capable of generating all of the cell types of the complete blood cell system. Despite their ability to generate the complete suite of lymphocytes, most progenitors are not true stem cells, and must be continually renewed by differentiation from the pHSC stem cell.
Many progenitor cells are also referred to as transit cells, sometimes also called transit amplifying cells. The term means that a transit cell may found a new sub-lineage, but the number of resultant cells is strictly limited (although possibly very large, even trillions, yet finite) and the lineage is terminated by cells that die off (by apoptosis) or remain as cells that can no longer divide. Examples of such cells are CFUs (colony-forming units – so called because of their ability to form colonies in vitro in artificial media) such as CFU-T.
Transplantation of a single pHSC cell can reconstitute a sub-lethally irradiated host (i.e. a mouse that has been irradiated so that all leukocytes are killed) with all these lineages of cells, including all types of lymphocytes via CLPs.
Lymphopoiesis continues throughout life and so progenitor cells and their parent stem cells must always be present.
Overview
In the case of mammals such as humans (Homo sapiens), lymphopoiesis begins with limited passive provision from the mother. This includes lymphocytes and immunoglobulin G that cross the placenta and enter the fetus to provide some protection against pathogens, as well as leukocytes that come from breast milk and enter circulation via the digestive tract. It is often not effective in preventing infections in the newborn.
However, early in gestation, the developing embryo has begun its own lymphopoiesis from the fetal liver. Lymphopoiesis also arises from the yolk sac. This is in contrast to the adult where all lymphocytes originate in the bone marrow.
There are four major types of lymphocytes, along with many sub-types. Scientists have identified hundreds or thousands of lymphocyte cell types, all of which are generated by normal or abnormal lymphopoiesis, except for certain artificial strains created in laboratories through the development of existing strains. Although lymphocytes are usually considered mature, as seen in blood tests, they are certainly not inert. Lymphocytes can travel around the body wherever there is a need. When such needs arise, new rounds of downstream lymphopoiesis, such as cell multiplication and differentiation, may occur, accompanied by intense mitotic and metabolic activity.
This is hardly a simple topic. In his 1976 text Immunology, Aging and Cancer immunologist and Nobel Prize winner Sir Frank Macfarlane Burnet speculated that the immune system might one day be found to be as complex as the nervous system. As the production of lymphocytes is so close to the central role of the immune response it is wise to approach the study of it with some humility in the face of the task. However, there are general principles that help in understanding.
Process
Lymphopoiesis can be viewed in a mathematical sense as a recursive process of cell division and also as a process of differentiation, measured by changes to the properties of cells.
Given that lymphocytes arise from specific types of limited stem cells – which we can call P (for Progenitor) cells – such cells can divide in several ways. These are general principles of limited stem cells.
Considering the P as the 'mother' cell, though not a true stem cell, it may divide into two new cells that are identical to each other but differ to some degree from the mother. Or the mother cell P may divide unequally into two new daughter cells, both of which differ from each other and also from the mother.
Any daughter cell will usually have new specialized abilities and if it is able to divide it will form a new sub-lineage. The difference of a daughter cell from the mother may be great, but it could also be much less, even subtle. What the P mother cell does not do is divide into two new P mother cells or a mother and a daughter; this is a matter of observation as such limited progenitor cells are known to not self-renew.
There is a sort of exception when daughter cells at some level of the lineage may divide several times to form more seemingly identical cells, but then further differentiation and division will inevitably occur, until a final stage is reached in which no further division can occur, and the cell type lineage is finally mature. An example of maturity is a plasma cell, from the B cell lineage, which produces copious antibody, but cannot divide and eventually dies after a few days or weeks.
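To make the recursion concrete, here is a purely illustrative toy in Python (not a biological model; the branch labels and the maturation depth are invented) in which a limited progenitor never self-renews and every lineage terminates in mature, non-dividing cells:

```python
import random

def divide(cell_type: str, maturity: int, max_maturity: int = 3):
    """Recursively divide until the lineage reaches terminal, mature cells."""
    if maturity == max_maturity:
        return [f"mature {cell_type}"]        # terminal: no further division
    # Two daughters, each differing from the mother (never two new mothers):
    daughters = []
    for _ in range(2):
        subtype = random.choice(["a", "b"])   # a daughter may found a sub-lineage
        daughters += divide(f"{cell_type}.{subtype}", maturity + 1, max_maturity)
    return daughters

print(divide("P", 0))  # 8 mature cells descended from one progenitor P
```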
The progenitor CLP of the mouse or the progenitor MLP of the human differentiates into lymphocytes by first becoming a lymphoblast (Medical Immunology, p.10). It then divides several more times to become a prolymphocyte that has specific cell-surface markers unique to either a (1) T cell or (2) B cell. The progenitor can also differentiate into (3) natural killer cells (NK) and (4) dendritic cells (DC).
T cells, B cells and NK cells (and all other innate lymphoid cells) are unique to the lymphocyte family, but dendritic cells are not. DCs that are identical in appearance but have different markers are spread throughout the body, and come from either lymphoid or myeloid lineages. Whether the different dendritic cell lineages have different 'tasks' or functions, and settle preferentially in different locations, is now an open question.
T and B lymphocytes are indistinguishable under the microscope. Inactive B and T cells are so featureless, with few cytoplasmic organelles and mostly inactive chromatin, that until the 1960s textbooks could describe these cells, now the central focus of immunology, as having no known function!
However, T and B lymphocytes are very distinct cell lineages and they ‘grow up’ in different places in the body. They perform quite different (although co-operative) functions in the body. No evidence has ever been found that T and B cells can ever interconvert. T and B cells are biochemically distinct and this is reflected in the differing markers and receptors they possess on their cell surfaces. This seems to be true in all vertebrates, although there are many differences in the details between the species.
Regardless of whether the CLP (mouse) or MLP or a small closely related set of progenitor cells take credit for generating the profusion of lymphocytes, the same lymphoid progenitors can still generate some cells that are clearly identifiably myeloid.
Lymphopoiesis for T cells
T cells are formed in bone marrow and then migrate to the cortex of the thymus to undergo maturation in an antigen-free environment for about one week where a mere 2–4% of the T cells succeed. The remaining 96–98% of T cells die by apoptosis and are phagocytosed by macrophages in the thymus. So many thymocytes (T cells) die during the maturation process because there is intensive screening to make sure each thymocyte can recognize self-peptide:self-MHC complex and for self-tolerance. Having experienced apoptosis, the thymocyte dies and is quickly recycled.
Upon maturity, there are several forms of thymocytes including
T-helper (needed for activation of other cells such as B cells and macrophages),
T-cytotoxic (which kills virally infected cells),
T-memory (T cells that remember antigens previously encountered), and
T-suppressor cells (which moderate the immune response of other leukocytes). Also called T-regulatory cells (Treg)
When T cells become activated they undergo a further series of developments. A small, resting T lymphocyte rapidly undergoes blastogenic transformation into a large lymphocyte (13–15 μm). This large lymphocyte (known in this context as a lymphoblast) then divides several times to produce an expanded population of medium (9–12 μm) and small lymphocytes (5–8 μm) with the same antigenic specificity. Final activated and differentiated T lymphocytes are once again morphologically indistinguishable from a small, resting lymphocyte. Thus the following developmental states may be noticed in sequence in blood tests:
Prolymphocyte
Large lymphocyte
Small lymphocyte
Basic Map of T Cell lymphopoiesis
This basic map of T cell formation is simplified, akin to textbook descriptions, and may not reflect the latest research. (Medical Immunology, p. 119)
In the thymus:
MLP → ETP → DN1 → DN2 → DN3 → DN4 → DP
Branch points along the way: DN1 → (B; Mφ); DN2 → (DC; NK); DN3 → (γδ); DP → (TNK; CD4; CD8; Treg)
In the Periphery:
(Th1; Th2)
T cell development
Unlike other lymphoid lineages, T cell development occurs almost exclusively in the thymus. T-lymphopoiesis does not occur automatically but requires signals generated from the thymus stromal cells. Several stages at which specific regulators and growth factors are required for T cell development to proceed have been defined. Later in T cell development and its maturation, these same regulatory factors again are used to influence T cell specialization.
T cells are unique among the lymphocyte populations in their ability to specialize further as mature cells. T cells come in many flavors, for example: the conventional TcRαβ T cells; the so-called unconventional TcRγδ T cells; NKT cells; and T regulatory cells (Treg). Details of the development and life cycle of the unconventional T cells are less well described compared to the conventional T cells.
Stages of T cell maturation
Stage One: Thymus Migration
Multi-potent lymphoid progenitors (MLP) enter the T cell pathway when they migrate to the thymus. The most primitive cells in the thymus are the early thymocyte progenitors (ETP), which retain all lymphoid and myeloid potential but exist only transiently, rapidly differentiating into T and NK lineages. (Medical Immunology, p. 118)
Stage Two: Proliferative Expansion and T Lineage Commitment
Final commitment to the T cell lineage occurs within the thymus microenvironment, the microscopic structures of the thymus where T cells are nurtured. The most primitive T cells retain multipotential ability and can differentiate into cells of the myeloid or lymphoid lineages (B cells, DC, T cells, or NK cells).
More differentiated double negative T cells (DN2 cells) have more limited potential but are not yet fully restricted to the T cell lineage (they can still develop into DC, T cells, or NK cells). Final commitment comes later: when thymocytes expressing Notch1 receptors engage thymus stromal cells expressing Notch1 ligands, the thymocytes become committed to the T-cell lineage. See the gallery image "Double Negatives".
With the commitment to the T cell lineage, begins a very complex process known as TCR gene rearrangement. This creates an enormous diversity of T cells bearing antigen receptors. Afterward some T cells leave the thymus to migrate to the skin and mucosae.
Stage Three: β-Selection
Stage Four: T Cell Receptors Selection
Only 2% to 3% of the differentiating thymocytes, those that express TcR capable of interaction with MHC molecules, but tolerant to self-peptides, survive the Stage Four selection process.
Stage Five: Continuing Differentiation in the Periphery
It was previously believed that the human thymus remained active as the site of T cell differentiation only until early adulthood and that later in adult life the thymus atrophies, perhaps even vanishing. Recent reports indicate that the human thymus is active throughout adult life. Thus, several factors may contribute to the supply of T cells in adult life: generation in the thymus, extra-thymic differentiation, and the fact that memory T cells are long-lived and survive for decades.
T cell types
Unconventional T cells
The thymus also gives rise to the so-called 'unconventional T cells' such as γδ T cells, natural killer T cells (NKT) and regulatory T cells (Treg).
γδ T cells
γδT cells represent only 1% to 5% of the circulating T cells but are abundant in the mucosal immune system and the skin, where they represent the dominant T cell population. These ‘non-MHC restricted T cells’ are involved in specific primary immune responses, tumor surveillance, immune regulation and wound healing.
Several differences between αβ and γδ T cell development have been described. They emigrate from the thymus in "waves" of clonal populations, which home to discrete tissues. For example, one kind is found in the peripheral blood while another predominates in the intestinal tract.
Natural Killer T cells
Human NKT cells are a unique population and are thought to play an important role in tumor immunity and immunoregulation.
T Regulatory cells
Treg cells are considered naturally occurring regulatory T cells. Tregs comprise about 5% of the circulating CD4+ T cells. These cells are thought to play an important role in preventing autoimmunity by regulating 'autoreactive' T cells in the periphery. (Medical Immunology, p. 117-122)
Lymphopoiesis for B cells
B cells are formed and mature in bone marrow (and spleen).
It is a good mnemonic aid that B cells are formed in the bone marrow, but this is a mere coincidence, since B cells were first studied in the chicken's bursa of Fabricius, and it is from this bursa that B cells get their name.
These B cells then leave the bone marrow and migrate via bloodstream and the lymph to peripheral lymphoid tissues, such as a spleen, lymph nodes, tonsils and mucosal tissues. Once in a secondary lymphoid organ the B cell can be introduced to an antigen that it is able to recognize.
Through this antigen recognition and other cell interactions the B cell becomes activated and then divides and differentiates to become a plasma cell. The plasma cell, a B cell end product, is a very active antibody-secreting cell that helps protect the body by attacking and binding to antigen.
Even after many decades of research, some controversy remains as to where B cells mature and 'complete their education', with the possibility remaining that the site may also partially be peri-intestinal lymphoid tissues.
B lymphopoiesis occurs exclusively in the bone marrow and B lymphocytes are made continuously throughout life there in a 'microenvironment' composed of stromal cells, extracellular matrix, cytokines and growth factors, which are critical for proliferation, differentiation, and survival of early lymphocyte and B-lineage precursors.
The relative proportion of precursor B cells in the bone marrow remains rather constant throughout the life span of an organism. There are stages such as Pre-B-I cells (5% to 10% of the total); Pre-B-II cells (60% to 70%) while the remaining 20% to 25% are immature B cells. Most textbooks say that B Cells mature in the bone marrow but, generally, immature B cells migrate to the spleen for 'higher education' of some sort where they go through transitional stages before final maturation. (Medical Immunology, p. 136)
B lymphocytes are identified by the presence of soluble immunoglobulin G (IgG). This is the most common protective immunoglobulin in the adult body. After antigenic stimulation, B cells differentiate into plasma cells that secrete large quantities of soluble IgG. This is the final stage of B lymphopoiesis, but it is the clincher because the plasma cells must either issue antibody close to a source of infection or disseminate it in the blood to fight an infection at a distance or in an inaccessible part of the body.
Basic map of B cell lymphopoiesis
A generally accepted map of B cell lymphopoiesis follows, in sequence and in two parts: the first in the bone marrow and the second in the spleen, where the later development steps occur in germinal centers.
In the bone marrow
Pro-B
Pre-B-I
Pre-B-II large
Pre-B-II small
Imm(ature)
In the spleen
T1
T2/T3
(Marginal Zone (MZ); B-1; B-2)
B-2 further differentiate into:
(Germinal Center (GC); Memory; Plasma)
Lymphopoiesis for NK cells
NK cells, which lack antigen-specific receptors, develop in the bone marrow. After maturation and release from the marrow they circulate in the blood throughout their lifetime seeking opportunity: to encounter, recognize and then kill abnormal cells such as cancerous or virally infected cells. Lymphocytes generally have no granules, or at least none readily visible even upon staining; NK cells are the exception. They have numerous granules, which provide their ability to kill cells, and these granules are why NK cells have an alternate name: large granular lymphocytes (LGLs).
NK cells are the only lymphocytes considered part of the innate immune system (in contrast to the adaptive immune system). Yet they are much more closely related to T cells (part of the adaptive immune system) than to other cells of the innate immune system. NK cells not only share many surface markers, functions and activities with T cells, they also arise from a common T/NK progenitor. The T/NK precursor is also believed to be the source of a subpopulation of lymphoid DC. (Medical Immunology, p. 121)
NK cells have a defining 'barcode' as CD3−, CD16+, CD56+ lymphocytes (see the Barcode section of this article). NK progenitors can be found mainly in the thymus (mouse), but the thymus is not absolutely required for NK development. NK cells can probably develop in a variety of organs, but the major site of NK cell development is not known.
In humans, the majority (85–90%) of the NK cells have a high cytolytic capacity (the ability to lyse cells). A smaller subset (10–15%) called NK 'CD56 bright' is chiefly responsible for cytokine production and has enhanced survival. Traveling to lymph nodes the 'CD56 bright' NK cells differentiate again into mature NK cells which express killer cell immunoglobulin-like receptors (KIR), natural cytotoxicity receptors (NCR), and critical adhesion molecules. (Medical Immunology, p. 122)
Lymphopoiesis for dendritic cells
The process by which CLP cells may differentiate to generate dendritic cells of lymphoid lineage is not yet well defined.
DCs are highly specialized and efficient antigen-presenting cells. Cells identical in appearance come both from a myeloid lineage (referred to as myeloid dendritic cells) and also from a lymphoid lineage (referred to as plasmacytoid dendritic cells).
The development and regulation of DC is not well-characterized. While the DC precursors have been identified in the human fetal liver, thymus, and bone marrow, during adult life DC are thought to be produced only from the bone marrow and released into the blood to wander and settle down. Overall, a large number of DC of varying types are dispatched throughout the body, especially at epithelia such as skin, to monitor invaders and nibble their antigens. (Medical Immunology, p. 122)
Comparison of killers from lymphopoiesis
Lymphocytes have a number of alarming properties such as the ability to wander around the body and take up lodging almost anywhere, and while on the way issue commands in the form of cytokines and chemokines and lymphokines, commands that affect many cell types in the body and which may also recursively induce further lymphopoiesis. One strong behavior pattern that captivates researchers and the public alike is the ability of lymphocytes to act as police, judge and executioner to kill other cells or demand that they suicide, a command that is usually obeyed. There seems to be no other sentencing option available.
Killers are distinguished from cells such as macrophages that eat other cells or munch debris by a method called phagocytosis. Killers do not use phagocytosis; they just kill and leave the clean-up to other cells.
Killers are known to attack virus-infected cells and cells that have become cancerous. Because of these abilities much research has been done into transforming these qualities into medical therapy, but progress has been slow.
Here is the parade of killers and how they work:
Cytotoxic T cells
(also called Tc, or antigen-specific cytolytic or cytotoxic T lymphocytes (CTL)). Tc kill by inducing apoptosis: they either splash their target with perforin and granzymes or use Fas–FasL interaction to command target elimination. This kills cells that are infected and display antigen.
NK cells (also called LGL (large granular lymphocytes))
These kill with exactly the same methods as Tc but have no interaction with any antigen. They select their targets based on typical molecules displayed by cells that are under stress from viral infection. NK cells are mainly in the circulation (5–15% of the circulating lymphocytes), yet are also distributed in tissues everywhere.
LAK cells (Lymphokine-activated killer) are a laboratory/clinical subset of NK Cells promoted by IL-2 to attack tumor cells.
NKT cells see Natural Killer T cell main article
Natural Killer T Cells. Human NKT cells are a unique population (which express NK cell markers such as CD56 and KIR). NKT cells are thought to play an important role in tumor immunity and immunoregulation (Medical Immunology, p. 135), yet little is known. Recent evidence suggests a role working together with hepatic stellate cells, a liver-resident antigen-presenting cell that presents lipid antigens to NKT cells and stimulates their proliferation.
Natural killer-like T cells
A heterogeneous group with ill-defined properties.
However, in summary there is no known cell or set of cells that is capable of killing cancerous cells in general.
Labeling lymphopoiesis
Because all WBCs are microscopic, colorless and often seemingly identical in appearance they are individually identified by their natural chemical markers, many of which have been analyzed and named. When two cells have the same markers, the reasonable assumption is made that the cells are identical at that time. A set of markers is colloquially described as the barcode for that cell or that cell line.
Here is an example of how a barcode can come to be, for the all-important hematopoietic stem cell (HSC) as an example.
HSCs are technically described as: lacking FMS-like tyrosine kinase 3 (Flt3) and lacking the markers specific to discrete lymphoid lineages (Lin) but expressing high levels of Sca1 and c-kit; HSC also express CD44, low levels of Thy1.1 (CD90), but no IL-7Ra or CD27.
This is called the (surface) phenotype of an HSC. It can be expressed as a set (Lin−, Sca1hi, c-kithi, CD44+, Thy1.1lo, CD27−, IL-7Rα−). This set is a 'barcode' for the HSC, akin to the barcode label attached to your chicken-wing plastic bag for checkout at a supermarket! Scientists use these barcodes to check, categorize and accumulate cells for many purposes, often using laboratory methods such as flow cytometry. These barcodes partially define the modern meaning of phenotype for leukocytes.
Progression of HSC differentiation and lineage commitment is indicated by changes in this phenotype. That is, as the cell changes, the markers will also change, and the barcode will change.
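Since a barcode is just a set of marker readings, it can be sketched as a set structure; the encoding below is our illustration (the marker strings follow the phenotype quoted above, and the MPP example follows the CD27/Flt3 acquisition described in the note below):

```python
# A cell's surface 'barcode' as a frozen set of marker readings: two cells
# with the same barcode are treated as the same cell type at that moment.

HSC = frozenset({"Lin-", "Sca1:hi", "c-kit:hi", "CD44+",
                 "Thy1.1:lo", "CD27-", "IL-7Ra-"})

def same_type(barcode_a: frozenset, barcode_b: frozenset) -> bool:
    return barcode_a == barcode_b

# Differentiation changes the barcode, e.g. a multipotent progenitor (MPP)
# acquiring Flt3 and CD27:
MPP = (HSC - {"CD27-"}) | {"CD27+", "Flt3+"}
print(same_type(HSC, MPP))  # False: the barcode changed, so the type changed
```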
Typical barcodes for some cell types appearing in this article.
Note explaining the barcode parameter details: Flt3 is a cytokine tyrosine kinase receptor thought to be important in early lymphoid development. In addition, Flt3 plays a major role in maintaining B lymphoid progenitors. CD27 plays a role in lymphoid proliferation, differentiation, and apoptosis. The acquisition of CD27 and Flt3 by the HSC coincides with the loss of long-term repopulating potential. At this stage the cells retain both lymphoid and myeloid potential and are referred to as multipotent progenitors. (Medical Immunology, p. 114)
Knowledge development regarding lymphopoiesis
New questions emerge in immunology continuously as though there were a stem cell for questions. For example, it was thought that the process of lymphopoiesis was a direct, orderly unidirectional sequence. But it is not clear if end-stage lymphocytes come from progenitors that are homogeneous populations or overlapping populations. Nor is it clear whether lineages of lymphocytes develop via a continuum of differentiation with a progressive loss of lineage options or whether abrupt events result in the acquisition of certain properties.
Changes in cytoplasm, morphology of the cell nucleus, granules, cell internal biochemistry, signaling molecules and cell surface markers are difficult to correlate with definite stages in lymphopoiesis. The morphological differences do not just correspond to steps in mitosis (somatic cell division), but result from continuous "maturation processes" of the cell nucleus, as well as of the cytoplasm and so one must not be too rigid about morphological distinctions between certain cell stages.
Models and updates on the lymphopoiesis family tree
Until recently the model of the CMP generating all myeloid cells and the CLP generating all lymphoid cells was considered necessary and sufficient to explain the known facts observed in the generation of WBCs, and it is still found in most basic textbooks. However, beginning around 2000 and gaining momentum after 2005, studies in both humans and mice noted and published new complexities. These studies matter now mainly to immunology researchers, but are likely to eventually lead to changes in medical treatments.
The changes were sparked by observations that lymphopoiesis did not always break into two lineages at the level of the CLP. Worse, some macrophages (long considered a myeloid lineage) could be generated by lymphoid lineage progenitors. In essence focus has been shifted away from the CLP to the MLP (lymphoid-specified progenitors), which are clearly lymphoid progenitors yet retain some myeloid potential, particularly the ability in both humans and mice to make macrophages – one of the most versatile of immune cell defenders – and also many dendritic cells, the best 'watchdogs' of antigen invaders.
However, whatever the details may turn out to be, the process of lymphopoiesis always seems to relentlessly give rise to progeny with special attributes and abilities – "superpowers" so to speak – but with progressively more restricted lymphoid developmental potential.
Stages of development
The old model: lymphoid vs myeloid
This model of lymphopoiesis had the virtue of relative simplicity, agreement with nomenclature and terminology, and is still essentially valid for the laboratory mouse.
pHSC pluripotent, self-renewing, hematopoietic stem cells which give rise to
MPP multipotent progenitors, which give rise to
ELP (or PRO) Prolymphocytes, early lymphoid progenitors, and finally to the
CLP Common lymphoid progenitor, a cell type fully committed to the lymphoid lineage.
pHSC, MPP and ELP cells are not fully committed to the lymphoid lineage because if one is removed to a different location it may differentiate into non-lymphoid progeny. However CLP are committed to the lymphoid lineage. The CLP is the transit cell responsible for these (generally parallel) stages of development, below:
NK cells
Dendritic cells (lymphoid lineage; DC2 )
Progenitor B cells
Pro-B cells => Early Pro (or pre-pre)-B cells => Late Pro (or pre-pre)-B cells
Large Pre-B cells => Small Pre-B cells
Immature B cells
B Cells => (B1 cells; B2 cells)
Plasma cells
Pro-T cells
T-cells
Research on new models (not mice)
By 2008 it was found that "the majority of early thymic progenitor [ETP] cells do not commit to becoming T cells by the time they get to the thymus gland. ETP cells retained the ability to become either T cells or myeloid cells."
See also:
Graphical view of the old model vs mixed lymphoid and myeloid model
General immunology reference texts
Texts in bold are the most heavily cited in this article.
Cell Communication in Nervous and Immune System; Gundelfinger, Seidenbecher, Schraven; Springer, Berlin Heidelberg New York; 2006
Color Atlas of Hematology; Theml et al.; Thieme; 2004
Dynamics of Cancer; Steven A. Frank; Princeton University Press, Princeton, New Jersey; 2007; Creative Commons Public License
Fundamental Immunology, 5th edition; William E. Paul (editor); Lippincott Williams & Wilkins Publishers; 2003
Immunobiology: The Immune System in Health and Disease, 6th edition; Janeway, Travers; Garland Science Publishing, New York; 2005
Immunology Introductory Textbook (ebook; revised 2nd edition); Nandini Shetty; New Age International (P) Limited, Publishers, India; 2005
Instant Notes in Immunology, 2nd ed.; Lydyard, Whelan, Fanger; Taylor and Francis Group; 2004
Medical Immunology, 6th edition; G. Virella (editor); Informa Healthcare USA, Inc.; 2007
Stem Cell Biology; Marshak, Gardner, Gottlieb; Cold Spring Harbor Laboratory Press; 2001
Textbook of Human Development and Histology; Zhong Cuiping et al.; Shanghai Scientific and Technical Publishers; 2006
Textbook of Medical Immunology (Immunology, 7th Edition); LIM Pak Leong; Elsevier (Singapore) Pte Ltd.; 2006
References
Additional images
Alternate views of lineages
External links
The www.copewithcytokines.de Mini-portal to Lymphopoiesis terminology
Overview at hematologica.pl
Lymphology
Lymphocytes
Hematopoiesis
Histology | Lymphopoiesis | [
"Chemistry"
] | 7,295 | [
"Histology",
"Microscopy"
] |
7,286,120 | https://en.wikipedia.org/wiki/Nasadiya%20Sukta | The Nāsadīya Sūkta (after the incipit nāsad, or "not the non-existent"), also known as the Hymn of Creation, is the 129th hymn of the 10th mandala of the Rigveda (10:129). It is concerned with cosmology and the origin of the universe. The Nāsadīya Sūkta has been the subject of extensive scholarly attention.
There are numerous translations and interpretations of the text. The Nasadiya Sukta begins with the statement: "Then, there was neither existence, nor non-existence." In a contemplative tone, it ponders when, why, and through whom the universe came into being, and provides no definite answers. Rather, it concludes that the gods too may not know, as they came after creation, and that even the surveyor of that which has been created, in the highest heaven, may or may not know. To this extent, the conventional English title Hymn of Creation is perhaps misleading, since the verse does not itself present a cosmogony or creation myth akin to those found in other religious texts, instead provoking the listener to question whether one can ever know all the details of the origins of the universe.
Interpretations
The hymn has attracted a large body of literature of commentaries both in Indian darsanas and in Western philology. The hymn, as Mandala 10 in general, is late within the Rigveda Samhita, and expresses thought more typical of later Vedantic philosophy. Even though untypical of the content of the Vedic hymns, it is one of the most widely received portions of the Rigveda. An atheist interpretation sees the Creation Hymn as one of the earliest accounts of skeptical inquiry and agnosticism. Astronomer Carl Sagan quoted it in discussing India's "tradition of skeptical questioning and unselfconscious humility before the great cosmic mysteries."
The text begins by paradoxically stating, "not the non-existent existed, nor did the existent exist then", which is paralleled in verse 2 by "then not death existed, nor the immortal". However, verse 2 already mentions that there was "breathing without breath, of its own nature, that one". In verse 3, being unfolds: "from heat (tapas) was born that one". Verse 4 mentions desire (kāma) as the primal seed, and the first poet-seers (kavayas) who "found the bond of being within non-being with their heart's thought".
Karel Werner describes the author's source for the material as one not derived from reasoning, but a "visionary, mystical or Yogic experience put into words."
Brereton (1999) argues that the reference to the sages searching for being in their spirit is central, and that the hymn's gradual procession from non-being to being in fact re-enacts creation within the listener, equating poetic utterance and creation (see śabda).
Metre
The Nasadiya Sukta consists of seven trishtubhs, although pada 7b is defective, being two syllables short:
"if he has created it; or if not [...]"
Brereton (1999) argues that the defect is a conscious device employed by the rishi to express puzzlement at the possibility that the world may not be created, parallel to the syntactic defect of pada 7d, which ends in a subordinate clause without a governing clause:
"he verily knows; or maybe he does not know [...]"
Text and translation
See also
Creation myth
Creatio ex nihilo
God in Hinduism
Hindu cosmology
Hiranyagarbha
Indian logic
List of suktas and stutis
Narayana sukta
Neti neti
Purusha Sukta
References
Sources
Further reading
P. T. Raju, The Development of Indian Thought, Journal of the History of Ideas (1952)
Karel Werner, Symbolism in the Vedas and Its Conceptualisation, Numen (1977)
Agnosticism
Atheism
Creation myths
Hindu creation myths
Hindu cosmology
Hindu philosophical concepts
Hindu philosophy
Irreligion in India
Religious atheism
Rigveda
Vedic hymns | Nasadiya Sukta | [
"Astronomy"
] | 866 | [
"Cosmogony",
"Creation myths"
] |
7,286,489 | https://en.wikipedia.org/wiki/Obligate | As an adjective, obligate means "by necessity" (antonym facultative) and is used mainly in biology in phrases such as:
Obligate aerobe, an organism that cannot survive without oxygen
Obligate anaerobe, an organism that cannot survive in the presence of oxygen
Obligate air-breather, a term used in fish physiology to describe those that respire entirely from the atmosphere
Obligate biped, an organism that can only walk on two legs
Obligate carnivore, an organism dependent for survival on a diet of animal flesh.
Obligate chimerism, a kind of organism with two distinct sets of DNA, always
Obligate hibernation, a state of inactivity in which some organisms survive conditions of insufficiently available resources.
Obligate intracellular parasite, a parasitic microorganism that cannot reproduce without entering a suitable host cell
Obligate parasite, a parasite that cannot reproduce without exploiting a suitable host
Obligate photoperiodic plant, a plant that requires sufficiently long or short nights before it initiates flowering, germination or similar functions
Obligate symbionts, organisms that can only live together in a symbiosis
See also
Opportunism (biological)
Biology terminology | Obligate | [
"Biology"
] | 266 | [
"nan"
] |
7,286,491 | https://en.wikipedia.org/wiki/Lamellibrachia | Lamellibrachia is a genus of tube worms related to the giant tube worm, Riftia pachyptila. They live at deep-sea cold seeps where hydrocarbons (oil and methane) leak out of the seafloor, and are entirely reliant on internal, sulfide-oxidizing bacterial symbionts for their nutrition. The symbionts, gammaproteobacteria, require sulfide and inorganic carbon (carbon dioxide). The tube worms extract dissolved oxygen and hydrogen sulfide from the sea water with the crown of plumes. Species living near seeps can also obtain sulfide through their "roots", posterior extensions of their body and tube. Several sorts of hemoglobin are present in the blood and coelomic fluid to bind to the different components and transport them to the symbionts.
L. luymesi provides the bacteria with hydrogen sulfide and oxygen by taking them up from the environment and binding them to a specialized hemoglobin molecule. Unlike the tube worms that live at hydrothermal vents, L. luymesi uses a posterior extension of its body called the root to take up hydrogen sulfide from the seep sediments. L. luymesi may also help fuel the generation of sulfide by excreting sulfate through its root into the sediments below the aggregations.
The best-known seeps where L. luymesi lives are in the northern Gulf of Mexico from 500 to 800 m depth. This tube worm can reach lengths over 3 m (10 ft), and grows very slowly, with individuals living to be over 250 years old. It forms a biodiverse habitat by creating large aggregations of hundreds to thousands of individuals. Living in these aggregations are over 100 different species of animals, many of which are found only at these depths.
While most species of vestimentiferan tubeworms live in deep waters below the photic zone, L. satsuma was discovered in Kagoshima Bay, Kagoshima at a depth of only 82 m, the shallowest depth record for a vestimentiferan.
Species
The following species are included in this genus:
Lamellibrachia anaximandri Southward, Andersen & Hourdez, 2011
Lamellibrachia barhami Webb, 1969
Lamellibrachia columna Southward, 1991
Lamellibrachia donwalshi McCowin & Rouse, 2018
Lamellibrachia juni Miura & Kojima, 2006
Lamellibrachia luymesi van der Land & Nørrevang, 1975
Lamellibrachia satsuma Miura, 1997
Lamellibrachia victori Mañe-Garzon & Montero, 1985
References
Sabellida
Chemosynthetic symbiosis
Polychaete genera | Lamellibrachia | [
"Biology"
] | 572 | [
"Biological interactions",
"Chemosynthetic symbiosis",
"Behavior",
"Symbiosis"
] |
7,286,863 | https://en.wikipedia.org/wiki/Pneumocystis%20pneumonia | Pneumocystis pneumonia (PCP), also known as Pneumocystis jirovecii pneumonia (PJP), is a form of pneumonia that is caused by the yeast-like fungus Pneumocystis jirovecii.
Pneumocystis specimens are commonly found in the lungs of healthy people, although the fungus is usually not a cause of disease. However, it is a source of opportunistic infection and can cause lung infections in people with a weak immune system or other predisposing health conditions. PCP is seen in people with HIV/AIDS (who account for 30–40% of PCP cases), those using medications that suppress the immune system, and people with cancer, autoimmune or inflammatory conditions, and chronic lung disease.
Signs and symptoms
Signs and symptoms may develop over several days or weeks and may include: shortness of breath and/or difficulty breathing (of gradual onset), fever, dry/non-productive cough, weight loss, night sweats, chills, and fatigue. Uncommonly, the infection may progress to involve other visceral organs (such as the liver, spleen, and kidney).
Cough – typically dry/non-productive, because sputum becomes too viscous to be coughed up. The dry cough distinguishes PCP from typical pneumonia.
Complications
Pneumothorax is a well-known complication of PCP. Also, a condition similar to acute respiratory distress syndrome (ARDS) may occur in patients with severe Pneumocystis pneumonia, and such individuals may require intubation.
Pathophysiology
The risk of PCP increases when CD4-positive T-cell levels are less than 400 cells/μL. In these immunosuppressed individuals, the manifestations of the infection are highly variable. The disease attacks the interstitial, fibrous tissue of the lungs, with marked thickening of the alveolar septa and alveoli, leading to significant hypoxia, which can be fatal if not treated aggressively. In this situation, lactate dehydrogenase levels increase and gas exchange is compromised. Oxygen is less able to diffuse into the blood, leading to hypoxia, which, along with high arterial carbon dioxide (CO2) levels, stimulates hyperventilatory effort, thereby causing dyspnea (breathlessness).
In addition, in symptomatic cases of P. jirovecii pneumonia, the overgrowth of the fungus is associated with co-infection by trichomonads, unicellular flagellated parabasalid protists (Parabasalia) of the family Trichomonadidae. These parasites (including the commensal Trichomonas tenax, Trichomonas vaginalis and Tritrichomonas foetus) exhibit an amoeboid form, without flagella, which makes them difficult to identify under the microscope. The amoeboid transformation is an argument in favor of a deleterious action, which nevertheless remains conjectural.
Diagnosis
The diagnosis can be confirmed by the characteristic appearance of the chest X-ray and an arterial oxygen level (PaO2) that is strikingly lower than would be expected from symptoms. Gallium 67 scans are also useful in the diagnosis. They are abnormal in about 90% of cases and are often positive before the chest X-ray becomes abnormal. Chest X-ray typically shows widespread pulmonary infiltrates. CT scan may show pulmonary cysts (not to be confused with the cyst-forms of the pathogen).
The diagnosis can be definitively confirmed by histological identification of the causative organism in sputum or bronchoalveolar lavage (lung rinse). Staining with toluidine blue, silver stain, periodic acid-Schiff stain, or an immunofluorescence assay shows the characteristic cysts. The cysts resemble crushed ping-pong balls and are present in aggregates of two to eight (and not to be confused with Histoplasma or Cryptococcus, which typically do not form aggregates of spores or cells). A lung biopsy would show thickened alveolar septa with fluffy eosinophilic exudate in the alveoli. Both the thickened septa and the fluffy exudate contribute to dysfunctional diffusion capacity that is characteristic of this pneumonia.
Pneumocystis infection can also be diagnosed by immunofluorescent or histochemical staining of the specimen, and more recently by molecular analysis of polymerase chain reaction products comparing DNA samples. Notably, simple molecular detection of P. jirovecii in lung fluids does not mean that a person has PCP or infection by HIV. The fungus appears to be present in healthy individuals in the general population. A blood test to detect β-D-glucan (a part of the cell wall of many different types of fungi) can also help in the diagnosis of PCP.
Prevention
In immunocompromised people, prophylaxis with co-trimoxazole (trimethoprim/sulfamethoxazole), atovaquone, or regular pentamidine inhalations may help prevent PCP.
Treatment
Antipneumocystic medication is used with concomitant steroids to avoid inflammation; without steroids, an exacerbation of symptoms typically occurs about four days after treatment begins. By far, the most commonly used medication is trimethoprim/sulfamethoxazole, but patients are often unable to tolerate this treatment due to adverse reactions, especially if they are HIV positive. Other medications that are used, alone or in combination, include pentamidine, trimetrexate, dapsone, atovaquone, primaquine, pafuramidine maleate (under investigation), and clindamycin. Treatment is usually for a period of about 21 days.
Pentamidine is less often used, as its major limitation is the high frequency of side effects. These include acute pancreatic inflammation, kidney failure, liver toxicity, decreased white blood cell count, rash, fever, and low blood sugar.
Epidemiology
Current epidemiology
The disease PCP is relatively rare in people with normal immune systems, but common among people with weakened immune systems, such as premature or severely malnourished children, the elderly, and especially persons living with HIV/AIDS (in whom it is most commonly observed). PCP can also develop in patients who are taking immunosuppressive medications. It can occur in patients who have undergone solid organ transplantation or bone marrow transplantation and after surgery. Infections with Pneumocystis pneumonia are also common in infants with hyper IgM syndrome, an X-linked or autosomal recessive trait.
The causative organism of PCP is distributed worldwide and Pneumocystis pneumonia has been described in all continents except Antarctica. More than 75% of children are seropositive by the age of four, which suggests a high background exposure to the organism. A post mortem study conducted in Chile of 96 persons who died of unrelated causes (suicide, traffic accidents, and so forth) found that 65 (68%) of them had pneumocystis in their lungs, which suggests that asymptomatic pneumocystis infection is extremely common. Up to 20% of adults may be asymptomatic carriers at any given time, and asymptomatic infection may persist for months before being cleared by an immune response.
P. jirovecii is commonly believed to be a commensal organism (dependent upon its human host for survival). The possibility of person-to-person transmission has recently gained credence, with supporting evidence coming from many different genotyping studies of P. jirovecii isolates from human lung tissue. For example, in one outbreak of 12 cases among transplant patients in Leiden, it was suggested as likely, but not proven, that human-to-human spread may have occurred.
PCP and AIDS
Since the start of the AIDS epidemic, PCP has been closely associated with AIDS. Because it only occurs in an immunocompromised host, it may be the first clue to a new AIDS diagnosis if the patient has no other reason to be immunocompromised (e.g. taking immunosuppressive drugs for organ transplant). An unusual rise in the number of PCP cases in North America, noticed when physicians began requesting large quantities of the rarely used antibiotic pentamidine, was the first clue to the existence of AIDS in the early 1980s.
Prior to the development of more effective treatments, PCP was a common and rapid cause of death in persons living with AIDS. Much of the incidence of PCP has been reduced by instituting a standard practice of using oral co-trimoxazole (Bactrim / Septra) to prevent the disease in people with CD4 counts less than 200/μL. In populations who do not have access to preventive treatment, PCP continues to be a major cause of death in AIDS.
History
The first cases of Pneumocystis pneumonia were described in premature infants in Europe following the Second World War. It was then known as plasma cellular interstitial pneumonitis of the newborn.
In the era before the existence of HIV/AIDS in humans, clinical transplant immunology, and widespread immunomodulatory therapy for autoimmune diseases, the neonatal and infantile population was the principal immunity-limited population. For example, a 1955 review article stated, "Interstitial plasma cell pneumonia is a type of infantile pneumonia, occurring chiefly in Europe." It also stated, "The etiology is unknown, but the disease acts like an infection in its epidemiology. No present-day therapeutic measures seem to be of any definite value."
Nomenclature
Both Pneumocystis pneumonia and pneumocystis pneumonia are orthographically correct; one uses the genus name per se and the other uses the common noun based on it. (This is the same reason, for example, why "group A Streptococcus" and "group A streptococcus" are both valid.) Synonyms for PCP include pneumocystosis (pneumocystis + -osis), pneumocystiasis (pneumocystis + -iasis), and interstitial plasma cell pneumonia.
The older species name Pneumocystis carinii (which now applies only to the Pneumocystis species that is found in rats) is still in common usage. As a result, Pneumocystis pneumonia (PCP) is also known as Pneumocystis jiroveci[i] pneumonia and (incorrectly) as Pneumocystis carinii pneumonia.
Regarding nomenclature, when the name of Pneumocystis pneumonia (PCP) changed from P. carinii pneumonia to P. jirovecii pneumonia, it was at first asked whether "PJP" should replace "PCP". However, because the short name "PCP" was already well established among physicians that managed patients with Pneumocystis infection, it was widely accepted that this name could continue to be used, as it could now stand for pneumocystis pneumonia.
References
External links
Animal fungal diseases
Pneumonia
HIV/AIDS
Atypical pneumonias
Fungal diseases | Pneumocystis pneumonia | [
"Biology"
] | 2,426 | [
"Fungi",
"Fungal diseases"
] |
7,287,264 | https://en.wikipedia.org/wiki/Rubylith | Rubylith is a brand of masking film, invented and trademarked by the Ulano Corporation. Today the brand has become genericized to the point that it has become synonymous with all coloured masking films.
Rubylith consists of two films sandwiched together. The bottom layer is a clear polyester backing sheet; the top layer is a translucent, red-(ruby-)coloured sheet. The top layer can be cut and peeled away from the bottom layer. The top layer's colour is light-safe for orthochromatic films (which are sensitive to blue and green light but insensitive to red light).
Rubylith is used in many areas of graphic design, typically to produce masks for various printing techniques. For example, it is often used to mask off areas of a design when using a photoresist to produce printing plates for offset lithography or gravure. It is also frequently used during screen-printing.
Ulano also produced a yellow-(amber-)coloured masking film called Amberlith that was light-safe only for blue-sensitive emulsions. They discontinued production in late 2007.
Typeface production
Letraset, and other typeface foundries that used photographic reproduction, used Rubylith cut by hand with knives to produce primary art that was then photographically reduced to make the final dry-transfer typeface products. The process is described by Freda Sack in an interview with Unit Editions.
VLSI production
Rubylith was used in the early days of semiconductor and integrated circuit manufacturing as stencils to make photomasks (reticles). The physical layouts of the first generations of Intel microprocessors were first hand-drawn on graph paper. A technician would then use a coordinatograph to precisely cut the Rubylith (laminated onto a transparent plastic such as mylar) and a knife (X-Acto) to peel the appropriate sections away while it was resting on the light table. The finished Rubylith mechanical masters were then photographically reduced (onto photographic film) by up to 100 times and then stepped and repeated onto glass plates for production use. Usually, several such masks were made and then used layer by layer.
Shortly after the 8008, Intel started using Calma's computer-aided design system that ran on a Data General minicomputer; the output masters may have stayed Rubylith for a time, but other output options became available. Bell Telephone Laboratories, for example, had a high-resolution photoplotter. The integrated circuit industry left Rubylith behind for technologies that were more efficient.
Certain digital image editing programs that have masking features may use a red overlay to designate masked areas, mimicking the use of actual Rubylith film.
Manufacturing
Intel
For about ten years beginning in 1968, engineers at Intel used manually drawn and hand-cut Rubylith masters to produce its first line of products: SRAM, DRAM, and EPROM memory. Notable chips produced using Rubylith include:
Intel 3101, first Intel product, a SRAM device
Intel 4004
Intel 8008 (née 1201)
Intel 8080
Intel 8085
Intel 8086
Zilog
Z80
MOS Technology
MOS 6502 (layout by Bill Mensch)
Notes
See also
Now the Chips Are Down
References
External links
Graphic design | Rubylith | [
"Engineering"
] | 687 | [
"Design stubs",
"Design"
] |
7,287,469 | https://en.wikipedia.org/wiki/IT%20operations%20architecture | Operations architecture allows the ongoing support and management of IT services infrastructure of an enterprise . The IT infrastructure of an enterprise will typically comprise many different systems and platforms, often in different geographic locations. Operations architecture ensures that these systems perform as expected by centrally unifying the control of operational procedures and automating the execution of operational tasks. It also reports the performance of the IT infrastructure. The implementation of an operations architecture consists of a dedicated set of tools and processes which are designed to provide centralisation and automation.
Scope and context
Operations architecture is mainly centered on back office and data center infrastructure rather than desktop computers. It supports a company’s computing infrastructure to provide IT services which are required by business processes. Any organisation with a substantial amount of automated back office processes can have an operations architecture, such as banks, factories and government agencies. The focus of operations architecture is on the day-to-day provision of IT services and it is not concerned with manufacturing IT products such as the programming of applications.
While operations architecture deals with the technical implementation of IT service management, it does not address personnel or human resource aspects. The hiring and training of suitable operators is not managed within the context of operations architecture.
The operations architecture is the technical implementation of an IT governance framework such as ITIL. Specifically, it is the solution design for IT service management. While IT service management focuses on the conceptual design of the IT service, operations architecture focuses on the technical practicalities of implementing this concept.
Geographically, operations architecture unites the control over increasingly disparate and mobile IT systems on a central operations "bridge" (so named in analogy to a ship’s command bridge).
Elements of operations architecture
Some of the most common elements of operations architecture are listed below (a minimal monitoring sketch follows the list):
Production scheduling/monitoring
System monitoring
Performance monitoring
Network monitoring
Event management
Secure file transfer
Service level agreements (SLAs)
Operating level agreements (OLAs)
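As a minimal illustration of how such elements fit together, the Python sketch below polls a service, compares its response time against an SLA threshold, and raises an event when the target is breached. The endpoint, threshold, and event hook are hypothetical placeholders, not any particular product's API:

```python
import time
import urllib.request

SLA_MAX_RESPONSE_SECONDS = 2.0  # hypothetical SLA target

def check_service(url):
    """Measure the response time of a service endpoint (performance monitoring)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10):
            pass
        return time.monotonic() - start
    except OSError:
        return None  # treat an unreachable service as an availability failure

def raise_event(message):
    """Stand-in for an event-management hook (paging, ticketing, etc.)."""
    print(f"[EVENT] {message}")

elapsed = check_service("https://example.com/health")  # placeholder endpoint
if elapsed is None:
    raise_event("service unreachable: availability target breached")
elif elapsed > SLA_MAX_RESPONSE_SECONDS:
    raise_event(f"slow response ({elapsed:.2f}s): SLA threshold exceeded")
```

In a real operations architecture such checks would run on a schedule and feed a central event-management system rather than printing to a console.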
External links
ITIL Framework for IT Service Management
Information technology management | IT operations architecture | [
"Technology"
] | 390 | [
"Information technology",
"Information technology management"
] |
7,287,845 | https://en.wikipedia.org/wiki/Hexafluorophosphate | Hexafluorophosphate is an anion with chemical formula of . It is an octahedral species that imparts no color to its salts. is isoelectronic with sulfur hexafluoride, , and the hexafluorosilicate dianion, , and hexafluoroantimonate . In this anion, phosphorus has a valence of 5. Being poorly nucleophilic, hexafluorophosphate is classified as a non-coordinating anion.
Synthesis
Hexafluorophosphate salts can be prepared by the reaction of phosphorus pentachloride and alkali or ammonium halide in a solution of hydrofluoric acid:
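A plausible balanced equation, taking potassium chloride as a representative halide (the specific salt is an assumption here), is:

$\mathrm{PCl_5 + KCl + 6\,HF \rightarrow KPF_6 + 6\,HCl}$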
Hexafluorophosphoric acid can be prepared by direct reaction of hydrogen fluoride with phosphorus pentafluoride. It is a strong Brønsted acid that is typically generated in situ immediately before its use.
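In equation form, the stoichiometry of the direct combination is simply:

$\mathrm{HF + PF_5 \rightarrow HPF_6}$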
These reactions require specialized equipment to safely handle the hazards associated with hydrofluoric acid and hydrogen fluoride.
Quantitative analysis
Several methods of quantitative analysis for the hexafluorophosphate ion have been developed. Tetraphenylarsonium chloride, [As(C6H5)4]Cl, has been used both for titrimetric and gravimetric quantifications of hexafluorophosphate. Both of these determinations depend on the formation of tetraphenylarsonium hexafluorophosphate:
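In ionic form the precipitation can be sketched as follows (spectator counterions omitted as an assumption):

$\mathrm{[As(C_6H_5)_4]^+ + PF_6^- \rightarrow [As(C_6H_5)_4][PF_6]\downarrow}$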
Hexafluorophosphate can also be determined spectrophotometrically with ferroin.
Reactions
Hydrolysis is extremely slow under basic conditions. Acid-catalyzed hydrolysis to the phosphate ion is also slow. Nonetheless, hexafluorophosphate is prone to decomposition with the release of hydrogen fluoride in ionic liquids.
Organometallic and inorganic synthesis
Hexafluorophosphate is a common counteranion for cationic metal complexes. It is one of three widely used non-coordinating anions: hexafluorophosphate (PF6−), tetrafluoroborate (BF4−), and perchlorate (ClO4−). Of these, the hexafluorophosphate ion has the least coordinating tendency.
Hexafluorophosphate salts can be prepared by reactions of silver hexafluorophosphate with halide salts. Precipitation of insoluble silver halide helps drive this reaction to completion. Since hexafluorophosphate salts are often insoluble in water but soluble in polar organic solvents, even the addition of ammonium hexafluorophosphate (NH4PF6) to aqueous solutions of many organic and inorganic salts gives solid precipitates of hexafluorophosphate salts. One example is the synthesis of rhodocenium salts.
Tetrakis(acetonitrile)copper(I) hexafluorophosphate is produced by the addition of hexafluorophosphoric acid to a suspension of copper(I) oxide in acetonitrile:
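A plausible balanced equation for this preparation is:

$\mathrm{Cu_2O + 2\,HPF_6 + 8\,CH_3CN \rightarrow 2\,[Cu(CH_3CN)_4]PF_6 + H_2O}$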
Hydrolysis of hexafluorophosphate complexes
While the hexafluorophosphate ion is generally inert and hence a suitable counterion, its solvolysis can be induced by highly electrophilic metal centers. For example, a tris(solvento) rhodium complex undergoes solvolysis when heated in acetone, forming a difluorophosphate-bridged complex.
Applications
Practical uses of the hexafluorophosphate ion typically exploit one or more of the following properties: that it is a non-coordinating anion; that hexafluorophosphate compounds are typically soluble in organic solvents, particularly polar ones, but have low solubility in aqueous solution; or, that it has a high degree of stability, including resistance to both acidic and basic hydrolysis.
Secondary batteries
The main commercial use of hexafluorophosphate is as its lithium salt, lithium hexafluorophosphate. This salt, in combination with dimethyl carbonate, is a common electrolyte in commercial secondary batteries such as lithium-ion cells. This application exploits the high solubility of hexafluorophosphate salts in organic solvents and the resistance of these salts to reduction at the alkali metal anode. Since the lithium ions in these batteries are generally present as coordination complexes within the electrolyte, the non-coordinating nature of the hexafluorophosphate ion is also a useful property for these applications.
Ionic liquids
Room temperature ionic liquids such as 1-butyl-3-methylimidazolium hexafluorophosphate (typically abbreviated as bmimPF6) have been prepared. The advantage of the anion exchange in favour of a non-coordinating anion is that the resulting ionic liquid has much greater thermal stability. 1-Butyl-3-methylimidazolium chloride decomposes to N-methylimidazole and 1-chlorobutane or to N-butylimidazole and chloromethane. Such decompositions are not possible for bmimPF6. However, thermal decompositions of hexafluorophosphate ionic liquids to generate hydrogen fluoride gas are known.
References
Non-coordinating anions | Hexafluorophosphate | [
"Chemistry"
] | 1,106 | [
"Coordination chemistry",
"Non-coordinating anions"
] |
7,287,921 | https://en.wikipedia.org/wiki/Terex | Terex Corporation is an American company and worldwide manufacturer of lifting and material-handling equipment. Terex does business in the Americas, Europe, Australia and Asia Pacific.
Corporate history
The origins of Terex date to 1933, when the Euclid Company was founded by George A. Armington to build hauling dump trucks. In 1953, General Motors purchased Euclid, expanding the business to include more than half of all U.S. off-highway dump truck sales. Due to a 1968 Justice Department ruling, GM was required to stop manufacturing and selling off-highway trucks in the United States for four years and divest the Euclid brand. GM coined the "Terex" name in 1968 from the Latin words "terra" (earth) and "rex" (king) for its construction equipment products and trucks not covered by the ruling.
General Motors sold the Terex division to the German firm IBH Holding AG, led by Horst-Dieter Esch, in 1980. After IBH Holding AG declared bankruptcy in 1983, ownership of Terex returned to General Motors and was organized as Terex Equipment Limited (Scotland), Terex do Brasil Limitada (Belo Horizonte, Brazil), and Terex USA (Hudson, Ohio).
American entrepreneur Randolph W. Lenz purchased Terex USA from GM in 1986, then exercised an option to purchase Terex Equipment Limited in 1987. In 1988, Lenz merged his primary construction equipment asset, Northwest Engineering Company, into Terex Corporation, making Terex the parent entity.
Terex Corporation was incorporated in Delaware in 1986 and listed on the New York Stock Exchange in 1991. As a publicly traded company, Terex grew from acquisitions under the leadership of Ron DeFeo, who became president in 1993 and CEO in 1995.
In 1997, Terex acquired the mining business of O&K, including the world's largest hydraulic excavator, the RH 400, later produced as the Cat 6090.
In 2010 Terex sold its mining business to Bucyrus.
In December 2013, Volvo Construction Equipment (VCE) acquired the Terex line of heavy haul trucks.
John L. Garrison, Jr., succeeded DeFeo as President and CEO in 2015 and further transformed the business through acquisitions, new-business launches, and divestitures. In January 2024, Terex named Simon A. Meester, formerly President of the company's Aerial Work Platforms business segment, as Terex president and chief executive officer.
In September 2021, VCE rebranded the heavy-haul truck business as Rokbak.
Products
Materials Processing (MP) manufactures the following:
crushers
washing systems
screens
trommels
apron feeders
material handlers
pick and carry cranes
rough terrain cranes
tower cranes
wood processing, biomass and recycling equipment
concrete mixer trucks and concrete pavers
conveyors
related components and replacement parts for the above
Customers use these products in construction, infrastructure and recycling projects, quarrying and mining applications, as well as landscaping and biomass production industries, material handling applications, maintenance applications to lift equipment or material, moving materials and equipment on rugged or uneven terrain, lifting construction material and placing material at point of use. Terex MP brands and business lines include: Terex, Powerscreen, Fuchs, EvoQuip, Canica, Cedarapids, CBI, Simplicity, Franna, Terex Ecotec, Finlay, ProAll, ZenRobotics, Terex Washing Systems, Terex MPS, Terex Jaques, Terex Advance, ProStack, Terex Bid-Well, MDS, MARCO, Green-Tec, Magna, and Terex Recycling Systems.
Aerial Work Platforms (AWP) manufactures mobile elevating platforms, utility equipment and telehandlers. Products include portable material lifts, portable aerial work platforms, trailer-mounted articulating booms, self-propelled articulating and telescopic booms, scissor lifts, Terex Utility equipment (including digger derricks and insulated aerial devices) and telehandlers, as well as replacement parts. Aerial equipment safely positions workers and materials at height, enhancing safety and productivity. Customers use these products to construct and maintain industrial, commercial, institutional and residential buildings and facilities, for construction and maintenance of transmission and distribution lines, tree trimming, certain construction and foundation drilling applications, and for other commercial operations, as well as infrastructure projects. AWP markets principally under the Terex and Genie brand names.
Acquisitions and divestitures
On October 8, 2024, Terex completed the acquisition of the Environmental Solutions Group (ESG) from Dover Corporation for $2 billion. ESG is an integrated equipment manufacturer serving the solid waste and recycling industries. Its market-leading brands include Heil, Marathon, Curotto-Can, Bayne Thinline, Parts Central, and digital solutions 3rd Eye and Soft-Pak. As of December 2024, Terex marketed under more than 30 customer-facing brands. Terex was built through a series of acquisitions, internal start-ups, and divestitures over the years. These and other actions helped to shape the current business portfolio:
Acquisitions
1999 – Powerscreen, Finlay, Simplicity, Franna
2001 – Canica, Jaques, Bid-Well, CMI Roadbuilding
2002 – Genie, Fuchs, Advance Mixer
2015 – CBI, Ecotec
2020-2023 – MDS, Steelweld, ZenRobotics, ProAll, MARCO
2024 – Environmental Solutions Group (ESG)
Divestitures
2010 – Mining Segment
2013 – Roadbuilding / Heavy hauling businesses
2017 – MHPS port handling business; construction business
2019 – Demag cranes business
Criticism
In 1992 American businessman Richard Carl Fuisz reported to the Operations Subcommittee of the House Committee on Agriculture that he witnessed the construction of military vehicles at a Terex owned facility in Scotland in 1987. Fuisz alleged that Terex employees reported that the vehicles were manufactured at the request of the CIA and British Intelligence and were destined for service within the Iraqi military. Terex denied the allegations and, in 1992, filed a libel complaint against Fuisz and Seymour M. Hersh, writer of a New York Times article covering Fuisz's allegations. After several investigations, including a 16-month-long federal task force investigation, no legal charges were filed against Terex. The New York Times, in an editor's note on 7 December 1995, said, "The article should never have suggested that Terex has ever supplied Scud missile launchers to Iraq, and The Times regrets any damage that may have resulted to Terex from any false impression the article may have caused."
References
External links
Terex Collection Hudson Library & Historical Society
Companies based in Norwalk, Connecticut
Companies listed on the New York Stock Exchange
Construction equipment manufacturers of the United States
Former General Motors subsidiaries
Manufacturing companies based in Connecticut
Manufacturing companies established in 1933
1933 establishments in Connecticut
Mining equipment companies
Companies in the S&P 400 | Terex | [
"Engineering"
] | 1,402 | [
"Mining equipment",
"Mining equipment companies"
] |
7,287,976 | https://en.wikipedia.org/wiki/BMT%20Group | BMT Group Ltd (previously British Maritime Technology), established in 1985, is an international multidisciplinary engineering, science and technology consultancy offering services particularly in the defence, security, critical infrastructure, commercial shipping, and environment sectors. The company's heritage dates to World War II. BMT's head office is at the Zig Zag Building, 70 Victoria Street Westminster, London, United Kingdom.
BMT specialises in maritime engineering design, design support, risk and contract management and provides services focused by geography, technology and/or market sector. It employs around 1,300 professionals operating from 27 offices across four continents, with primary bases in Australia, Europe, North America and Asia-Pacific.
In August 2017, Sarah Kenny OBE was appointed as Chief Executive. A marine environmental scientist by background, Kenny has worked in maritime science and technology businesses throughout her career.
Kenny recently completed her tenure as Chair of Maritime UK, a role to which she was appointed in 2021. She is also a board member of Maritime London, a Trustee Director of the National Oceanography Centre, a member of the UK Defence Innovation Advisory Panel, and of the National Shipbuilding Office SEG group.
Kenny is also an Honorary Captain of the Royal Navy, an Honorary member of the Royal Corps of Naval Constructors, and a Younger Brother of Trinity House. She was awarded an OBE in the 2019 Queen's Birthday Honours list for services to the Maritime Industry and Diversity.
BMT's annual turnover for 2023 was approximately £184.7 million.
History
Originally formed from the merger and privatisation of the UK's British Shipbuilders Research Association (BSRA) and the National Maritime Institute (NMI), it enjoyed tax-free status as a scientific research association for more than a decade.
BMT's heritage includes the water tanks where the famous Bouncing Bomb, used in the Dambusters Raid, was developed during World War II, as well as more recent advances in computer-aided design and aerodynamics.
BMT has also helped to assess the damage caused by major maritime disasters, from the Piper Alpha platform and the Herald of Free Enterprise in 1987, to the Sea Empress oil spill and the effects of Hurricane Katrina.
BMT has also conducted airflow wind tunnel testing of major landmarks and tall buildings, including the Bird's Nest Olympic Stadium in Beijing, the Stonecutters Bridge in Hong Kong; and the 21st Century Tower and Burj al-Arab in Dubai, although it no longer operates wind tunnels.
Employee Ownership
BMT Group Ltd is a company limited by guarantee with its assets held in an Employee Benefit Trust. The remit of the EBT is to ensure the long-term sustainability of the group with the employees as beneficiaries. The EBT trustees are chaired by Sue Mackenzie and include other non-executive directors from the board of BMT and a wholly independent external trustee.
Notable Defence Projects
Queen Elizabeth-Class Aircraft Carrier Design
BMT gained prominence in 2003 when the Secretary of State for Defence revealed the crucial design role of BMT Defence Services in the Future Aircraft Carriers programme. The company provided much of the design expertise within the Thales CVF Team, whose design was taken forward into the alliance with BAE Systems to create what is now the Royal Navy's Queen Elizabeth-class aircraft carrier.
Other Naval Projects
Tide Class (MARS) Royal Fleet Auxiliary Tankers
Tide Class is a fleet of four tankers built to enhance the Royal Navy's maritime capabilities. The first vessel in the class was commissioned into the Royal Fleet Auxiliary (RFA) service in 2016. The next-generation tankers are part of the Military Afloat Reach and Sustainability (MARS) project and were designed to replace the RFA's ageing fleet of single-hulled tankers.
The vessels are designed by BMT, in cooperation with BMT Reliability Consultants and BMT Isis, and are based on the AEGIR tanker. The Tide Class vessels are intended to provide logistics support and services such as transportation of fuel, fresh water, food, and weaponry to the Royal Navy's warships and vessels deployed around the world.
In addition to maintaining the Royal Navy's bulk fuel replenishment at sea capabilities, the tankers can also conduct constabulary and humanitarian aid missions, as well as provide assistance to NATO and coalition allied forces.
Aurora Engineering Delivery Partnership (EDP)
Aurora EDP is a partnership between QinetiQ, AtkinsRéalis and BMT and is the UK Ministry of Defence's primary route for procuring engineering services to ensure that necessary systems and equipment, including maintenance and spares, are available when and where they are required to meet Royal Navy and Royal Fleet Auxiliary operational demand. The partnership thus contributes to platform availability, capability and safety, supporting DE&S Ships Domain through the Master Record Data Centre (MRDC), the Ministry of Defence's core facility for ship information configuration services for the Royal Navy and Royal Fleet Auxiliary Surface Ship Fleet.
Royal Fleet Auxiliary’s Fleet Solid Support (FSS)
Fleet Solid Support Ships are the UK Royal Fleet Auxiliary's modern solid stores replenishment ships, an essential supporting element in the delivery of the Maritime Carrier Strike Group. Fleet Solid Support (FSS) will provide support ships designed to deliver munitions, supplies and provisions to the Royal Navy while at sea. They will provide logistical and operational support, including counter-piracy and counter-terrorism missions, and will collaborate with allies on operations.
In January 2023, DE&S awarded a contract worth £1.6 billion to Team Resolute, composed of Harland & Wolff, BMT and Navantia UK, to deliver three Fleet Solid Support ships to the RFA.
The construction of the ships, being built to BMT's design, will take place in both the UK and Spain. Each ship will have a core RFA crew of 101, with accommodation provided for an additional 80 personnel operating helicopters, boats, or performing other roles when required.
The ships are designed with an emphasis on minimising carbon emissions, equipped with energy-efficient technologies to decrease power consumption and are adaptable to reduce their carbon footprint by using low-carbon, non-fossil fuels, and future sustainable energy sources. They are also designed to be adaptable from the outset to achieve a Carbon Zero status by the end of their 30-year operational lifespan.
The production of the first FSS ship is expected to begin in 2025 across three shipyards and all three ships will enter service after final equipment fits and military trials, by 2032.
Team Victoria-Class
Team Victoria-Class is a partnership involving Babcock Canada, Seaspan Victoria Shipyards, and BMT, providing submarine maintenance and sustainment for the Royal Canadian Navy. Operating under the Victoria In-Service Support Contract (VISSC) since 2008, the team manages the upkeep of the Victoria-Class submarines, focusing on project management, engineering, and supply chain development. The submarines perform strategic military roles, such as coastal surveillance and joint coalition missions. The initiative also supports Canadian industry, Indigenous relations, and academic institutions.
ELMS Contract Award
BMT was awarded the Engineering, Logistics, and Management Support (ELMS) contract by the Royal Canadian Navy. The contract involves providing services for the Arctic and Offshore Patrol Ships (AOPS) and Joint Support Ships (JSS), focusing on engineering expertise, in-service support, and supply chain management. The ELMS initiative aims to enhance the operational readiness and sustainment of the Navy's vessels, ensuring long-term efficiency in fleet management.
Non-Defence Projects
BMT REMBRANDT
BMT REMBRANDT is a simulation and training tool developed by BMT for maritime pilot training, incident reconstruction, and risk assessment. It offers high-fidelity simulations of vessel operations, with a focus on navigation and manoeuvring in various environments. The system is used for training mariners, analysing real-world incidents, and assessing operational risks. It supports dynamic modelling of ships and environmental factors, providing a realistic training experience. BMT REMBRANDT is widely employed in the maritime industry for its versatility in enhancing safety and operational efficiency.
Notable BMT REMBRANDT Projects
ROC Dock Project
The ROC Dock Project is a UK maritime innovation initiative involving BMT, which integrates high-fidelity simulation with real-world operations. The project uses BMT's synthetic REMBRANDT environment to enhance maritime training, design, and testing, providing a virtual platform for evaluating vessel behaviour and port operations. By combining digital simulations with physical trials, ROC Dock aims to drive advancements in maritime safety, operational efficiency, and technology development across the UK's maritime sector.
BMT MOD Tug Training Contract
BMT secured a contract with the UK Ministry of Defence (MOD) to provide tug training for Queen Elizabeth-class aircraft carriers. The program utilises BMT REMBRANDT to deliver realistic training scenarios for tug crews managing large vessels. The training focuses on enhancing safety and operational efficiency when manoeuvring the aircraft carriers in confined spaces. The contract emphasises BMT's expertise in simulation-based maritime training, supporting the MOD's requirements for advanced, high-fidelity training solutions.
Networked Simulators for Pilot and Tug Master Training
BMT has integrated networked simulators, including BMT REMBRANDT, to facilitate joint training for marine pilots and tug masters. This approach enables realistic, scenario-based training in which participants can practise complex manoeuvres and coordinated operations. The use of networked simulation enhances safety by allowing pilots and tug operators to train together, simulating the dynamics of real-life ship handling in various conditions. This system is designed to improve operational readiness and collaboration in challenging maritime environments.
Voyage Optimisation Technology
BMT has developed digital voyage optimisation solutions aimed at improving fuel efficiency, emissions reduction, and operational costs for the maritime industry. These tools utilise real-time data and predictive analytics to assist in route planning and decision-making, ensuring safer and more efficient navigation. The technology supports compliance with environmental regulations by optimising voyages to reduce fuel consumption and greenhouse gas emissions.
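As a toy illustration only, and not BMT's actual algorithm or data, route planning of this kind can be framed as a shortest-path problem over a graph of waypoints whose edge weights represent estimated fuel burn; the Python sketch below picks the minimum-fuel route with Dijkstra's algorithm, using hypothetical waypoints and costs:

```python
import heapq

# Hypothetical waypoint graph; edge weights are estimated fuel burn (tonnes),
# which in a real system would be derived from weather and vessel models.
graph = {
    "Port A": [("Waypoint 1", 4.0), ("Waypoint 2", 6.5)],
    "Waypoint 1": [("Waypoint 3", 5.0)],
    "Waypoint 2": [("Waypoint 3", 2.0)],
    "Waypoint 3": [("Port B", 3.0)],
    "Port B": [],
}

def min_fuel_route(start, goal):
    """Dijkstra's algorithm: route with the lowest total estimated fuel burn."""
    queue = [(0.0, start, [start])]
    best = {}
    while queue:
        fuel, node, path = heapq.heappop(queue)
        if node == goal:
            return fuel, path
        if best.get(node, float("inf")) <= fuel:
            continue  # already reached this waypoint more cheaply
        best[node] = fuel
        for nxt, cost in graph[node]:
            heapq.heappush(queue, (fuel + cost, nxt, path + [nxt]))
    return None

print(min_fuel_route("Port A", "Port B"))
```

A production system would recompute such edge weights continuously from weather forecasts and vessel performance models rather than using fixed costs.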
South Devon College Marine Training Initiative
South Devon College, in collaboration with BMT, launched a marine training initiative integrating advanced technology for maritime education. The program features simulators and other digital tools to train students in navigation, vessel operations, and marine engineering. The collaboration aims to equip the next generation of maritime professionals with industry-relevant skills and support local maritime development.
DNV Approval for BMT Simulators
BMT's simulator suite and associated software, including BMT REMBRANDT, received approval from DNV, an international classification society. The recognition certifies the simulators for use in maritime training and operational assessments. The suite offers high-fidelity simulations for ship handling, navigation, and incident reconstruction, meeting industry standards for training and competency evaluation.
Innovations
BMT SPARO Payload Delivery Device
The BMT SPARO is an innovative payload delivery system developed by BMT for drones, specifically designed for challenging front-line logistics and various other demanding environments. It replaces traditional drone-mounted winches with a novel approach that allows the payload to autonomously descend on a controlled line while the drone maintains a stable hover at a higher altitude.
The BMT SPARO system includes an internally powered cable drum and integrated rotors for horizontal manoeuvrability, providing precise control over the payload's positioning and delivery. This design improves safety and operational versatility by reducing the noise and complexity typically associated with traditional winch systems. It enables drones to perform precise deliveries in situations where direct landing is not feasible, such as hostile environments, disaster relief zones, or ship-to-shore transfers.
While initially developed for defence logistics, the BMT SPARO has potential applications in other sectors, including emergency response, humanitarian aid, and commercial delivery services, where reliable, accurate, and autonomous payload delivery is required. The system's design supports operations in diverse conditions, offering a new solution for aerial logistics.
Offshore Energy Infrastructure
BMT is an established designer of Crew Transfer Vessels for the offshore wind power sector, with vessels deployed in the North American, Japanese and Taiwanese markets. In February 2024 it unveiled its first Service Operation Vessel (SOV) design, capable of being powered by methanol (potentially the e-fuel variant).
BMT is involved in earth observation for maritime markets, having been selected in February 2021 by the European Space Agency as part of the development team to assess the feasibility of applying space-based data to support the decommissioning of offshore energy assets, including oil and gas platforms and offshore wind farms.
Stampede Tension Leg Platform
BMT provided an Integrated Marine Monitoring System (IMMS) for the Stampede Tension Leg Platform (TLP) located in the U.S. Gulf of Mexico. The system is designed to monitor the platform's structural integrity, environmental conditions, and riser tension, enhancing safety and operational efficiency. The IMMS supplies real-time data to support platform management and decision-making in harsh offshore conditions, reflecting BMT's expertise in advanced monitoring solutions for the offshore energy sector.
Partnership for Subsea Monitoring
BMT and Sonardyne are collaborating to advance subsea monitoring technology with a focus on improving the accuracy and reliability of underwater asset monitoring. Their joint efforts aim to deliver a significant step-change in data quality for offshore applications, which will aid in enhancing safety, reducing operational risks, and optimising maintenance strategies. The partnership integrates BMT's expertise in monitoring solutions with Sonardyne's advanced subsea technologies to provide comprehensive monitoring systems for offshore platforms and other underwater assets.
Marine Monitoring for BP in the Gulf of Mexico
BMT has been contracted by BP to provide marine monitoring services for its Gulf of Mexico operations. The scope of the project includes monitoring the environmental conditions, structural health of offshore assets, and key safety indicators. BMT's systems will deliver real-time insights that help ensure operational safety and support compliance with environmental regulations. The contract represents BMT's continued partnership with BP in enhancing the safety and sustainability of offshore oil and gas activities.
Mad Dog Phase 2 Floating Production System
BMT has been awarded a contract to provide monitoring systems for the Mad Dog Phase 2 Floating Production System (FPS) in the Gulf of Mexico. The project involves implementing systems to track structural integrity, environmental conditions, and safety performance. These monitoring capabilities aim to improve the safety and efficiency of offshore operations by offering real-time data to inform maintenance and operational decisions. This initiative highlights BMT's commitment to supporting high-risk offshore projects with advanced monitoring technologies.
StratCat35 Crew Transfer Vessel
The StratCat35 is a crew transfer vessel (CTV) developed by BMT, in collaboration with Strategic Marine, for the offshore wind industry. The vessel was unveiled at WindEnergy Hamburg and is designed to meet diverse operator requirements, with a focus on sustainability in offshore wind operations.
The StratCat35 is part of Strategic Marine's range of CTVs and measures 35 metres in length. It features an expansive deck area to improve storage capacity and operational versatility. The vessel is equipped with BMT's Z-Bow hull form, which is engineered to enhance seakeeping capabilities, vessel speed, and overall performance in challenging offshore conditions.
The StratCat35 incorporates a hybrid propulsion system aimed at reducing greenhouse gas emissions and increasing fuel efficiency. The vessel is also configured to be methanol-ready, allowing for future adaptation to alternative fuel technologies without the need for significant retrofits. The vessel includes BMT's latest active fender system, designed to facilitate safer and more efficient technician transfers in rough sea conditions. It can accommodate up to 36 passengers and 10 crew members, with facilities designed to maximise comfort during transit.
The development of the StratCat35 is part of BMT and Strategic Marine's ongoing collaboration to advance CTV technology within the offshore wind sector. The vessel's design reflects a combination of sustainability considerations and operational efficiency.
Commercial Naval Architecture
Fire and Rescue Vessel Projects
BMT, in collaboration with Penguin Shipyard International, is developing a new 38-metre fire and rescue vessel for the Singapore Civil Defence Force (SCDF). The project expands on previous vessels, the Heavy Rescue Vessel (Red Manta) and Marine Rescue Vessel (Red Dolphin), delivered in 2019. The new vessels will enhance SCDF's rapid response capabilities with advanced firefighting and rescue equipment, high-speed capabilities, and a design focused on manoeuvrability and safety. The vessels are expected to be operational by 2025.
Greenline 150 Electric Ferry
BMT, in collaboration with Greenline Marine, unveiled the Greenline 150 Passenger Electric Ferry at the Canadian Ferry Association 2024 Conference. The 32-metre vessel, designed to accommodate 150 passengers, focuses on energy efficiency and environmental sustainability. It features an optimised hull form and electric propulsion system aimed at minimising energy consumption and emissions. The design meets international environmental standards and includes safety measures for battery systems. The ferry aims to enhance passenger comfort with a quieter, smoother ride.
References
External links
1985 establishments in the United Kingdom
Companies established in 1985
Defence companies of the United Kingdom
Engineering companies of the United Kingdom
Marine engineering organizations | BMT Group | [
"Engineering"
] | 3,519 | [
"Marine engineering organizations",
"Marine engineering"
] |
7,288,183 | https://en.wikipedia.org/wiki/Special%20Metals%20Corporation | Special Metals Corporation (SMC) is an American supplier of special refractory alloys and is headquartered in New Hartford, New York, United States. The company has operations in Perth, Western Australia; Albury, New South Wales;Huntington, West Virginia; Dunkirk, New York; Burnaugh, Kentucky; Elkhart, Indiana and Hereford, England.
SMC's trademarks include Inconel, Incoloy, Monel, Nimonic, and Udimet.
History
"In 1952, a predecessor of Special Metals pioneered the melting technology that led to the practical development of the superalloys that are the critical materials used in the 'hot' section of modern jet engines."
At year end of 1996, SMC had "45 million pounds of vacuum induction melting capacity", 590 employees, was incorporated in Delaware and was managed by Don Muzyka.
SMC acquired Inco Alloys International from Inco in 1998 at the same time as it sold US$125 million of preferred stock to Titanium Metals Corporation.
In 2006, Special Metals was acquired by Precision Castparts Corp. of Portland, Oregon, US.
SMC is now ultimately owned by Berkshire Hathaway as a result of the latter company's acquisition of Precision Castparts in January 2016.
References
External links
Precision Metal Stamping
Special Metals Wiggin Official UK website
Nickel
Nickel alloys
Berkshire Hathaway
American corporate subsidiaries
Metal companies of the United States
Manufacturing companies based in West Virginia
Manufacturing companies based in New York (state)
2006 mergers and acquisitions | Special Metals Corporation | [
"Chemistry"
] | 317 | [
"Nickel alloys",
"Alloys"
] |
7,288,953 | https://en.wikipedia.org/wiki/ReaxFF | ReaxFF (for “reactive force field”) is a bond order-based force field developed by Adri van Duin, William A. Goddard, III, and co-workers at the California Institute of Technology. One of its applications is molecular dynamics simulations. Whereas traditional force fields are unable to model chemical reactions because
of the requirement of breaking and forming bonds (a force field's functional form depends on having all bonds defined
explicitly), ReaxFF eschews explicit bonds in favor of bond orders, which allows for continuous bond formation/breaking. ReaxFF aims to be as general as possible and has been parameterized and tested for hydrocarbon reactions, alkoxysilane gelation, transition-metal-catalyzed nanotube formation, and many advanced material applications such as Li ion batteries, TiO2, polymers, and high-energy materials.
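To make the bond-order idea concrete, the following Python sketch evaluates a ReaxFF-style uncorrected sigma bond order, BO' = exp[p_bo1 (r/r0)^p_bo2]. The parameter values are illustrative assumptions, not a published parameter set, and the pi contributions and subsequent overcoordination corrections of the full force field are omitted:

```python
import math

def sigma_bond_order(r, r0=1.52, p_bo1=-0.1, p_bo2=6.0):
    """ReaxFF-style uncorrected sigma bond order between two atoms.

    r            : interatomic distance (angstrom)
    r0           : equilibrium sigma-bond length (illustrative value)
    p_bo1, p_bo2 : empirical bond-order parameters (illustrative values;
                   real ReaxFF parameters are fitted against training data)
    """
    return math.exp(p_bo1 * (r / r0) ** p_bo2)

# The bond order decays smoothly from ~1 near r0 toward 0 at large
# separations, which is what lets ReaxFF form and break bonds continuously.
for r in (1.2, 1.52, 2.0, 3.0):
    print(f"r = {r:4.2f} A -> BO' = {sigma_bond_order(r):.4f}")
```

Because the expression is smooth in r, forces remain continuous as bonds form and dissociate, unlike a harmonic bond term that presumes a fixed bond list.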
To be able to deal with bond breaking and formation while having only a single atom type for each element, ReaxFF is a fairly complex force field with many parameters. An extensive training set is therefore necessary, covering the relevant chemical phase space: bond and angle stretches, activation and reaction energies, equations of state, surface energies, and much more. Usually, but not necessarily, the training data are generated with electronic structure methods. In practice, DFT calculations are often used as a pragmatic choice, especially since increasingly accurate functionals are available.
For the parameterization of such a complex force field, global optimization techniques offer the best chance of obtaining a parameter set that closely reproduces the training data.
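As a minimal sketch of that fitting step, the example below uses SciPy's differential evolution, one common global optimizer, to fit the two hypothetical bond-order parameters from the sketch above against a small synthetic training set; an actual ReaxFF parameterization involves hundreds of coupled parameters and a far larger training set:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical training data: (distance, target bond order) pairs. In a real
# parameterization the targets come from electronic structure (e.g. DFT) data.
training = np.array([[1.2, 0.98], [1.5, 0.91], [2.0, 0.60], [3.0, 0.00]])

def bond_order(r, p_bo1, p_bo2, r0=1.52):
    return np.exp(p_bo1 * (r / r0) ** p_bo2)

def loss(params):
    """Sum of squared errors of the model against the training set."""
    p_bo1, p_bo2 = params
    r, target = training[:, 0], training[:, 1]
    return np.sum((bond_order(r, p_bo1, p_bo2) - target) ** 2)

# Global search over bounded parameter ranges.
result = differential_evolution(loss, bounds=[(-1.0, -0.001), (1.0, 12.0)], seed=1)
print("fitted p_bo1, p_bo2:", result.x, "residual:", result.fun)
```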
References
External links
Adri van Duin's Website
ReaxFF in LAMMPS
ReaxFF in the Amsterdam Modeling Suite
ReaxFF in PuReMD (Purdue Reactive MD) suite
Force fields (chemistry) | ReaxFF | [
"Chemistry"
] | 364 | [
"Molecular dynamics",
"Computational chemistry",
"Force fields (chemistry)"
] |
7,289,559 | https://en.wikipedia.org/wiki/ZaMirNET | ZaMirNET (ForPeaceNET) is a Croatia-based non-governmental organisation working in the field of ICT (information and communication technology).
Roots
AdvocacyNet.org describes its formation during the turmoil in the former Yugoslavia. It says, "Electronic information became an instrument of war and peace during the collapse of Yugoslavia." Amidst the "worst crimes committed in Europe this century" the first major experiment in email was launched in June 1992 in Zagreb and Belgrade, almost exactly a year after Croatia seceded from Yugoslavia, triggering a brutal response from Serbia.
This venture received the support of the George Soros-funded Open Society Institute, and US peace activist Eric Bachman, living in Europe since 1969, together with the Dutch peace activist Wam Kat (who wrote his daily "Zagreb Diary" on ZaMir), set up an electronic network between peace groups in the region. Eric Bachman enlisted the help of FoeBuD (now digitalcourage), an organisation promoting digital communications and privacy and based in Bielefeld, Germany. In the early years, bulletin board systems in London, Austria and Bielefeld provided the shortest routes for electronic messages across the borders of the emerging Balkan states. The network was named ZaMir ("For Peace") Transnational Net.
A research work titled Documenting the impact of the community peacebuilding practices in the post-Yugoslav region as a basis for policy framework development conducted by Marina Škrabalo, an activist of the Centre for Peace Studies, Croatia, provides some details.
It says: "(The) initial steps to enable communication among emerging peace groups separated by the lines of conflict took place in October 1991, when an improvised fax relay system was set up, with the help of international solidarity organizations such as War Resisters International (WRI) and the International Fellowship of Reconciliation (IFOR) that acted as intermediaries and dispatchers of messages. A turning point was early 1992 when the Communications Aid project for the people in former Yugoslavia was launched by foreign peace groups together with the Center for the Culture of Peace and Nonviolence in Ljubljana, the Anti-War Campaign of Zagreb and the Center for Anti-war Action in Belgrade, with the objective of setting up an alternative electronic mail system (bulletin board system or BBS) that could work on poor quality telephone lines and simple computers, the only available ICT resources at that time in the war-stricken post-Yugoslav region.
"Trust link"
According to a report by FoeBuD, "The Communications Aid is not only for an exchange of letters, messages, news and ideas among the peace groups, but it is helping people from both sides of the conflict begin to communicate again with each other. (This idea was first expressed in a proposal of the International Physicians for the Prevention of Nuclear War (IPPNW) in the former Yugoslavia for a "trust link" between the conflicting sides.) It is being enlarged to enable humanitarian aid groups, NGOs (non-governmental organisations), educational institutions and others to use the network. Additionally it can, for example, provide the basis of a communication network to help refugees and displaced persons to find each other."
Marina Škrabalo's research says: "In February/March 1994, ZaMir servers were installed in Ljubljana and besieged Sarajevo, followed by the set up of the Priština server ZANA, administered by the independent newspaper Koha in October 1994. The network was considerably improved in spring 1995, when the Zagreb, Sarajevo and Belgrade servers were enlarged and a new server, with direct international telephone access was installed in Tuzla".
The impact of ZTN on the development and sustainability of the post-Yugoslav peace movement during the most intense war period from 1992 until the signing of Dayton agreement in November 1995 is considered by some to be significant.
Peace, human rights activists and journalists
This network connected, trained, and provided technical support to more than 1,700 peace, human rights and humanitarian workers and independent journalists from all the countries at war, including dozens of local and international NGOs. They used this communication channel to assist in the search for missing persons, trace relatives stuck in war zones, plan joint peace-building projects and political campaigns, send out independent news reports, and access more than 150 regional and international news conferences.
Two international volunteers, Kathryn Turnipseed and Cecilia Hansen, under the project name "Electronic Witches", created the first ZTN training manual for women users, ensuring that gender-specific barriers to the use of ICT would be overcome in the trainings they delivered to hundreds of women activists throughout Croatia, Serbia and Bosnia and Herzegovina.
As the intense war period in Bosnia and Herzegovina and Croatia passed, and telephone lines and direct Internet access became more viable, ZTN did not manage to achieve its goal of adjusting its system to more advanced technology, due to a lack of resources and the weariness of the core group of volunteers who had kept it going during the difficult war years.
Several web-based networking and media outlets have since emerged in the post-Yugoslav region, such as Ljudmila and the Kontrapunkt Festival. ZaMirNET in Zagreb has built on the values, activist networks and human resources of ZTN.
Current operations
ZaMirNet has an office in Zagreb, Croatia with a staff of around six. Until 2004, ZaMirNet had several local offices in war affected areas of Croatia.
Current program areas include:
Strategic use of ICTs
Education
Networking
Independent media initiatives
It is a member of the Association for Progressive Communications, and sees itself as being "dedicated to the promotion of civil society and its values through ICT and the development of new media initiatives".
In 2004, it was involved with a media project called ZaMirZINE, an electronic news magazine specialising in themes related to civil society. This interactive e-journal was aimed at serving as a media outlet in a situation of otherwise scarce news on youth, peace-building, women's rights, gay and lesbian issues, the environment and independent cultural initiatives. It combined articles with columns on national, regional and international events of relevance to human rights, social and economic justice and peace. ZaMirZINE is based on cooperation and knowledge transfer between activists, young journalists and established journalists. ZaMirZINE was voted the best Croatian electronic zine of 2004 by PC Chip, a magazine focused on ICT.
Through its MEDIAnet project, also launched in 2004, ZaMirNET says it aims to encourage and facilitate the establishment of locally based independent media in Croatia and the neighbouring countries of Bosnia and Herzegovina, Serbia and Montenegro. Through this project, NGOs are supported in expanding their outreach and communication strategies to the media, including community-based, alternative, and internet-based media outlets.
ZaMirNET says its research indicates that the mainstream media "tend to represent civil society organisations with a sensationalist and sometimes biased perspectives". On the other hand, it also notes that NGO members tend to lack journalistic skills to systematically report news about their sector "in a professional way".
ZaMirNET believes its ZaMirZINE could offer an inclusive media environment, based on a "knowledge transfer between activists, young journalists and established journalists".
ZaMirNET team
ZaMirNET's last governing board was composed of Vatroslav Zovko, Srđan Dvornik, Davor Gjenero, Predrag Bejaković, and Nebojša Gavrilov.
Due to a lack of funding, ZaMirNET ceased operations in 2016.
Notes
External links
ZaMirNET, Croatia
ZaMirNET, English website
ZaMirNET Founds a Network of Independent Media
APC report, ZaMirNET founds a network of independent media
Peace-building in Croatia through community technology centres
ZaMirZINE
Information and communication technologies for development
Non-profit technology
Information technology organizations based in Croatia | ZaMirNET | [
"Technology"
] | 1,600 | [
"Information and communications technology",
"Information and communication technologies for development",
"Information technology",
"Non-profit technology"
] |
7,289,806 | https://en.wikipedia.org/wiki/Screenless%20video | Screenless video is any system for transmitting visual information from a video source without the use of a screen. Screenless computing systems can be divided into three groups: Visual Image, Retinal Direct, and Synaptic Interface.
Visual image
Visual Image screenless display includes any image that the eye can perceive. The most common example of Visual Image screenless display is a hologram. In these cases, light is reflected off some intermediate object (hologram, LCD panel, or cockpit window) before it reaches the retina. In the case of LCD panels the light is refracted from the back of the panel, but is nonetheless a reflected source. Google has proposed a similar system to replace the screens of tablet computers and smartphones.
Retinal display
Virtual retinal display systems are a class of screenless displays in which images are projected directly onto the retina. They are distinguished from visual image systems because light is not reflected from some intermediate object onto the retina; instead, it is projected directly onto the retina. Retinal Direct systems, once marketed, hold out the promise of extreme privacy when computing work is done in public places, because most snooping relies on viewing the same light as the person who is legitimately viewing the screen, while retinal direct systems send light only into the pupils of their intended viewer.
Synaptic interface
Synaptic Interface screenless video does not use light at all. Visual information completely bypasses the eye and is transmitted directly to the brain. While such systems have only been implemented in humans in rudimentary form (for example, displaying single Braille characters to blind people), success has been achieved in sampling usable video signals from the biological eyes of a living horseshoe crab through their optic nerves, and in sending video signals from electronic cameras into the creatures' brains using the same method.
See also
Volumetric display
Fog display
Augmented reality
References
Display technology
Virtual reality
User interfaces
Computer graphics
Computer output devices | Screenless video | [
"Technology",
"Engineering"
] | 394 | [
"User interfaces",
"Electronic engineering",
"Interfaces",
"Display technology"
] |
7,290,730 | https://en.wikipedia.org/wiki/Rotation%20formalisms%20in%20three%20dimensions | In geometry, various formalisms exist to express a rotation in three dimensions as a mathematical transformation. In physics, this concept is applied to classical mechanics where rotational (or angular) kinematics is the science of quantitative description of a purely rotational motion. The orientation of an object at a given instant is described with the same tools, as it is defined as an imaginary rotation from a reference placement in space, rather than an actually observed rotation from a previous placement in space.
According to Euler's rotation theorem, the rotation of a rigid body (or three-dimensional coordinate system with a fixed origin) is described by a single rotation about some axis. Such a rotation may be uniquely described by a minimum of three real parameters. However, for various reasons, there are several ways to represent it. Many of these representations use more than the necessary minimum of three parameters, although each of them still has only three degrees of freedom.
An example where rotation representation is used is in computer vision, where an automated observer needs to track a target. Consider a rigid body, with three orthogonal unit vectors fixed to its body (representing the three axes of the object's local coordinate system). The basic problem is to specify the orientation of these three unit vectors, and hence the rigid body, with respect to the observer's coordinate system, regarded as a reference placement in space.
Rotations and motions
Rotation formalisms are focused on proper (orientation-preserving) motions of the Euclidean space with one fixed point, which is what a rotation refers to. Although physical motions with a fixed point are an important case (such as motions described in the center-of-mass frame, or motions of a joint), this approach covers all motions: any proper motion of the Euclidean space decomposes into a rotation around the origin and a translation. Whatever the order of their composition, the "pure" rotation component does not change; it is uniquely determined by the complete motion.
One can also understand "pure" rotations as linear maps in a vector space equipped with Euclidean structure, not as maps of points of a corresponding affine space. In other words, a rotation formalism captures only the rotational part of a motion, which contains three degrees of freedom, and ignores the translational part, which contains another three.
When representing a rotation as numbers in a computer, some people prefer the quaternion representation or the axis+angle representation, because they avoid the gimbal lock that can occur with Euler rotations.
Formalism alternatives
Rotation matrix
The above-mentioned triad of unit vectors is also called a basis. Specifying the coordinates (components) of vectors of this basis in its current (rotated) position, in terms of the reference (non-rotated) coordinate axes, will completely describe the rotation. The three unit vectors, $\hat{u}$, $\hat{v}$ and $\hat{w}$, that form the rotated basis each consist of 3 coordinates, yielding a total of 9 parameters.
These parameters can be written as the elements of a $3 \times 3$ matrix $R$, called a rotation matrix. Typically, the coordinates of each of these vectors are arranged along a column of the matrix (however, beware that an alternative definition of rotation matrix exists and is widely used, where the vectors' coordinates defined above are arranged by rows)
The elements of the rotation matrix are not all independent—as Euler's rotation theorem dictates, the rotation matrix has only three degrees of freedom.
The rotation matrix has the following properties:
$R$ is a real, orthogonal matrix, hence each of its rows or columns represents a unit vector.
The eigenvalues of $R$ are $1$ and $e^{\pm i\theta} = \cos\theta \pm i\sin\theta$, where $i$ is the standard imaginary unit with the property $i^2 = -1$.
The determinant of $R$ is $+1$, equivalent to the product of its eigenvalues.
The trace of $R$ is $1 + 2\cos\theta$, equivalent to the sum of its eigenvalues.
The angle which appears in the eigenvalue expression corresponds to the angle of the Euler axis and angle representation. The eigenvector corresponding to the eigenvalue of 1 is the accompanying Euler axis, since the axis is the only (nonzero) vector which remains unchanged by left-multiplying (rotating) it with the rotation matrix.
The above properties are equivalent to
$$|\hat{u}| = |\hat{v}| = 1, \qquad \hat{u} \cdot \hat{v} = 0, \qquad \hat{u} \times \hat{v} = \hat{w},$$
which is another way of stating that $(\hat{u}, \hat{v}, \hat{w})$ form a 3D orthonormal basis. These statements comprise a total of 6 conditions (the cross product contains 3), leaving the rotation matrix with just 3 degrees of freedom, as required.
Two successive rotations represented by matrices $R_1$ and $R_2$ are easily combined as elements of a group,
$$R_3 = R_2 R_1$$
(note the order, since the vector being rotated is multiplied from the right).
The ease by which vectors can be rotated using a rotation matrix, as well as the ease of combining successive rotations, make the rotation matrix a useful and popular way to represent rotations, even though it is less concise than other representations.
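A minimal numpy sketch of these properties and of the composition rule (variable names are illustrative, not from the article; numpy is assumed available):

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for an angle theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

A = rot_z(0.3)
assert np.allclose(A.T @ A, np.eye(3))               # orthogonality
assert np.isclose(np.linalg.det(A), 1.0)             # determinant +1
assert np.isclose(np.trace(A), 1 + 2 * np.cos(0.3))  # trace = 1 + 2 cos(theta)

# Successive rotations combine by matrix product; the rotation applied
# first sits rightmost, since the vector is multiplied from the right.
B = rot_z(0.5)
v = np.array([1.0, 0.0, 0.0])
assert np.allclose(B @ (A @ v), (B @ A) @ v)
```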
Euler axis and angle (rotation vector)
From Euler's rotation theorem we know that any rotation can be expressed as a single rotation about some axis. The axis is the unit vector (unique except for sign) which remains unchanged by the rotation. The magnitude of the angle is also unique, with its sign being determined by the sign of the rotation axis.
The axis can be represented as a three-dimensional unit vector
$$\hat{e} = [e_x, e_y, e_z]^T,$$
and the angle by a scalar $\theta$.
Since the axis is normalized, it has only two degrees of freedom. The angle adds the third degree of freedom to this rotation representation.
One may wish to express rotation as a rotation vector, or Euler vector, an un-normalized three-dimensional vector the direction of which specifies the axis, and the length of which is $\theta$,
$$\mathbf{r} = \theta \hat{e}.$$
The rotation vector is useful in some contexts, as it represents a three-dimensional rotation with only three scalar values (its components), representing the three degrees of freedom. This is also true for representations based on sequences of three Euler angles (see below).
If the rotation angle is zero, the axis is not uniquely defined. Combining two successive rotations, each represented by an Euler axis and angle, is not straightforward, and in fact does not satisfy the law of vector addition, which shows that finite rotations are not really vectors at all. It is best to employ the rotation matrix or quaternion notation, calculate the product, and then convert back to Euler axis and angle.
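As a sketch of that recommendation, the round trip (axis-angle in, compose via an internal quaternion/matrix product, axis-angle out) can be done with scipy's Rotation class; scipy is an assumption here, not something the article prescribes:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Two rotations given as Euler axis and angle, packed as rotation
# vectors (axis times angle).
r1 = R.from_rotvec(np.deg2rad(30) * np.array([0.0, 0.0, 1.0]))
r2 = R.from_rotvec(np.deg2rad(45) * np.array([1.0, 0.0, 0.0]))

combined = r2 * r1               # composition; r1 is applied first
rotvec = combined.as_rotvec()    # axis * angle of the composite rotation
angle = np.linalg.norm(rotvec)
axis = rotvec / angle
print(np.rad2deg(angle), axis)   # neither 75 degrees nor either input axis
```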
Euler rotations
The idea behind Euler rotations is to split the complete rotation of the coordinate system into three simpler constitutive rotations, called precession, nutation, and intrinsic rotation, each of them being an increment of one of the Euler angles. Notice that the outer matrix will represent a rotation around one of the axes of the reference frame, and the inner matrix represents a rotation around one of the moving frame axes. The middle matrix represents a rotation around an intermediate axis called the line of nodes.
However, the definition of Euler angles is not unique and in the literature many different conventions are used. These conventions depend on the axes about which the rotations are carried out, and their sequence (since rotations on a sphere are non-commutative).
The convention being used is usually indicated by specifying the axes about which the consecutive rotations (before being composed) take place, referring to them by index (1, 2, 3) or letter (X, Y, Z). The engineering and robotics communities typically use 3-1-3 Euler angles. Notice that after composing the independent rotations, they do not rotate about their axis anymore. The most external matrix rotates the other two, leaving the second rotation matrix over the line of nodes, and the third one in a frame comoving with the body. There are $3^3 = 27$ possible combinations of three basic rotations, but only 12 of them can be used for representing arbitrary 3D rotations as Euler angles. These 12 combinations avoid consecutive rotations around the same axis (such as XXY) which would reduce the degrees of freedom that can be represented.
Therefore, Euler angles are never expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. Other conventions (e.g., rotation matrix or quaternions) are used to avoid this problem.
In aviation, the orientation of the aircraft is usually expressed as intrinsic Tait-Bryan angles following the z-y′-x″ convention, which are called heading, elevation, and bank (or synonymously, yaw, pitch, and roll).
Quaternions
Quaternions, which form a four-dimensional vector space, have proven very useful in representing rotations due to several advantages over the other representations mentioned in this article.
A quaternion representation of rotation is written as a versor (normalized quaternion):
$$\hat{q} = q_1 i + q_2 j + q_3 k + q_4, \qquad \|\hat{q}\| = 1.$$
The above definition stores the quaternion as an array following the convention used in (Wertz 1980) and (Markley 2003). An alternative definition, used for example in (Coutsias 1999) and (Schmidt 2001), defines the "scalar" term as the first quaternion element, with the other elements shifted down one position.
In terms of the Euler axis
$$\hat{e} = [e_x, e_y, e_z]^T$$
and angle $\theta$ this versor's components are expressed as follows:
$$q_1 = e_x \sin(\theta/2), \quad q_2 = e_y \sin(\theta/2), \quad q_3 = e_z \sin(\theta/2), \quad q_4 = \cos(\theta/2).$$
Inspection shows that the quaternion parametrization obeys the following constraint:
$$q_1^2 + q_2^2 + q_3^2 + q_4^2 = 1.$$
The last term (in our definition) is often called the scalar term, which has its origin in quaternions when understood as the mathematical extension of the complex numbers, written as
$$\hat{q} = q_4 + q_1 i + q_2 j + q_3 k,$$
where $i$, $j$, $k$ are the hypercomplex numbers satisfying $i^2 = j^2 = k^2 = ijk = -1$.
Quaternion multiplication, which is used to specify a composite rotation, is performed in the same manner as multiplication of complex numbers, except that the order of the elements must be taken into account, since multiplication is not commutative. In matrix notation we can write quaternion multiplication as
Combining two consecutive quaternion rotations is therefore just as simple as using the rotation matrix. Just as two successive rotation matrices, $R_1$ followed by $R_2$, are combined as
$$R_3 = R_2 R_1,$$
we can represent this with quaternion parameters in a similarly concise way:
$$\hat{q}_3 = \hat{q}_2 \otimes \hat{q}_1.$$
Quaternions are a very popular parametrization due to the following properties:
More compact than the matrix representation and less susceptible to round-off errors
The quaternion elements vary continuously over the unit sphere in $\mathbb{R}^4$ (denoted by $S^3$) as the orientation changes, avoiding discontinuous jumps (inherent to three-dimensional parameterizations)
Expression of the rotation matrix in terms of quaternion parameters involves no trigonometric functions
It is simple to combine two individual rotations represented as quaternions using a quaternion product
Like rotation matrices, quaternions must sometimes be renormalized due to rounding errors, to make sure that they correspond to valid rotations. The computational cost of renormalizing a quaternion, however, is much less than for normalizing a matrix.
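A minimal sketch of the quaternion product and the cheap renormalization step, using the scalar-last ordering described above (helper names are illustrative):

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product for components ordered (x, y, z, w), scalar last.
    Under the active convention v' = q v q*, the product p*q applies
    rotation q first and then p."""
    pv, pw = p[:3], p[3]
    qv, qw = q[:3], q[3]
    v = pw * qv + qw * pv + np.cross(pv, qv)
    w = pw * qw - np.dot(pv, qv)
    return np.concatenate([v, [w]])

def normalize(q):
    """Renormalize against round-off drift; far cheaper than
    re-orthonormalizing a 3x3 matrix."""
    return q / np.linalg.norm(q)

qz = np.array([0.0, 0.0, np.sin(0.15), np.cos(0.15)])  # 0.3 rad about z
qx = np.array([np.sin(0.2), 0.0, 0.0, np.cos(0.2)])    # 0.4 rad about x
q = normalize(quat_mul(qx, qz))   # z-rotation first, then x-rotation
assert np.isclose(np.linalg.norm(q), 1.0)
```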
Quaternions also capture the spinorial character of rotations in three dimensions. For a three-dimensional object connected to its (fixed) surroundings by slack strings or bands, the strings or bands can be untangled after two complete turns about some fixed axis from an initial untangled state. Algebraically, the quaternion describing such a rotation changes from a scalar +1 (initially), through (scalar + pseudovector) values to scalar −1 (at one full turn), through (scalar + pseudovector) values back to scalar +1 (at two full turns). This cycle repeats every 2 turns. After turns (integer ), without any intermediate untangling attempts, the strings/bands can be partially untangled back to the turns state with each application of the same procedure used in untangling from 2 turns to 0 turns. Applying the same procedure times will take a -tangled object back to the untangled or 0 turn state. The untangling process also removes any rotation-generated twisting about the strings/bands themselves. Simple 3D mechanical models can be used to demonstrate these facts.
Rodrigues vector
The Rodrigues vector (sometimes called the Gibbs vector, with coordinates called Rodrigues parameters) can be expressed in terms of the axis and angle of the rotation as follows:
$$\mathbf{g} = \hat{e} \tan(\theta/2).$$
This representation is a higher-dimensional analog of the gnomonic projection, mapping unit quaternions from a 3-sphere onto the 3-dimensional pure-vector hyperplane.
It has a discontinuity at 180° ($\pi$ radians): as any rotation vector tends to an angle of $\pi$ radians, its tangent tends to infinity.
A rotation followed by a rotation in the Rodrigues representation has the simple rotation composition form
Today, the most straightforward way to prove this formula is in the (faithful) doublet representation, where , etc.
The combinatoric features of the Pauli matrix derivation just mentioned are also identical to the equivalent quaternion derivation below. Construct a quaternion associated with a spatial rotation as,
Then the composition of the rotation with is the rotation , with rotation axis and angle defined by the product of the quaternions,
that is
Expand this quaternion product to
Divide both sides of this equation by the identity resulting from the previous one,
and evaluate
This is Rodrigues' formula for the axis of a composite rotation defined in terms of the axes of the two component rotations. He derived this formula in 1840 (see page 408). The three rotation axes , , and form a spherical triangle and the dihedral angles between the planes formed by the sides of this triangle are defined by the rotation angles.
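The composition rule can be checked numerically. The closed form below is derived from the quaternion product under the convention that g1 acts first (a sketch; sign conventions for the cross term vary between texts), and is cross-checked against scipy, which is assumed available:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def gibbs(axis, angle):
    """Rodrigues (Gibbs) vector: tan(theta/2) times the unit axis."""
    return np.tan(angle / 2.0) * np.asarray(axis, dtype=float)

def compose_gibbs(g1, g2):
    """Composite of rotation g1 followed by g2, obtained from the
    quaternion product q2 (x) q1 with unnormalized quaternions (1, g)."""
    return (g1 + g2 + np.cross(g2, g1)) / (1.0 - np.dot(g1, g2))

g1 = gibbs([0, 0, 1], 0.6)
g2 = gibbs([1, 0, 0], 0.8)
g12 = compose_gibbs(g1, g2)

# Cross-check: compose with scipy, then map rotation vector -> Gibbs.
r = (R.from_rotvec(0.8 * np.array([1.0, 0, 0]))
     * R.from_rotvec(0.6 * np.array([0, 0, 1.0])))
rv = r.as_rotvec()
theta = np.linalg.norm(rv)
assert np.allclose(g12, np.tan(theta / 2.0) * rv / theta)
```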
Modified Rodrigues Parameters (MRPs)
Modified Rodrigues parameters (MRPs) can be expressed in terms of Euler axis and angle by
$$\boldsymbol{\sigma} = \hat{e} \tan(\theta/4).$$
Its components can be expressed in terms of the components of a unit quaternion representing the same rotation as
$$\boldsymbol{\sigma} = \frac{(q_1, q_2, q_3)}{1 + q_4}.$$
The modified Rodrigues vector is a stereographic projection mapping unit quaternions from a 3-sphere onto the 3-dimensional pure-vector hyperplane. The projection of the opposite quaternion results in a different modified Rodrigues vector than the projection of the original quaternion . Comparing components one obtains that
Notably, if one of these vectors lies inside the unit 3-sphere, the other will lie outside.
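A small sketch of the projection and its "shadow" counterpart, assuming a unit quaternion stored as (x, y, z, w) (helper names are illustrative):

```python
import numpy as np

def mrp_from_quat(q):
    """Modified Rodrigues parameters sigma = q_vec / (1 + q_w),
    i.e. tan(theta/4) times the rotation axis."""
    return q[:3] / (1.0 + q[3])

def shadow(sigma):
    """MRP obtained by projecting the opposite quaternion -q."""
    return -sigma / np.dot(sigma, sigma)

theta = 1.0
q = np.array([np.sin(theta / 2), 0.0, 0.0, np.cos(theta / 2)])
s = mrp_from_quat(q)
assert np.isclose(np.linalg.norm(s), np.tan(theta / 4))
# One of the pair lies inside the unit sphere, the other outside:
assert np.isclose(np.linalg.norm(s) * np.linalg.norm(shadow(s)), 1.0)
```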
Cayley–Klein parameters
See definition at Wolfram Mathworld.
Higher-dimensional analogues
Vector transformation law
Active rotations of a 3D vector $\mathbf{v}$ in Euclidean space around an axis $\hat{n}$ over an angle $\theta$ can be easily written in terms of dot and cross products as follows:
$$\mathbf{v}_{\text{rot}} = \mathbf{v}_\parallel + \cos\theta \, \mathbf{v}_\perp + \sin\theta \, (\hat{n} \times \mathbf{v}),$$
wherein
$\mathbf{v}_\parallel = (\mathbf{v} \cdot \hat{n}) \, \hat{n}$ is the longitudinal component of $\mathbf{v}$ along $\hat{n}$, given by the dot product,
$\mathbf{v}_\perp = \mathbf{v} - \mathbf{v}_\parallel$ is the transverse component of $\mathbf{v}$ with respect to $\hat{n}$, and
$\hat{n} \times \mathbf{v}$ is the cross product of $\hat{n}$ with $\mathbf{v}$.
The above formula shows that the longitudinal component of $\mathbf{v}$ remains unchanged, whereas the transverse portion of $\mathbf{v}$ is rotated in the plane perpendicular to $\hat{n}$. This plane is spanned by the transverse portion of $\mathbf{v}$ itself and a direction perpendicular to both $\mathbf{v}$ and $\hat{n}$. The rotation is directly identifiable in the equation as a 2D rotation over an angle $\theta$.
Passive rotations can be described by the same formula, but with an inverse sign of either $\theta$ or $\hat{n}$.
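The decomposition translates directly into code; a minimal numpy sketch (function and variable names are illustrative):

```python
import numpy as np

def rotate(v, n, theta):
    """Active rotation of v about the unit axis n by angle theta,
    using the longitudinal/transverse split described above."""
    n = np.asarray(n, dtype=float)
    v_par = np.dot(v, n) * n      # longitudinal part: unchanged
    v_perp = v - v_par            # transverse part: rotated in-plane
    w = np.cross(n, v)            # perpendicular to both n and v_perp
    return v_par + np.cos(theta) * v_perp + np.sin(theta) * w

v = np.array([1.0, 2.0, 0.0])
# A quarter turn about z maps (1, 2, 0) to (-2, 1, 0).
assert np.allclose(rotate(v, [0, 0, 1], np.pi / 2), [-2.0, 1.0, 0.0])
# A passive rotation is the same call with -theta (or with -n).
```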
Conversion formulae between formalisms
Rotation matrix ↔ Euler angles
The Euler angles can be extracted from the rotation matrix by inspecting the rotation matrix in analytical form.
Rotation matrix → Euler angles ( extrinsic)
Using the $x$-convention, the 3-1-3 extrinsic Euler angles $\varphi$, $\theta$ and $\psi$ (around the $z$-axis, $x$-axis and again the $z$-axis) can be obtained as follows:
Note that $\operatorname{atan2}(a, b)$ is equivalent to $\arctan(a/b)$, except that it also takes into account the quadrant in which the point $(b, a)$ lies; see atan2.
When implementing the conversion, one has to take into account several situations:
There are generally two solutions in the interval . The above formula works only when is within the interval .
For the special case , and will be derived from and .
There are infinitely many (but countably many) solutions outside of that interval.
Whether all mathematical solutions apply for a given application depends on the situation.
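In practice a library routine handles the quadrant bookkeeping and the degenerate case; a sketch using scipy (an assumption, not part of the article; lowercase 'zxz' selects extrinsic angles):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

M = R.from_euler('zxz', [0.3, 0.4, 0.5]).as_matrix()   # extrinsic z-x-z
angles = R.from_matrix(M).as_euler('zxz')
print(angles)   # [0.3, 0.4, 0.5], one of the generally two solutions

# Degenerate case: when the middle angle is 0, only the sum of the
# first and third angles is determined; scipy picks a representative
# (and warns), but the sum is preserved.
M0 = R.from_euler('zxz', [0.3, 0.0, 0.5]).as_matrix()
a = R.from_matrix(M0).as_euler('zxz')
assert np.isclose((a[0] + a[2]) % (2 * np.pi), 0.8)
```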
Euler angles ( intrinsic) → rotation matrix
The rotation matrix is generated from the 3-2-1 intrinsic Euler angles by multiplying the three matrices generated by rotations about the axes.
The axes of the rotation depend on the specific convention being used. For the 3-2-1 intrinsic convention the rotations are about the $z$-, $y$- and $x$-axes with angles $\alpha$, $\beta$ and $\gamma$; the individual matrices are as follows:
This yields
Note: This is valid for a right-hand system, which is the convention used in almost all engineering and physics disciplines.
The interpretation of these right-handed rotation matrices is that they express coordinate transformations (passive) as opposed to point transformations (active). Because expresses a rotation from the local frame to the global frame (i.e., encodes the axes of frame with respect to frame ), the elementary rotation matrices are composed as above. Because the inverse rotation is just the rotation transposed, if we wanted the global-to-local rotation from frame to frame , we would write
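A sketch of the 3-2-1 intrinsic composition with explicit elementary matrices, cross-checked against scipy (uppercase 'ZYX' selects intrinsic angles; these are the active, right-handed elementary matrices, so whether they encode a point or a coordinate transformation depends on the convention discussed above):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

yaw, pitch, roll = 0.3, 0.2, 0.1
# Intrinsic 3-2-1 (z-y'-x''): multiply in the order of the rotations.
M = Rz(yaw) @ Ry(pitch) @ Rx(roll)
assert np.allclose(M, R.from_euler('ZYX', [yaw, pitch, roll]).as_matrix())
```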
Rotation matrix ↔ Euler axis/angle
If the Euler angle $\theta$ is not a multiple of $\pi$, the Euler axis $\hat{e}$ and angle $\theta$ can be computed from the elements of the rotation matrix $R$ as follows:
Alternatively, the following method can be used:
Eigendecomposition of the rotation matrix yields the eigenvalues $1$ and $\cos\theta \pm i\sin\theta$. The Euler axis is the eigenvector corresponding to the eigenvalue of $1$, and $\theta$ can be computed from the remaining eigenvalues.
The Euler axis can also be found using singular value decomposition, since it is the normalized vector spanning the null space of the matrix $R - I$.
To convert the other way, the rotation matrix corresponding to an Euler axis $\hat{e}$ and angle $\theta$ can be computed according to Rodrigues' rotation formula (with appropriate modification) as follows:
$$R = I_3 + \sin\theta \, [\hat{e}]_\times + (1 - \cos\theta) \, [\hat{e}]_\times^2,$$
with $I_3$ the $3 \times 3$ identity matrix, and
$$[\hat{e}]_\times = \begin{bmatrix} 0 & -e_z & e_y \\ e_z & 0 & -e_x \\ -e_y & e_x & 0 \end{bmatrix}$$
the cross-product matrix.
This expands to:
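A direct transcription of the formula (a sketch; numpy assumed):

```python
import numpy as np

def cross_matrix(e):
    """Skew-symmetric cross-product matrix: cross_matrix(e) @ v == e x v."""
    x, y, z = e
    return np.array([[0.0, -z,  y],
                     [ z, 0.0, -x],
                     [-y,  x, 0.0]])

def axis_angle_to_matrix(e, theta):
    """Rodrigues' rotation formula R = I + sin(t) K + (1 - cos(t)) K^2
    for a unit axis e."""
    K = cross_matrix(e)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

e = np.array([0.0, 0.0, 1.0])
M = axis_angle_to_matrix(e, np.pi / 2)
assert np.allclose(M @ np.array([1.0, 0.0, 0.0]), [0.0, 1.0, 0.0])
assert np.allclose(M @ e, e)   # the axis is fixed, per Euler's theorem
```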
Rotation matrix ↔ quaternion
When computing a quaternion from the rotation matrix there is a sign ambiguity, since $\hat{q}$ and $-\hat{q}$ represent the same rotation.
One way of computing the quaternion
from the rotation matrix is as follows:
There are three other mathematically equivalent ways to compute . Numerical inaccuracy can be reduced by avoiding situations in which the denominator is close to zero. One of the other three methods looks as follows:
The rotation matrix corresponding to the quaternion can be computed as follows:
where
which gives
or equivalently
This is called the Euler–Rodrigues formula for the transformation matrix
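A sketch of one branch of the conversion and its inverse, with scalar-last components written out as (x, y, z, w); robust code would switch to one of the three alternative branches when the denominator approaches zero:

```python
import numpy as np

def quat_from_matrix(M):
    """Trace-based branch; assumes 1 + trace(M) is comfortably positive."""
    w = 0.5 * np.sqrt(1.0 + np.trace(M))
    x = (M[2, 1] - M[1, 2]) / (4.0 * w)
    y = (M[0, 2] - M[2, 0]) / (4.0 * w)
    z = (M[1, 0] - M[0, 1]) / (4.0 * w)
    return np.array([x, y, z, w])

def matrix_from_quat(q):
    """Euler-Rodrigues formula for a unit quaternion (x, y, z, w)."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = np.array([np.sin(0.2), 0.0, 0.0, np.cos(0.2)])
# Round trip; recall the global sign ambiguity q <-> -q.
assert np.allclose(quat_from_matrix(matrix_from_quat(q)), q)
```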
Euler angles ↔ quaternion
Euler angles ( extrinsic) → quaternion
We will consider the -convention 3-1-3 extrinsic Euler angles for the following algorithm. The terms of the algorithm depend on the convention used.
We can compute the quaternion
from the Euler angles as follows:
Euler angles ( intrinsic) → quaternion
A quaternion equivalent to yaw ($\psi$), pitch ($\theta$) and roll ($\varphi$) angles, or intrinsic Tait–Bryan angles following the z-y′-x″ convention, can be computed by
Quaternion → Euler angles ( extrinsic)
Given the rotation quaternion
the -convention 3-1-3 extrinsic Euler Angles can be computed by
Quaternion → Euler angles ( intrinsic)
Given the rotation quaternion
yaw, pitch and roll angles, or intrinsic Tait–Bryan angles following the z-y′-x″ convention, can be computed by
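A sketch of the standard aerospace formulas for this conversion, with the arcsin argument clamped against round-off; scipy is used only as a cross-check (an assumption) and returns quaternions as (x, y, z, w):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def quat_to_ypr(q):
    """Yaw, pitch, roll (intrinsic z-y'-x'') from a unit quaternion
    (x, y, z, w)."""
    x, y, z, w = q
    roll  = np.arctan2(2*(w*x + y*z), 1 - 2*(x*x + y*y))
    pitch = np.arcsin(np.clip(2*(w*y - z*x), -1.0, 1.0))
    yaw   = np.arctan2(2*(w*z + x*y), 1 - 2*(y*y + z*z))
    return yaw, pitch, roll

q = R.from_euler('ZYX', [0.3, 0.2, 0.1]).as_quat()
assert np.allclose(quat_to_ypr(q), [0.3, 0.2, 0.1])
```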
Euler axis–angle ↔ quaternion
Given the Euler axis and angle , the quaternion
can be computed by
Given the rotation quaternion , define
Then the Euler axis and angle can be computed by
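Both directions fit in a few lines (scalar-last components; atan2 is preferred over arccos because it stays accurate for small rotation angles):

```python
import numpy as np

def quat_from_axis_angle(e, theta):
    """Versor (x, y, z, w) from a unit axis e and angle theta."""
    return np.concatenate([np.sin(theta / 2.0) * np.asarray(e, dtype=float),
                           [np.cos(theta / 2.0)]])

def axis_angle_from_quat(q):
    """Inverse map; the axis is undefined at theta = 0, where an
    arbitrary default is returned."""
    n = np.linalg.norm(q[:3])
    theta = 2.0 * np.arctan2(n, q[3])
    axis = q[:3] / n if n > 0 else np.array([1.0, 0.0, 0.0])
    return axis, theta

axis, theta = axis_angle_from_quat(quat_from_axis_angle([0, 1, 0], 0.7))
assert np.isclose(theta, 0.7) and np.allclose(axis, [0, 1, 0])
```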
Rotation matrix ↔ Rodrigues vector
Rodrigues vector → Rotation matrix
The definition of the Rodrigues vector can be related to rotation quaternions. By making use of the corresponding property, the formula can be obtained by factoring from the final expression obtained for quaternions, leading to the final formula:
Conversion formulae for derivatives
Rotation matrix ↔ angular velocities
The angular velocity vector
can be extracted from the time derivative of the rotation matrix $\dot{R}(t)$ by the following relation:
$$[\boldsymbol{\omega}]_\times = \dot{R}(t) \, R(t)^T.$$
The derivation is adapted from Ioffe as follows:
For any vector $\mathbf{r}_0$, consider $\mathbf{r}(t) = R(t)\,\mathbf{r}_0$ and differentiate it:
$$\dot{\mathbf{r}}(t) = \dot{R}(t)\,\mathbf{r}_0.$$
The derivative of a vector is the linear velocity of its tip. Since $R$ is a rotation matrix, by definition the length of $\mathbf{r}(t)$ is always equal to the length of $\mathbf{r}_0$, and hence it does not change with time. Thus, when $\mathbf{r}(t)$ rotates, its tip moves along a circle, and the linear velocity of its tip is tangential to the circle; i.e., always perpendicular to $\mathbf{r}(t)$. In this specific case, the relationship between the linear velocity vector and the angular velocity vector is
$$\dot{\mathbf{r}}(t) = \boldsymbol{\omega}(t) \times \mathbf{r}(t)$$
(see circular motion and cross product).
By the transitivity of the abovementioned equations,
$$\dot{R}(t)\,R(t)^T\,\mathbf{r}(t) = \boldsymbol{\omega}(t) \times \mathbf{r}(t),$$
which implies
$$[\boldsymbol{\omega}(t)]_\times = \dot{R}(t)\,R(t)^T.$$
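The relation can be verified numerically; a sketch that approximates the derivative with a central difference and reads the angular velocity off the skew-symmetric product (helper names are illustrative):

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def omega_from_rotation(R_of_t, t, h=1e-6):
    """Extract omega from [omega]_x = dR/dt R^T, with dR/dt taken by
    central finite difference."""
    dR = (R_of_t(t + h) - R_of_t(t - h)) / (2.0 * h)
    Omega = dR @ R_of_t(t).T          # skew-symmetric up to O(h^2)
    return np.array([Omega[2, 1], Omega[0, 2], Omega[1, 0]])

# Spin about z at 2 rad/s: expect omega = (0, 0, 2).
assert np.allclose(omega_from_rotation(lambda t: Rz(2.0 * t), 0.5),
                   [0.0, 0.0, 2.0], atol=1e-4)
```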
Quaternion ↔ angular velocities
The angular velocity vector
can be obtained from the derivative of the quaternion as follows:
where $\tilde{q}$ is the conjugate (inverse) of $\hat{q}$.
Conversely, the derivative of the quaternion is
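A sketch of the quaternion-side relation, again with scalar-last components; the factor of 2 and the product order assume the active convention v' = q v q* with a space-frame angular velocity (conventions vary between references):

```python
import numpy as np

def quat_mul(p, q):   # (x, y, z, w), scalar last
    pv, pw = p[:3], p[3]
    qv, qw = q[:3], q[3]
    return np.concatenate([pw * qv + qw * pv + np.cross(pv, qv),
                           [pw * qw - np.dot(pv, qv)]])

def conj(q):
    return np.concatenate([-q[:3], [q[3]]])

def omega_from_quat(q, qdot):
    """omega = 2 * (dq/dt) (x) q^-1, vector part, for a unit quaternion."""
    return 2.0 * quat_mul(qdot, conj(q))[:3]

# Spin about z at rate w0: q(t) = (0, 0, sin(w0 t/2), cos(w0 t/2)).
w0, t = 2.0, 0.3
q = np.array([0.0, 0.0, np.sin(w0 * t / 2), np.cos(w0 * t / 2)])
qdot = (w0 / 2) * np.array([0.0, 0.0, np.cos(w0 * t / 2), -np.sin(w0 * t / 2)])
assert np.allclose(omega_from_quat(q, qdot), [0.0, 0.0, w0])
```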
Rotors in a geometric algebra
The formalism of geometric algebra (GA) provides an extension and interpretation of the quaternion method. Central to GA is the geometric product of vectors, an extension of the traditional inner and cross products, given by
where the symbol denotes the exterior product or wedge product. This product of vectors , and produces two terms: a scalar part from the inner product and a bivector part from the wedge product. This bivector describes the plane perpendicular to what the cross product of the vectors would return.
Bivectors in GA have some unusual properties compared to vectors. Under the geometric product, bivectors have a negative square: the bivector $e_x e_y$ describes the $xy$-plane. Its square is $(e_x e_y)^2 = e_x e_y e_x e_y$. Because the unit basis vectors are orthogonal to each other, the geometric product reduces to the antisymmetric outer product, so $e_x$ and $e_y$ can be swapped freely at the cost of a factor of −1. The square reduces to $-e_x e_x e_y e_y = -1$ since the basis vectors themselves square to +1.
This result holds generally for all bivectors, and as a result the bivector plays a role similar to the imaginary unit. Geometric algebra uses bivectors in its analogue to the quaternion, the rotor, given by
$$R = \exp(-B\theta/2) = \cos(\theta/2) - B\sin(\theta/2),$$
where $B$ is a unit bivector that describes the plane of rotation. Because $B$ squares to −1, the power series expansion of $R$ generates the trigonometric functions. The rotation formula that maps a vector $a$ to a rotated vector $b$ is then
$$b = R\,a\,R^\dagger,$$
where $R^\dagger$ is the reverse of $R$ (reversing the order of the vectors in $R$ is equivalent to changing its sign).
Example. A rotation about the axis
can be accomplished by converting to its dual bivector,
where is the unit volume element, the only trivector (pseudoscalar) in three-dimensional space. The result is
In three-dimensional space, however, it is often simpler to leave the expression for , using the fact that commutes with all objects in 3D and also squares to −1. A rotation of the vector in this plane by an angle is then
Recognizing that
and that is the reflection of about the plane perpendicular to gives a geometric interpretation to the rotation operation: the rotation preserves the components that are parallel to and changes only those that are perpendicular. The terms are then computed:
The result of the rotation is then
A simple check on this result is the angle . Such a rotation should map to . Indeed, the rotation reduces to
exactly as expected. This rotation formula is valid not only for vectors but for any multivector. In addition, when Euler angles are used, the complexity of the operation is much reduced. Compounded rotations come from multiplying the rotors, so the total rotor from Euler angles is
but
These rotors come back out of the exponentials like so:
where refers to rotation in the original coordinates. Similarly for the rotation,
Noting that and commute (rotations in the same plane must commute), and the total rotor becomes
Thus, the compounded rotations of Euler angles become a series of equivalent rotations in the original fixed frame.
While rotors in geometric algebra work almost identically to quaternions in three dimensions, the power of this formalism is its generality: this method is appropriate and valid in spaces with any number of dimensions. In 3D, rotations have three degrees of freedom, a degree for each linearly independent plane (bivector) the rotation can take place in. It has been known that pairs of quaternions can be used to generate rotations in 4D, yielding six degrees of freedom, and the geometric algebra approach verifies this result: in 4D, there are six linearly independent bivectors that can be used as the generators of rotations.
See also
Euler filter
Orientation (geometry)
Rotation around a fixed axis
References
Further reading
External links
EuclideanSpace has a wealth of information on rotation representation
Q36. How do I generate a rotation matrix from Euler angles? and Q37. How do I convert a rotation matrix to Euler angles? — The Matrix and Quaternions FAQ
Imaginary numbers are not Real – the Geometric Algebra of Spacetime – Section "Rotations and Geometric Algebra" derives and applies the rotor description of rotations
Starlino's DCM Tutorial – Direction cosine matrix theory tutorial and applications. Space orientation estimation algorithm using accelerometer, gyroscope and magnetometer IMU devices. Using complimentary filter (popular alternative to Kalman filter) with DCM matrix.
Rotation
Euclidean symmetries
Orientation (geometry)
Rigid bodies mechanics | Rotation formalisms in three dimensions | [
"Physics",
"Mathematics"
] | 5,198 | [
"Physical phenomena",
"Functions and mappings",
"Euclidean symmetries",
"Mathematical objects",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Topology",
"Space",
"Mathematical relations",
"Geometry",
"Spacetime",
"Orientation (geometry)",
"Symmetry"
] |
7,290,910 | https://en.wikipedia.org/wiki/B%E2%80%93Bbar%20oscillation | Neutral B meson oscillations (or – oscillations) are one of the manifestations of the neutral particle oscillation, a fundamental prediction of the Standard Model of particle physics. It is the phenomenon of B mesons changing (or oscillating) between their matter and antimatter forms before their decay. The meson can exist as either a bound state of a strange antiquark and a bottom quark, or a strange quark and bottom antiquark. The oscillations in the neutral B sector are analogous to the phenomena that produce long and short-lived neutral kaons.
– mixing was observed by the CDF experiment at Fermilab in 2006 and by LHCb at CERN in 2011 and 2021.
Excess of matter over antimatter
The Standard Model predicts that regular matter is slightly favored in these oscillations over its antimatter counterpart, making strange B mesons of special interest to particle physicists. The observation of the – mixing phenomena led physicists to propose the construction of the so-named "B factories" in the early 1990s. They realized that a precise measurement of the oscillation could help pin down the unitarity triangle and perhaps explain the excess of matter over antimatter in the universe. To this end construction began on two "B factories" in the late nineties, one at the Stanford Linear Accelerator Center (SLAC) in California and one at KEK in Japan.
These B factories, BaBar and Belle, were set at the Υ(4S) resonance, which is just above the threshold for decay into two B mesons.
On 14 May 2010, physicists at the Fermi National Accelerator Laboratory reported that the oscillations decayed into matter 1% more often than into antimatter, which may help explain the abundance of matter over antimatter in the observed Universe. However, more recent results at LHCb in 2011, 2012, and 2021 with larger data samples have demonstrated no significant deviation from the Standard Model prediction of very nearly zero asymmetry.
See also
Baryogenesis
CP Violation
Kaon
Neutral particle oscillation
Strange B meson
References
Further reading
— paper describing the discovery of B-meson mixing by the ARGUS collaboration
— announcement of the 5 sigma discovery
External links
BaBar Public Homepage
Belle Public Homepage
B physics | B–Bbar oscillation | [
"Physics"
] | 479 | [
"Particle physics stubs",
"Particle physics"
] |
7,291,031 | https://en.wikipedia.org/wiki/Escambia%20Amateur%20Astronomers%20Association | The Escambia Amateur Astronomers Association (EAAA) is an amateur astronomy club in Northwest Florida.
History
It was originally started in June 1959 by two elementary school students and one about to start junior high school. Originally known as the Warrington Amateur Astronomers Association, for the first few years it operated informally as a backyard telescope group. As the membership grew older, the club was renamed the Escambia Amateur Astronomers Association (EAAA) when it went county-wide. Activities included star parties, meetings at the public library, and field trips to the Pensacola Naval Air Station planetarium and centrifuge, to the Spring Hill College observatory, and to a high school astronomy club to the east in DeFuniak Springs, Florida, the Walton County Astronomy Club. Both were Junior Member Clubs in the Astronomical League. In the 1960s many of the most active members left for college and a new generation of members replaced them. Soon, the club was printing a club newsletter—the "METEOR"—which is still in print. Sponsor Dr. Wayne Wooten edited it for 40 years, and former student Nicole Gunter has taken it over as part of her journalism work at the University of Florida.
The club became inactive in the 1970s. Activity resumed a few years later when the club founder Robert Blake returned to the area as a temporary replacement for the Pensacola Junior College astronomy instructor. He got together such of the old membership as were still in the area in late 1977 and planned a reactivation of the club with the facilities of the community college—such as their Owens Planetarium. With this backing, the club became more and more active. When the astronomy instructor returned, he donated the first large portable telescope to the club—permitting public viewing. Since the 1980s the club hosted a summer viewing program for the National Park Service at the Fort Pickens campground—taken over from a Pensacola Junior College professor Dr. Frank Palma who had previously given the programs. When an annular eclipse took place nearby in May 1984, the club raised the money for safe Mylar solar filter material and then gave programs to schools on how to safely watch the event—with filters handed out at no cost to the students. In 1982, Merry Edenton-Wooten became club president, and her astronomy-related business Draco Productions became the club's sponsor in 1992. She obtained permission from Thomas Baader to purchase and use the new solar filter material, and sell it in a variety of sizes. Draco Productions began making affordable, safe, and superior quality solar filters (endorsed by NASA and the Astronomical League) for naked-eye viewing as well as scope or binocular use with the new Baader solar filter film from the Baader Planetarium in Germany in 1990. Edenton-Wooten has also given many filters to school teachers and students. She believes that making these fine filters, telescopes, and astronomy educational materials available to everyone is her most important contribution to astronomy. For the August 2017 eclipse, Draco and the EAAA provided more than 5000 free safe solar filters to students and the public.
The club's gaze director Dewey Barker has added two other monthly events to the new moon gazes at Fort Pickens. The first quarter moon gazes are held at the Pensacola Beach Pavilion, and third quarter moon gazes at Big Lagoon State Park, west of Pensacola. In a typical year, thousands of campers, families, students, and foreign tourists attend EAAA beach gazes. In 2010, EAAA sponsor Dr. Wayne Wooten won the Astronomical League Award for his four decades of helping organize astronomy clubs in Florida and Alabama.
Associations
EAAA joined the Astronomical League in the 1960s and has been a member organization continuously—except when the club was inactive. Former EAAA President Merry Edenton-Wooten was the executive secretary of the Astronomical League from 1986–1992, and won its Wright Service Award from the AL in 1991. Sponsor Wayne Wooten won the Astronomical League Award in 2010 for his work promoting amateur astronomy in the South East.
Equipment
The EAAA has made several attempts to build an observatory, including mounting a dome on top of an Avion travel trailer. But light pollution made a permanent site impractical, so a telescope has its own trailer and is taken where needed for public viewing. The EAAA maintains a collection of loaner telescopes for member use for a $1 per month rental fee. These now include a 16" Dobsonian, a Meade 16" SCT, a C-11, a Meade 10" SCT, an 8" Dobsonian, two 6" Newtonians, 11×80 binoculars, and numerous 4" Newtonians.
Because of the generosity of members in sharing their instruments and time many people have been able to enjoy astronomy opportunities offered to the general public, scouting groups and schools.
In 1982, club president Merry Edenton-Wooten had the idea of using surplus Xerox copier lenses as objectives for beginner scopes. The "copier lens telescope" idea spread across the world, promoted by surplus suppliers like JerryCo and C&H Sales, and led to the construction of thousands of high quality rich field telescopes at very affordable prices.
Like many other astronomy clubs, the EAAA has since 2014 been donating 4" reflectors to local libraries. They have now placed seven of the loaner telescopes for the public to check out and use in Escambia and Santa Rosa county.
The EAAA has enlisted many student members at Pensacola State College and the University of West Florida. Since 2015, they have been involved in a "Galileoscopes for the Eclipse" project. Over 70 of the kits have been built by the students, fitted with safe Baader solar filters from Draco Productions and given to local schools in west Florida and South Alabama.
Events
The Escambia Amateur Astronomers Association sets meeting dates to study the sky; typically, business meetings are held on the Friday closest to the full moon.
For a current listing, refer to their Facebook page "Escambia Amateur Astronomers".
See also
List of astronomical societies
References
External links
home web site
Amateur astronomy organizations
Astronomy societies | Escambia Amateur Astronomers Association | [
"Astronomy"
] | 1,235 | [
"Astronomy societies",
"Amateur astronomy organizations",
"Astronomy organizations"
] |
7,291,099 | https://en.wikipedia.org/wiki/Independent%20goods | Independent goods are goods that have a zero cross elasticity of demand. Changes in the price of one good will have no effect on the demand for an independent good. Thus independent goods are neither complements nor substitutes.
For example, a person's demand for nails is usually independent of his or her demand for bread, since they are two unrelated types of goods. Note that this concept is subjective and depends on the consumer's personal utility function.
A Cobb-Douglas utility function implies that goods are independent. For goods in quantities $X_1$ and $X_2$, prices $p_1$ and $p_2$, income $m$, and utility function parameter $a \in (0, 1)$, the utility function
$$U(X_1, X_2) = X_1^a X_2^{1-a},$$
when optimized subject to the budget constraint that expenditure on the two goods cannot exceed income, gives rise to this demand function for good 1: $X_1 = am/p_1$, which does not depend on $p_2$.
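A short derivation of that demand function under the stated Cobb-Douglas form (standard consumer theory):

```latex
\max_{X_1,\,X_2}\; U = X_1^{a} X_2^{1-a}
\quad\text{subject to}\quad p_1 X_1 + p_2 X_2 = m .
% Equating the marginal rate of substitution to the price ratio:
\frac{a X_2}{(1-a) X_1} = \frac{p_1}{p_2}
\;\Longrightarrow\; p_2 X_2 = \frac{1-a}{a}\, p_1 X_1 .
% Substituting into the budget constraint:
p_1 X_1 + \frac{1-a}{a}\, p_1 X_1 = \frac{p_1 X_1}{a} = m
\;\Longrightarrow\; X_1 = \frac{a m}{p_1},
% which contains no p_2: the cross-price effect is zero.
```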
See also
Consumer theory
Good (economics and accounting)
References
Goods (economics)
Utility function types | Independent goods | [
"Physics"
] | 188 | [
"Materials",
"Goods (economics)",
"Matter"
] |
7,291,166 | https://en.wikipedia.org/wiki/Strange%20B%20meson | The meson is a meson composed of a bottom antiquark and a strange quark. Its antiparticle is the meson, composed of a bottom quark and a strange antiquark.
B–B oscillations
Strange B mesons are noted for their ability to oscillate between matter and antimatter via a box diagram, with an oscillation frequency measured by the CDF experiment at Fermilab.
That is, a meson composed of a bottom quark and a strange antiquark (the strange B meson) can spontaneously change into a bottom antiquark and strange quark pair (its antiparticle), and vice versa.
On 25 September 2006, Fermilab announced the discovery of the previously only-theorized Bs meson oscillation. According to Fermilab's press release:
Ronald Kotulak, writing for the Chicago Tribune, called the particle "bizarre" and stated that the meson "may open the door to a new era of physics" with its proven interactions with the "spooky realm of antimatter".
Better understanding of the meson is one of the main objectives of the LHCb experiment conducted at the Large Hadron Collider. On 24 April 2013, CERN physicists in the LHCb collaboration announced that they had observed CP violation in the decay of strange mesons for the first time. Scientists found the Bs meson decaying into two muons for the first time, with Large Hadron Collider experiments casting doubt on the scientific theory of supersymmetry.
CERN physicist Tara Shears described the CP violation observations as "verification of the validity of the Standard Model of physics".
Rare decays
The rare decays of the Bs meson are an important test of the Standard Model. The branching fraction of the strange b-meson to a pair of muons is very precisely predicted with a value of Br(Bs→ μ+μ−)SM = (3.66 ± 0.23) × 10−9. Any variation from this rate would indicate possible physics beyond the Standard Model, such as supersymmetry. The first definitive measurement was made from a combination of LHCb and CMS experiment data:
This result is compatible with the Standard Model and set limits on possible extensions.
See also
B meson
B– oscillation
References
External links
Mesons
Strange quark
B physics | Strange B meson | [
"Physics"
] | 494 | [
"Particle physics stubs",
"Particle physics"
] |
7,292,579 | https://en.wikipedia.org/wiki/WASP-1b | WASP-1b is an extrasolar planet orbiting the star WASP-1 located 1,300 light-years away in the constellation Andromeda.
Orbit and mass
The planet's mass and radius indicate that it is a gas giant with a similar bulk composition to Jupiter. Unlike Jupiter, but similar to many other planets detected around other stars, WASP-1b is located very close to its star, and belongs to the class of planets known as hot Jupiters.
WASP-1b was discovered via the transit method by SuperWASP, for which the star and planet are named. Follow-up radial velocity measurements confirmed the presence of an unseen companion, and allowed for the mass of WASP-1b to be determined.
In 2018, it was discovered via observations of the Rossiter-McLaughlin effect that the orbit of WASP-1b is strongly misaligned with the rotational axis of the star, by 79.0 degrees, making it a nearly "polar" orbit.
See also
HD 209458 b
WASP-2b
References
Further reading
External links
BBC News article
WASP Planets
Exoplanets discovered by WASP
Exoplanets discovered in 2006
Giant planets
Hot Jupiters
Transiting exoplanets
Andromeda (constellation) | WASP-1b | [
"Astronomy"
] | 246 | [
"Andromeda (constellation)",
"Constellations"
] |
7,292,714 | https://en.wikipedia.org/wiki/WASP-2b | WASP-2b is an extrasolar planet orbiting the star WASP-2 located about 500 light years away in the constellation of Delphinus. It was discovered via the transit method, and then follow up measurements using the radial velocity method confirmed that WASP-2b was a planet. The planet's mass and radius indicate that it is a gas giant with a similar bulk composition to Jupiter. Unlike Jupiter, but similar to many other planets detected around other stars, WASP-2b is located very close to its star, and belongs to the class of planets known as hot Jupiters. A 2008 study concluded that the WASP-2b system (among others) is a binary star system allowing even more accurate determination of stellar and planetary parameters.
See also
HD 209458 b
WASP-1b
SuperWASP
References
External links
NewScientistSpace: Third 'puffed-up planet' discovered
BBC News article
WASP Planets
Exoplanets discovered by WASP
Exoplanets discovered in 2006
Giant planets
Hot Jupiters
Transiting exoplanets
Delphinus | WASP-2b | [
"Astronomy"
] | 215 | [
"Delphinus",
"Constellations"
] |
16,166,875 | https://en.wikipedia.org/wiki/Design%20Eye%20Position | In the design of human-machine user interfaces (HMIs or UIs), the Design Eye Position (DEP) is the position from which the user is intended to view the workstation for an optimal view of the visual interface. The Design Eye Position represents the ideal but notional location of the operator's view, and is usually expressed as a monocular point midway between the pupils of the average user. The DEP may also allow for a standardisation of monocular and binocular "Field of View" and may be integrated into the CAD/CAM design system used to define the workstation build.
The DEP is particularly important in those operator workstations, such as the cockpit of a military fast jet, where an accurate reading of information and symbols on displays may be critical. When designing such user interfaces, the DEP is used as the reference point for the location of items (e.g., displays or controls) within the interface.
Military Aviation
With collimated displays, such as the cockpit Head Up Display, the projected symbology is aligned very precisely with the outside world to allow for precise delivery of weapons and also for safe landing. Unless located at the Design Eye Position, the pilot cannot see the symbology as it is effectively focussed at infinity. Similarly, Head Down Displays will usually be angled precisely towards the DEP so that all symbols may be equally visible to the pilot without parallax or other display distortion errors.
Pilots who are below or above the 50th percentile for sitting height, i.e. not of average stature, may need to adjust the seat in order to attain the DEP, even if this means compromising their optimal reach envelope. This is why, for example, rudder pedals may need to be adjustable.
See also
Overillumination
References
Design | Design Eye Position | [
"Engineering"
] | 372 | [
"Design stubs",
"Design"
] |
16,166,901 | https://en.wikipedia.org/wiki/Sclarene | Sclarene is a diterpene present in the foliage of Podocarpus hallii.
References
Diterpenes
Decalins
Polyenes
Vinylidene compounds | Sclarene | [
"Chemistry"
] | 37 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
16,170,200 | https://en.wikipedia.org/wiki/Homing%20%28biology%29 | Homing is the inherent ability of an animal to navigate towards an original location through unfamiliar areas. This location may be a home territory or a breeding spot.
Uses
Homing abilities can be used to find the way back home during a migration. The term is often used in reference to returning to a breeding spot seen years before, as in the case of salmon. Homing abilities can also be used to return to familiar territory when displaced over long distances, such as with the red-bellied newt.
True navigation
Some animals use true navigation for their homing. This means in familiar areas they will use landmarks such as roads, rivers or mountains when flying, or islands and other landmarks while swimming. However, this only works in familiar territory. Homing pigeons, for example, will often navigate using familiar landmarks, such as roads. Sea turtles will also use landmarks to orient themselves.
Magnetic orientation
Many animals use magnetic orientation based on the Earth's magnetic field to find their way home. This is usually used together with other methods, such as a sun compass, as in bird migration and in the case of turtles. This is also commonly used when no other methods are available, as in the case of lobsters, which live underwater, and mole rats, which home through their burrows.
Celestial orientation
Celestial orientation, navigation using the stars, is commonly used for homing. Displaced marbled newts, for example, can only home when stars are visible.
Olfaction
There is evidence that olfaction, or smell, is used in homing with several salamanders, such as the red-bellied newt. Olfaction is also necessary for the homing of salmon.
Topographic memory
Topographic memory, memory of the contours surrounding the destination, is one common method for navigation. It is mainly used by simpler animals, such as molluscs. Limpets use it to find their way back to the home scar, although whether this is true homing has been disputed.
See also
Natal homing
Philopatry
Homing endonuclease gene
References
Behavior
Animal migration | Homing (biology) | [
"Biology"
] | 425 | [
"Ethology",
"Behavior",
"Animal migration"
] |
16,172,087 | https://en.wikipedia.org/wiki/Dwarf%20manatee | The dwarf manatee (Trichechus pygmaeus, or mistakenly Trichechus bernhardi) is a disputed species of manatee allegedly found in the freshwater habitats of the Amazon, though restricted to one tributary of the Aripuanã River. According to Marc van Roosmalen, the scientist who proposed it as a new species, it lives in shallow, fast-running water, and feeds on different species of aquatic plants from the Amazonian manatee, which prefers deeper, slower-moving waters and the plants found there. The dwarf manatee reportedly migrates upriver during the rainy season when the river floods to the headwaters and shallow ponds. Based on its small range, the dwarf manatee is suggested to be considered critically endangered if indeed a separate species, but is not recognized by the IUCN.
The dwarf manatee is described as typically being about long and weighing about , which would make it the smallest extant sirenian. It is supposedly very dark, almost black, with a white patch on the abdomen. It may actually represent an immature Amazonian manatee, but it is reported to differ in proportions and colour. It is, however, at least very closely related, as mtDNA has failed to reveal any difference between the two. Mutation rates in manatees – if the dwarf manatee is distinct – suggests a divergence time of less than 485,000 years. Daryl Domning, a Smithsonian Institution research associate and one of the world's foremost experts on manatee evolution, has stated that the DNA evidence actually proves that these are merely immature Amazonian manatees.
Taxonomy
The original description was submitted for publication to Nature, but it was rejected, and it was eventually published in the Biodiversity Journal in 2015.
References
External links
Tetrapodzoology - (Multiple new species of large, living mammal (part II))
Wildlife Extra
Manatees
Controversial mammal taxa
Mammals described in 2015 | Dwarf manatee | [
"Biology"
] | 386 | [
"Biological hypotheses",
"Controversial mammal taxa",
"Controversial taxa"
] |
16,173,581 | https://en.wikipedia.org/wiki/IGK%40 | Immunoglobulin kappa locus, also known as IGK@, is a region on the p arm of human chromosome 2, region 11.2 (2p11.2), that contains genes for the kappa (κ) light chains of antibodies (or immunoglobulins).
In humans the κ chain is coded for by V (variable), J (joining) and C (constant) genes in this region. These genes undergo V(D)J recombination to generate a diverse repertoire of immunoglobulins.
Genes
The immunoglobulin kappa locus contains the following genes:
IGKC: immunoglobulin kappa constant
IGKJ@: immunoglobulin kappa joining group
IGKJ1, IGKJ2, IGKJ3, IGKJ4, IGKJ5
IGKV@: immunoglobulin kappa variable group
IGKV1-5, IGKV1-6, IGKV1-8, IGKV1-9, IGKV1-12, IGKV1-16, IGKV1-17, IGKV1-27, IGKV1-33
IGKV1D-8, IGKV1D-12, IGKV1D-13, IGKV1D-16, IGKV1D-17, IGKV1D-22, IGKV1D-27, IGKV1D-32, IGKV1D-33, IGKV1D-39, IGKV1D-43
IGKV2-24, IGKV2-28, IGKV2-30, IGKV2-40
IGKV2D-26, IGKV2D-28, IGKV2D-29, IGKV2D-30, IGKV2D-40
IGKV3-11, IGKV3-15, IGKV3-20
IGKV3D-7, IGKV3D-11, IGKV3D-20
IGKV4-1
IGKV5-2
and a number of non-functional and pseudogenes
References
Further reading
Antibodies
Proteins | IGK@ | [
"Chemistry"
] | 526 | [
"Biomolecules by chemical classification",
"Protein stubs",
"Biochemistry stubs",
"Molecular biology",
"Proteins"
] |
16,173,852 | https://en.wikipedia.org/wiki/List%20of%20installation%20software | The following is a list of applications for building installation programs, organized by platform support.
Cross-platform
Linux
Windows
macOS
AmigaOS
See also
List of software package management systems
References
Installation software | List of installation software | [
"Technology"
] | 39 | [
"Computing-related lists",
"Lists of software"
] |
16,174,259 | https://en.wikipedia.org/wiki/Julie%20A.%20Leary | Julie A. Leary is a emeritus professor in the department of molecular and cellular biology at University of California, Davis and the department of chemistry.
Early life and education
Leary obtained a PhD in chemistry from the Massachusetts Institute of Technology in 1985, under the direction of Klaus Biemann.
Career and research interests
Proteomics
Glycomics
Leary served as a Member-at-Large for Education for the American Society for Mass Spectrometry (2001-2002).
Awards
2000 Biemann Medal
2010 Fellow of the American Association for the Advancement of Science (AAAS)
References
21st-century American chemists
Massachusetts Institute of Technology alumni
Mass spectrometrists
Living people
University of California, Davis faculty
Year of birth missing (living people) | Julie A. Leary | [
"Physics",
"Chemistry"
] | 173 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
16,174,280 | https://en.wikipedia.org/wiki/Isoparametric%20manifold | In Riemannian geometry, an isoparametric manifold is a type of (immersed) submanifold of Euclidean space whose normal bundle is flat and whose principal curvatures are constant along any parallel normal vector field. The set of isoparametric manifolds is stable under the mean curvature flow.
Examples
A straight line in the plane is an obvious example of an isoparametric manifold. Any affine subspace of Euclidean n-dimensional space is also an example, since the principal curvatures of its shape operator are zero.
Another simple example of an isoparametric manifold is a sphere in Euclidean space.
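As a hedged check of the sphere example (sign conventions vary between authors), one can compute the shape operator directly:

```latex
% For the round sphere S^{n-1}(r) \subset \mathbb{R}^n, the position map
% supplies a unit normal field \xi(x) = x/r, so the shape operator is
\[
  A_\xi \;=\; -\,\mathrm{d}\xi \;=\; -\tfrac{1}{r}\,\mathrm{Id}.
\]
% Every principal curvature equals -1/r (or +1/r with the opposite
% orientation), constant along any parallel normal field; a codimension-one
% normal bundle is automatically flat, so the sphere is isoparametric.
```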
Another example is as follows. Suppose that G is a Lie group and G/H is a symmetric space with canonical decomposition
g = h ⊕ p
of the Lie algebra g of G into a direct sum (orthogonal with respect to the Killing form) of the Lie algebra h of H with a complementary subspace p. Then a principal orbit of the adjoint representation of H on p is an isoparametric manifold in p. Non-principal orbits are examples of the so-called submanifolds with constant principal curvatures. Actually, by Thorbergsson's theorem any complete, full and irreducible isoparametric submanifold of codimension > 2 is an orbit of an s-representation, i.e. an H-orbit as above where the symmetric space G/H has no flat factor.
The theory of isoparametric submanifolds is deeply related to the theory of holonomy groups. Actually, any isoparametric submanifold is foliated by the holonomy tubes of a submanifold with constant principal curvatures i.e. a focal submanifold. The paper "Submanifolds with constant principal curvatures and normal holonomy groups" is a very good introduction to such theory. For more detailed explanations about holonomy tubes and focalizations see the book Submanifolds and Holonomy.
References
See also
Isoparametric function
Riemannian geometry
Manifolds | Isoparametric manifold | [
"Mathematics"
] | 410 | [
"Topological spaces",
"Topology",
"Manifolds",
"Space (mathematics)"
] |
16,174,329 | https://en.wikipedia.org/wiki/Gene%20Ball | Gene Ball is a computer science researcher and computer programmer.
Ball obtained a bachelor's degree from the University of Oklahoma, and attended graduate school at the University of Rochester, completing a master's degree and finishing his doctorate in 1982. While at Rochester, he met Rick Rashid, and together they created Alto Trek, one of the earlier networked multiplayer computer games.
In 1979, along with Rashid, Ball worked as a researcher at Carnegie Mellon University. In 1983, he left academia for two years, spending 1983 and 1984 designing software at Formative Technologies. In 1985, he became an assistant professor at the University of Delaware at Newark.
From 1991 until 2001 he was a researcher at Microsoft, leading the Persona Project, which focused on developing lifelike computer characters that could conversationally interact with users.
References
Contributor biography in Emotions in Humans and Artifacts, by Robert Trappl, Paolo Petta, and Sabine Payr.
University of Delaware faculty
Microsoft employees
University of Rochester alumni
University of Oklahoma alumni
Carnegie Mellon University faculty
Living people
Year of birth missing (living people) | Gene Ball | [
"Technology"
] | 213 | [
"Computing stubs",
"Computer specialist stubs"
] |
16,174,439 | https://en.wikipedia.org/wiki/Electronic%20effect | An electronic effect influences the structure, reactivity, or properties of a molecule but is neither a traditional bond nor a steric effect. In organic chemistry, the term stereoelectronic effect is also used to emphasize the relation between the electronic structure and the geometry (stereochemistry) of a molecule.
The term polar effect is sometimes used to refer to electronic effects, but also may have the more narrow definition of effects resulting from non-conjugated substituents.
Types
Redistributive effects
Induction is the redistribution of electron density through a traditional sigma-bonded structure according to the electronegativity of the atoms involved. The inductive effect drops across every sigma bond involved, limiting its effect to only a few bonds.
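The rapid per-bond falloff can be made concrete with a toy model. The attenuation factor below is an illustrative assumption, not a measured or standard value:

```python
# Toy model of inductive attenuation (the 0.4 falloff per sigma bond is
# an assumed round number chosen only to illustrate the rapid decay).

ATTENUATION_PER_BOND = 0.4

def inductive_effect(strength: float, bonds_away: int) -> float:
    """Relative inductive influence felt `bonds_away` sigma bonds from
    a substituent of the given initial strength."""
    return strength * ATTENUATION_PER_BOND ** bonds_away

for n in range(1, 6):
    print(n, round(inductive_effect(1.0, n), 4))
# 1: 0.4, 2: 0.16, 3: 0.064, 4: 0.0256, 5: 0.0102 -- a few bonds suffice
# to make the effect negligible, as the paragraph above notes.
```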
Conjugation is a redistribution of electron density similar to induction but transmitted through interconnected pi-bonds. Conjugation is not only affected by electronegativity of the connected atoms but also affected by the position of electron lone pairs with respect to the pi-system. Electronic effects can be transmitted throughout a pi-system allowing their influence to extend further than induction.
In the context of electronic redistribution, an electron-withdrawing group (EWG) draws electrons away from a reaction center. When this center is an electron rich carbanion or an alkoxide anion, the presence of the electron-withdrawing substituent has a stabilizing effect. Similarly, an electron-releasing group (ERG) or electron-donating group (EDG) releases electrons into a reaction center and as such stabilizes electron deficient carbocations.
In electrophilic aromatic substitution and nucleophilic aromatic substitution, substituents are divided into activating groups and deactivating groups. Resonance electron-releasing groups are classed as activating, while resonance electron-withdrawing groups are classed as deactivating.
Non-redistributive effects
Hyperconjugation is the stabilizing interaction that results from the interaction of the electrons in a sigma bond (usually C-H or C-C) with an adjacent empty (or partially filled) non-bonding p-orbital or antibonding π orbital or an antibonding sigma orbital to give an extended molecular orbital that increases the stability of the system. Hyperconjugation can be used to explain phenomena such as the gauche effect and anomeric effect.
Orbital symmetry is important when dealing with orbitals that contain directional components, like p and d orbitals. An example of such an effect is the square planar geometry of low-spin d8 transition metal complexes. These complexes adopt a square planar geometry due to the directionality of the metal center's d orbitals, despite the lower steric congestion of a tetrahedral structure. This is just one of many varied examples, including aspects of pericyclic reactions such as the Diels-Alder reaction, among others.
Electrostatic interactions include both attractive and repulsive forces associated with the build-up of charge in a molecule. Electrostatic interactions are generally too weak to be considered traditional bonds or are prevented from forming a traditional bond, possibly by a steric effect. A bond is usually defined as two atoms approaching closer than the sum of their van der Waals radii. Hydrogen bonding borders on being an actual "bond" and an electrostatic interaction. While a sufficiently strong attractive electrostatic interaction is considered a "bond", a repulsive electrostatic interaction is always an electrostatic effect regardless of strength. An example of a repulsive effect is a molecule contorting to minimize the coulombic interactions of atoms that hold like charges.
Electronic spin state at its simplest describes the number of unpaired electrons in a molecule. Most molecules, including the proteins, carbohydrates, and lipids that make up the majority of life, have no unpaired electrons even when charged. Such molecules are called singlet molecules, since their paired electrons have only one spin state. In contrast, dioxygen under ambient conditions has two unpaired electrons. Dioxygen is a triplet molecule, since the two unpaired electrons allow for three spin states. The reaction of a triplet molecule with a singlet molecule is spin-forbidden in quantum mechanics. This is the major reason there is a very high reaction barrier for the extremely thermodynamically favorable reaction of singlet organic molecules with triplet oxygen. This kinetic barrier prevents life from bursting into flames at room temperature.
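The singlet/triplet counting above is the spin multiplicity formula 2S + 1, with S equal to half the number of unpaired electrons; a minimal check:

```python
# Spin multiplicity 2S + 1, where S = n_unpaired / 2.

def multiplicity(n_unpaired: int) -> int:
    return n_unpaired + 1  # 2 * (n_unpaired / 2) + 1 simplifies to n + 1

print(multiplicity(0))  # 1 -> singlet (most biomolecules)
print(multiplicity(2))  # 3 -> triplet (dioxygen under ambient conditions)
```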
Electronic spin states are more complex for transition metals. To understand the reactivity of transition metals, it is essential to understand the concept of d electron configuration as well as high-spin and low-spin configuration. For example, a low-spin d8 transition metal complex is usually square planar substitutionally inert with no unpaired electrons. In contrast, a high-spin d8 transition metal complex is usually octahedral, substitutionally labile, with two unpaired electrons.
Jahn–Teller effect is the geometrical distortion of non-linear molecules under certain situations. Any non-linear molecule with a degenerate electronic ground state will undergo a geometrical distortion that removes that degeneracy. This has the effect of lowering the overall energy. The Jahn–Teller distortion is especially common in certain transition metal complexes; for example, copper(II) complexes with 9 d electrons.
Trans influence is the influence that a ligand in a square or octahedral complex has on the bond to the ligand trans to it. It is caused by electronic effects, and manifests itself as the lengthening of the trans bonds and as an effect on the overall energy of the complex.
Comparison with steric effects
The structure, properties, and reactivity of a molecule are dependent on straightforward bonding interactions including covalent bonds, ionic bonds, hydrogen bonds, and other forms of bonding. This bonding supplies a basic molecular skeleton that is modified by repulsive forces generally considered steric effects. Basic bonding and steric effects are at times insufficient to explain many structures, properties, and reactivity. Thus, steric effects are often contrasted and complemented by electronic effects, implying the influence of effects such as induction, conjugation, orbital symmetry, electrostatic interactions, and spin state. There are more esoteric electronic effects, but these are among the most important when considering chemical structure and reactivity.
A special computational procedure has been developed to separate the steric and electronic effects of an arbitrary group in a molecule and to reveal their influence on structure and reactivity.
References
Physical organic chemistry | Electronic effect | [
"Chemistry"
] | 1,330 | [
"Physical organic chemistry"
] |
16,174,922 | https://en.wikipedia.org/wiki/Energy%20subsidy | Energy subsidies are measures that keep prices for customers below market levels, or for suppliers above market levels, or reduce costs for customers and suppliers. Energy subsidies may be direct cash transfers to suppliers, customers, or related bodies, as well as indirect support mechanisms, such as tax exemptions and rebates, price controls, trade restrictions, and limits on market access.
During FY 2016–22, most US federal subsidies were for renewable energy producers (primarily biofuels, wind, and solar), low-income households, and energy-efficiency improvements. During FY 2016–22, nearly half (46%) of federal energy subsidies were associated with renewable energy, and 35% were associated with energy end uses. Federal support for renewable energy of all types more than doubled, from $7.4 billion in FY 2016 to $15.6 billion in FY 2022.
The International Renewable Energy Agency tracked some $634 billion in energy-sector subsidies in 2020, and found that around 70% were fossil fuel subsidies. About 20% went to renewable power generation, 6% to biofuels and just over 3% to nuclear.
Overview of all sources of energy
If governments choose to subsidize one particular source of energy more than another, that choice can impact the environment. That distinguishing factor informs the below discussion on all energy subsidies of all sources of energy in general.
Main arguments for energy subsidies are:
Security of supply – subsidies are used to ensure adequate domestic supply by supporting indigenous fuel production in order to reduce import dependency, or supporting overseas activities of national energy companies, or to secure the electricity grid.
Environmental and health improvement – subsidies are used to improve health by reducing air pollution, and to fulfill international climate pledges. For example the IEA says the purchase price of heat pumps should be subsidized.
Economic benefits – subsidies in the form of reduced prices are used to stimulate particular economic sectors or segments of the population, e.g. alleviating poverty and increasing access to energy in developing countries. With regards to fossil fuel prices in particular, Ian Parry, the lead author of a 2021 IMF report said, "Some countries are reluctant to raise energy prices because they think it will harm the poor. But holding down fossil fuel prices is a highly inefficient way to help the poor, because most of the benefits accrue to wealthier households. It would be better to target resources towards helping poor and vulnerable people directly."
Employment and social benefits – subsidies are used to maintain employment, especially in periods of economic transition. In 2021, with regards to fossil fuel prices in particular, Ipek Gençsü, at the Overseas Development Institute, said: "[Subsidy reform] requires support for vulnerable consumers who will be impacted by rising costs, as well for workers in industries which simply have to shut down. It also requires information campaigns, showing how the savings will be redistributed to society in the form of healthcare, education and other social services. Many people oppose subsidy reform because they see it solely as governments taking something away, and not giving back."
Main arguments against energy subsidies are:
Some energy subsidies, such as fossil fuel subsidies (oil, coal, and gas subsidies), counter the goal of sustainable development: they may lead to higher consumption and waste, exacerbate the harmful effects of energy use on the environment, create a heavy burden on government finances, weaken the potential for economies to grow, and undermine private and public investment in the energy sector. Also, most benefits from fossil fuel subsidies in developing countries go to the richest 20% of households.
Impede the expansion of distribution networks and the development of more environmentally benign energy technologies, and do not always help the people that need them most.
The study conducted by the World Bank finds that subsidies to the large commercial businesses that dominate the energy sector are not justified. However, under some circumstances it is reasonable to use subsidies to promote access to energy for the poorest households in developing countries. Energy subsidies should encourage access to the modern energy sources, not to cover operating costs of companies. The study conducted by the World Resources Institute finds that energy subsidies often go to capital intensive projects at the expense of smaller or distributed alternatives.
The main types of energy subsidy are listed below ("Fossil-fuel subsidies generally take two forms. Production subsidies...[and]...consumption subsidies."):
Direct financial transfers – grants to suppliers; grants to customers; low-interest or preferential loans to suppliers.
Preferential tax treatments – rebates or exemption on royalties, duties, supplier levies and tariffs; tax credit; accelerated depreciation allowances on energy supply equipment.
Trade restrictions – quota, technical restrictions and trade embargoes.
Energy-related services provided by government at less than full cost – direct investment in energy infrastructure; public research and development.
Regulation of the energy sector – demand guarantees and mandated deployment rates; price controls; market-access restrictions; preferential planning consent and controls over access to resources.
Failure to impose external costs – environmental externality costs; energy security risks and price volatility costs.
Depletion Allowance – allows a deduction from gross income of up to ~27% for the depletion of exhaustible resources (oil, gas, minerals).
Overall, energy subsidies require coordination and integrated implementation, especially in light of globalization and increased interconnectedness of energy policies, thus their regulation at the World Trade Organization is often seen as necessary.
Support for new technology
Early support of solar power by the United States and Germany greatly helped renewable energy commercialization to reduce greenhouse gas emissions worldwide, but may not have helped local manufacturing. Support for nuclear fusion continues, although it is not expected to be commercially viable in time to contribute to countries' net zero targets. Energy storage research is also supported.
Fossil fuel subsidies
See also
Fossil fuel subsidies
Corporate welfare
Building-integrated photovoltaics
Government subsidies
Feed-in tariff
Gasoline subsidies
Renewable Energy Certificates
Renewable energy commercialization
Renewable energy payments
Stranded assets
Financial incentives for photovoltaics
References
Bibliography
External links
Fossil Fuel Subsidy Tracker- a collaboration between the Organisation for Economic Co-operation and Development (OECD) and the International Institute for Sustainable Development (IISD)
Global Subsidies Initiative - a project of the International Institute for Sustainable Development
OECD-IEA analysis of fossil fuels and other support - OECD
European countries spend billions a year on fossil fuel subsidies, survey shows (2017)
Energy economics
Renewable energy commercialization
Subsidies | Energy subsidy | [
"Environmental_science"
] | 1,318 | [
"Energy economics",
"Environmental social science"
] |
16,175,342 | https://en.wikipedia.org/wiki/Divina%20proportione | Divina proportione (15th century Italian for Divine proportion), later also called De divina proportione (converting the Italian title into a Latin one) is a book on mathematics written by Luca Pacioli and illustrated by Leonardo da Vinci, completed by February 9th, 1498 in Milan and first printed in 1509. Its subject was mathematical proportions (the title refers to the golden ratio) and their applications to geometry, to visual art through perspective, and to architecture. The clarity of the written material and Leonardo's excellent diagrams helped the book to achieve an impact beyond mathematical circles, popularizing contemporary geometric concepts and images.
Some of its content was plagiarised from an earlier book by Piero della Francesca, De quinque corporibus regularibus.
Contents of the book
The book consists of three separate manuscripts, which Pacioli worked on between 1496 and 1498. He credits Fibonacci as the main source for the mathematics he presents.
Compendio divina proportione
The first part, Compendio divina proportione (Compendium on the Divine Proportion), studies the golden ratio from a mathematical perspective (following the relevant work of Euclid), giving mystical and religious meanings to this ratio, in seventy-one chapters. Pacioli points out that golden rectangles can be inscribed in an icosahedron, and in the fifth chapter, gives five reasons why the golden ratio should be referred to as the "Divine Proportion":
Its value represents divine simplicity.
Its definition invokes three lengths, symbolizing the Holy Trinity.
Its irrationality represents God's incomprehensibility.
Its self-similarity recalls God's omnipresence and invariability.
Its relation to the dodecahedron, which represents the quintessence.
It also contains a discourse on the regular and semiregular polyhedra, as well as a discussion of the use of geometric perspective by painters such as Piero della Francesca, Melozzo da Forlì and Marco Palmezzano.
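For reference, the proportion itself in modern notation (standard mathematics, not taken from Pacioli's text):

```latex
% \varphi is the positive root of x^2 = x + 1:
\[
  \varphi \;=\; \frac{1+\sqrt{5}}{2} \;\approx\; 1.6180339\ldots,
  \qquad \varphi^2 = \varphi + 1, \qquad \tfrac{1}{\varphi} = \varphi - 1.
\]
% The defining proportion a : b = (a + b) : a involves three lengths,
% which Pacioli reads as a symbol of the Trinity, and the irrationality
% of \sqrt{5} underlies his third reason above.
```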
Trattato dell'architettura
The second part, Trattato dell'architettura (Treatise on Architecture), discusses the ideas of Vitruvius (from his De architectura) on the application of mathematics to architecture in twenty chapters. The text compares the proportions of the human body to those of artificial structures, with examples from classical Greco-Roman architecture.
Libellus in tres partiales divisus
The third part, Libellus in tres partiales divisus (Book divided into three parts), is a translation into Italian of Piero della Francesca's Latin book De quinque corporibus regularibus [On [the] Five Regular Solids]. It does not credit della Francesca for this material, and in 1550 Giorgio Vasari wrote a biography of della Francesca, in which he accused Pacioli of plagiarism and claimed that he stole della Francesca's work on perspective, on arithmetic and on geometry. Because della Francesca's book had been lost, these accusations remained unsubstantiated until the 19th century, when a copy of della Francesca's book was found in the Vatican Library and a comparison confirmed that Pacioli had copied it.
Illustrations
After these three parts are appended two sections of illustrations, the first showing twenty-three capital letters drawn with a ruler and compass by Pacioli and the second with some sixty illustrations in woodcut after drawings by Leonardo da Vinci. Leonardo drew the illustrations of the regular solids while he lived with and took mathematics lessons from Pacioli. Leonardo's drawings are probably the first illustrations of skeletonic solids which allowed an easy distinction between front and back.
Another collaboration between Pacioli and Leonardo existed: Pacioli planned a book of mathematics and proverbs called De Viribus Quantitatis (The powers of numbers) which Leonardo was to illustrate, but Pacioli died before he could publish it.
History
Pacioli produced three manuscripts of the treatise by different scribes. He gave the first copy with a dedication to the Duke of Milan, Ludovico il Moro; this manuscript is now preserved in Switzerland at the Bibliothèque de Genève in Geneva. A second copy was donated to Galeazzo da Sanseverino and now rests at the Biblioteca Ambrosiana in Milan. On 1 June 1509 the first printed edition was published in Venice by Paganino Paganini; it has since been reprinted several times.
The book was displayed as part of an exhibition in Milan between October 2005 and October 2006 together with the Codex Atlanticus. The "M" logo used by the Metropolitan Museum of Art in New York was adapted from one in Divina proportione.
See also
List of works by Leonardo da Vinci
Frederik Macody Lund
Samuel Colman
References
Works cited
External links
Full text of original edition
Full text of 1509 edition
Title page of a reprint in Vienna, 1889
A video featuring a 1509 edition on display at Stevens Institute of Technology
Full text of original edition (1498) in English
1509 books
History of geometry
History of mathematics
Mathematics books
Mathematics manuscripts
Medieval literature
Leonardo da Vinci | Divina proportione | [
"Mathematics"
] | 1,060 | [
"History of geometry",
"Geometry"
] |
16,175,377 | https://en.wikipedia.org/wiki/List%20of%20pest-repelling%20plants | This list of pest-repelling plants includes plants used for their ability to repel insects, nematodes, and other pests. They have been used in companion planting as pest control in agricultural and garden situations, and in households.
Certain plants have shown effectiveness as topical repellents for haematophagous insects, such as the use of lemon eucalyptus in PMD, but incomplete research and misunderstood applications can produce variable results.
The essential oils of many plants are also well known for their pest-repellent properties. Oils from the families Lamiaceae (mints), Poaceae (true grasses), and Pinaceae (pines) are common haematophagous insect repellents worldwide.
Table of pest-repelling plants
Plants that can be planted or used fresh to repel pests include:
References
See also
Pelargonium citrosum
Insect repellents
pest-repelling plants | List of pest-repelling plants | [
"Chemistry",
"Biology"
] | 200 | [
"Plant toxin insecticides",
"Lists of plants",
"Chemical ecology",
"Plants",
"Lists of biota"
] |
16,176,024 | https://en.wikipedia.org/wiki/Gonocyte | Gonocytes are the precursors of spermatogonia that differentiate in the testis from primordial germ cells around week 7 of embryonic development and exist up until the postnatal period, when they become spermatogonia. Despite some uses of the term to refer to the precursors of oogonia, it is generally restricted to male germ cells. Germ cells operate as vehicles of inheritance by transferring genetic and epigenetic information from one generation to the next. Male fertility is centered around the continual production of spermatogonia, which depends upon a high stem cell population. Thus, the function and quality of a differentiated sperm cell depend upon the capacity of its originating spermatogonial stem cell (SSC).
Gonocytes represent the germ cells undergoing the successive, short-term and migratory stages of development. This occurs between the time they inhabit the forming gonads on the genital ridge to the time they migrate to the basement membrane of the seminiferous cords. Gonocyte development consists of several phases of cell proliferation, differentiation, migration and apoptosis. The abnormal development of gonocytes leads to fertility-related diseases.
They are also identified as prespermatogonia, prospermatogonia and primitive germ cells, although gonocyte is most common.
History
Gonocytes are described as large and spherical, with a prominent nucleus and two nucleoli. The term gonocyte was coined in 1957 by Canadian scientists Yves Clermont and Bernard Perey. They considered it essential to study the origin of spermatogonia and carried out a study on rats to investigate this. In 1987, Clermont referred to gonocytes as the cells that differentiate into type A spermatogonia, which differentiate into type B spermatogonia and spermatocytes.
Very few studies used gonocytes to also refer to the female germ cells in the ovarium primordium. The specification of gonocytes to be confined to male germ cells occurred after foundational differences between the mechanisms of male and female fetal germ cells were uncovered. Some scientists prefer the terms “prospermatogonia” and “prespermatogonia” for their functional clarity.
Later studies found that the process from primordial germ cell to spermatogonial development is gradual, without clear gene expression markers to distinguish the precursor cells. A 2006 study found that some gonocytes differentiate straight into committed spermatogonia (type B) rather than spermatogonial stem cells (type A).
Origin of Spermatogonial Stem Cell Pool
Gonocytes are long-lived precursor germ cells responsible for the production of spermatogonial stem cells (SSCs). Gonocytes relate to both fetal and neonatal germ cells from the point at which they enter the testis primordium until they reach the basement membrane at the seminiferous cords and differentiate. At the time of gastrulation, certain cells are set aside for later gamete development. These cells are called post migratory germ cells (PGCs). The gonocyte population develops from the post migratory germ cells (PGCs) around embryonic day (ED) 15. At this point of development, PGCs become dormant and remain inactivated until birth. Shortly after birth, the cell cycle resumes and the production of postnatal spermatogonia commences. Gonocytes migrate to the basement membrane to proliferate. Gonocytes that do not migrate undergo apoptosis and are cleared from the seminiferous epithelium. Spermatogonia are formed in infancy and differentiate throughout adult life.
Formation of Spermatogonial Lineage
There are currently two proposed models for the formation of the spermatogonial lineage during neonatal development. Both models theorize that the gonocyte population develops from a subset of post migratory germ cells (PGCs) but differ in the proposed subsets of derived gonocytes. One of the models proposes that the PGCs give rise to a single subset of pluripotent gonocytes that either become SSCs from which progenitors then arise or differentiate into type A spermatogonia directly. The other model proposes that the PGCs give rise to multiple predetermined subsets of gonocytes that produce the foundational SSC pool, the initial progenitor spermatogonial population, and the initial differentiating type A spermatogonia.
Development
The development of germ cells can be divided into two phases. The first phases involves the fetal and neonatal phases of germ cell development that lead to the formation of the SSCs. The second phase is spermatogenesis, which is a cycle of regulated mitosis, meiosis and differentiation (via spermiogenesis) leading to the production of mature spermatozoa, also known as sperm cells.
Gonocytes are functionally present during the first phase of germ cell maturation and development. This period consists of the primordial germ cells (PGC), the initial cells that commence germ cell development in the embryo, and the gonocytes, which after being differentiated from PGCs, undergo regulated proliferation, differentiation, migration and apoptosis to produce the SSCs. Gonocytes therefore correspond to the developmental stages between the PGCs and SSCs.
Formation
Gonocytes are formed from the differentiation of PGCs. Embryonic cells initiate germ cell development in the proximal epiblast located near the extra-embryonic ectoderm by the release of bone morphogenetic protein 4 (BMP4) and BMP8b. These proteins specify embryonic cells into PGCs expressing the genes PRDM1 and PRDM14 at embryonic day (E) 6.25. PGCs positively stained by alkaline phosphatase and expressing Stella are also specified at E7.25. Between E7.5 and E12.5, these PGCs migrate towards the genital ridge, where they form the testicular cords, via the cytokine interactions of the CXCR4 and c-Kit membrane receptors and their ligands SDF1 and SCF respectively. During this migratory period, PGCs undergo epigenetic reprogramming through genome-wide DNA demethylation. Once resident in the genital ridge, these germ cells and surrounding supporting cells undergo sex determination driven by the expression of the SRY gene. It is only after these developmental steps that the germ cells present in the developed testicular cords are identified as gonocytes.
Proliferation
In order to provide for the long-term production of sperm, gonocytes proliferate to populate a pool of SSCs. Once enclosed by Sertoli cells to form the testicular cords, gonocytes undergo a succession of differing fetal and neonatal periods of mitosis, with a phase of quiescence in between. The mitotic activity that occurs in the neonatal period is necessary for the migration of gonocytes to the basement membrane of the seminiferous cords in order to differentiate into SSCs. As many populations of gonocytes are in different stages of development, mitotic and quiescent gonocytes coexist in neonatal developing testes.
Proliferation in fetal and neonatal gonocytes is regulated differently. Retinoic acid (RA), the bioactive metabolite of retinal, is a morphogen shown to modulate fetal gonocyte proliferation. Investigation of fetal gonocyte activity in organ cultures showed that RA slightly stimulates proliferation. Moreover, RA inhibited differentiation by stopping the fetal gonocytes from entering mitotic arrest while simultaneously triggering apoptosis. RA, by decreasing the overall fetal gonocyte population via apoptosis, is speculated to allow the elimination of mutated and dysfunctional germ cells. The activation of protein kinase C by the phorbol ester PMA also decreased fetal gonocyte mitotic activity.
There are a number of factors that influence neonatal gonocyte proliferation, including 17β‐estradiol (E2), leukemia inhibitory factor (LIF), platelet-derived growth factor (PDGF)-BB, and RA. The production of PDGF-BB and E2 by surrounding Sertoli cells activates their respective receptors on neonatal gonocytes, triggering proliferation via an interactive, crosstalk mechanism. The regulation of LIF is speculated to allow gonocytes to become sensitive to Sertoli cell factors that trigger proliferation, such as PDGF-BB and E2. Compared to fetal gonocytes, RA exerts a similar functional role in neonatal gonocytes: it simultaneously stimulates proliferation and apoptosis to regulate the gonocyte and future SSC populations.
Migration
The migration of gonocytes to the basement membrane of the seminiferous cords is necessary for their differentiation into SSCs. This process is regulated by different factors.
Various studies provide comprehensive comparisons between membrane expression of c-Kit and migratory behavior in cells such as PGCs. Although c-Kit expression is evident in only a small fraction of neonatal gonocytes, they also express PDGF receptor beta (PDGFRβ) on their membrane to aid in their migration. Inhibition of PDGF receptors and c-Kit by in vivo treatment with imatinib, an inhibitory drug, interrupted migration, leaving a number of gonocytes centrally located in the seminiferous cords.
ADAM-integrin-tetraspanin complexes, assemblies drawn from several protein families, also mediate gonocyte migration. These complexes consist of various proteins that bind to integrins found on the basement membrane of the seminiferous cords and at locations where spermatogonia normally reside, allowing the gonocyte to migrate and bind to the basement membrane.
Differentiation
The differentiation of gonocytes to SSCs only occurs once the cells have established close contact with the basement membrane in the seminiferous cords. RA is the best-characterised activator of gonocyte differentiation. De novo synthesis of RA involves retinol, the precursor to RA, being transported to the membrane receptor STRA6 by the retinol-binding protein released by Sertoli cells. Binding of retinol to STRA6 endocytoses retinol into the cell, whereby it undergoes oxidation reactions to form RA. RA is also directly transported from the surrounding Sertoli cells or the vasculature. RA internalization triggers a variety of pathways that modulate the differentiation, such as PDGF receptor pathways and the Janus kinase 2 (JAK2) signaling pathway.
Anti-Müllerian hormone (AMH), a glycoprotein gonadal hormone produced by Sertoli cells in early development, is the only hormone to significantly increase the number of successfully differentiated gonocytes.
The timing of differentiation is regulated by NOTCH signaling. The functional components of the NOTCH signaling pathway are expressed and released by both developing and adult Sertoli cells. Activation of the signaling pathway is crucial for gonocyte development as it triggers gonocytes to depart from quiescence and enter differentiation. Overactivation of the pathway effectively suppresses quiescence and drives premature gonocyte differentiation.
Structure of Gonocytes
Gonocytes are large cells with a spherical euchromatic nucleus, two nucleoli and a surrounding, ring-like cytosol. Throughout the majority of their developmental period, gonocytes are structurally supported by the cytoplasmic extensions of surrounding Sertoli cells and are suspended by Sertoli cell nuclei from the basement membrane. Gonocytes are attached to Sertoli cells by gap junctions, desmosome junctions and a number of different cell adhesion molecules such as connexin 43, PB-cadherin and NCAM for regulation of cell-to-cell communication. Gonocytes dissociate from these junctions and migrate so that the basal side of the cell is in close proximity with the basement membrane, where they undergo phenotypic changes and take the appearance of spermatogonia.
Diseases
Dysfunctional development in germ cells plays a significant role in fertility-related diseases. The development of PGCs to gonocytes, and gonocyte differentiation to SSCs is critical for adult fertility and the defective growth often leads to infertility.
Testicular cancer
Testicular germ cell tumors, which occur primarily in young adults, are the consequence of preinvasive cells called carcinoma in situ (CIS). The development of CIS is due to fetal germ cells, such as gonocytes, being arrested in quiescence and unable to differentiate properly. This leads to malignant transformation of the germ cells until an overt germ cell cancer emerges after puberty.
Cryptorchidism
Cryptorchidism, also known as undescended testis, is a common birth defect affecting male genital formation. Individuals diagnosed with cryptorchidism are often at risk of testicular cancer and infertility due to dysfunction in the development of the neonatal germ cells, in particular disruption of the differentiation of gonocytes into adult dark spermatogonia. It is proposed that this dysfunction is a product of heat stress caused by the undescended testes remaining in the abdomen, where their temperature cannot be regulated as it normally is by the scrotum.
References
Developmental biology
Animal reproductive system
Germ cells | Gonocyte | [
"Biology"
] | 2,794 | [
"Behavior",
"Developmental biology",
"Reproduction"
] |
16,177,728 | https://en.wikipedia.org/wiki/Yamazaki%20Mazak%20Corporation | Yamazaki Mazak Corporation is a Japanese machine tool builder based in Oguchi, Japan. In most of the world the company is referred to as Mazak.
History
The company was founded in 1919 in Nagoya by Sadakichi Yamazaki as a small company making pots and pans. During the 1920s it progressed through mat-making machinery to woodworking machinery to metalworking machine tools, especially lathes. The company was part of Japan's industrial buildup before and during World War II, then, like the rest of Japanese industry, was humbled by the war's outcome.
During the 1950s and 1960s, under the founder's sons, Yamazaki revived, and during the 1960s it established itself as an exporter to the American market. During the 1970s and 1980s it established a larger onshore presence in the US, including machine tool-building operations, and since then it has become one of the most important companies in that market and the global machine tool market.
In the 1980s, a European manufacturing plant was established in Worcester, U.K., and a worldwide sales and customer support network was created. Currently, the corporation runs 10 factories worldwide: 5 in Japan, 2 in China, 1 in Singapore, 1 in the US, and 1 in the UK.
Gallery
References
Bibliography
External links
Mazak Global portal to regional sites
Manufacturing companies of Japan
Industrial machine manufacturers
Machine tool builders
Multinational companies headquartered in Japan
Companies based in Aichi Prefecture
Japanese brands
Manufacturing companies established in 1919
Japanese companies established in 1919 | Yamazaki Mazak Corporation | [
"Engineering"
] | 333 | [
"Industrial machine manufacturers",
"Industrial machinery"
] |
16,178,171 | https://en.wikipedia.org/wiki/Gloeobacter | Gloeobacter is a genus of cyanobacteria. It is the sister group to all other photosynthetic cyanobacteria. Gloeobacter's order, Gloeobacterales, is unique among cyanobacteria in not having thylakoids, which are characteristic of all other cyanobacteria and chloroplasts. Instead, the light-harvesting complexes (also called phycobilisomes), which consist of different proteins, sit on the inside of the plasma membrane, facing the cytoplasm. Consequently, the proton gradient in Gloeobacter is created across the plasma membrane, whereas it forms across the thylakoid membrane in other cyanobacteria and chloroplasts.
The whole genomes of G. violaceus (strain PCC 7421) and G. kilaueensis have been sequenced. Many genes for photosystems I and II were found to be missing, likely related to the fact that photosynthesis in these bacteria does not take place in the thylakoid membrane as in other cyanobacteria, but in the plasma membrane.
Description
Gloeobacter violaceus produces several pigments, including chlorophyll a, β-carotene, oscillol diglycoside, and echinenone. The purple coloration is due to the relatively low chlorophyll content. G. kilaueensis grows with a few other bacteria as a purple-colored biofilm around 0.5 mm thick. Cultivated colonies are dark purple, smooth, shiny, and raised. G. kilaueensis is mostly unicellular, capsule-shaped, about 3.5×1.5 μm, and embedded in mucus. The cells divide across their width, stain Gram-negative, and lack vancomycin resistance. They are not motile and do not glide. Growth ceases in complete darkness, so Gloeobacter is very likely obligately photoautotrophic.
Species and distribution
Gloeobacter violaceus was found on a limestone rock in the Swiss canton of Obwalden. G. kilaueensis occurred within a lava cave at the Kilauea caldera on Hawaii. It grew there at a temperature around 30 °C and at very high humidity, with moisture condensing and dripping off the biofilm. Gloeobacter could have split off from the other cyanobacteria between 3.7 and 3.2 billion years ago. The species of Gloeobacter may have branched 280 million years ago. Anthocerotibacter panamensis, found in a sample of hornwort from a rainforest in Panama, also lacks thylakoids. It has very few of the genes required to perform photosynthesis, but is still able to perform it, very slowly. It may have split from Gloeobacter about 1.4 billion years ago. This genus is also a member of the family Gloeobacteraceae or the family Anthocerotibacteraceae within the order Gloeobacterales.
Phylogeny
See also
List of bacteria genera
List of bacterial orders
References
Cyanobacteria genera
Cyanobacteria | Gloeobacter | [
"Biology"
] | 690 | [
"Algae",
"Cyanobacteria"
] |
16,179,698 | https://en.wikipedia.org/wiki/Isrotel%20Tower | The Isrotel Tower is a hotel located on the beachfront of Tel Aviv, Israel.
The tower is 108 meters high, has 29 floors and is operated by the Israeli Isrotel hotel group. A Gvirtzman Architects designed the tower, which was completed in 1966, whilst the main core was completed in the 1980s. The diameter of the structure is 29 meters and the tower is constructed on the site of the Gan Rina Theatre. The hotel consists of 90 suites, whilst the top floors house 62 apartments. The tower is constructed on a narrow pedestal.
The Nakash family purchased the tower for $150 million USD in April 2013.
See also
List of skyscrapers in Israel
Architecture of Israel
Tourism in Israel
References
External links
Isrotel Tower Tel Aviv
Isrotel Tower
Skyscrapers in Tel Aviv
Hotels in Tel Aviv
Residential buildings completed in 1997
Postmodern architecture
Hotels established in 1997
Skyscraper hotels
Residential skyscrapers in Israel
Skyscrapers in Israel | Isrotel Tower | [
"Engineering"
] | 190 | [
"Postmodern architecture",
"Architecture"
] |
16,179,834 | https://en.wikipedia.org/wiki/Rabies%20vaccine | The rabies vaccine is a vaccine used to prevent rabies. There are several rabies vaccines available that are both safe and effective. Vaccinations must be administered prior to rabies virus exposure or within the latent period after exposure to prevent the disease. Transmission of rabies virus to humans typically occurs through a bite or scratch from an infectious animal, but exposure can occur through indirect contact with the saliva from an infectious individual.
Doses are usually given by injection into the skin or muscle. After exposure, the vaccination is typically used along with rabies immunoglobulin. It is recommended that those who are at high risk of exposure be vaccinated before potential exposure. Rabies vaccines are effective in humans and other animals, and vaccinating dogs is very effective in preventing the spread of rabies to humans. A long-lasting immunity to the virus develops after a full course of treatment.
Rabies vaccines may be used safely by all age groups. About 35 to 45 percent of people develop a brief period of redness and pain at the injection site, and 5 to 15 percent of people may experience fever, headaches, or nausea. After exposure to rabies, there is no contraindication to its use, because untreated rabies is virtually 100% fatal.
The first rabies vaccine was introduced in 1885 and was followed by an improved version in 1908. Over 29 million people worldwide receive human rabies vaccine annually. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Before exposure
The World Health Organization (WHO) recommends vaccinating those who are at high risk of the disease, such as children who live in areas where it is common. Other groups may include veterinarians, researchers, or people planning to travel to regions where rabies is common. Three doses of the vaccine are given over a one-month period on days zero, seven, and either twenty-one or twenty-eight.
After exposure
For individuals who have been potentially exposed to the virus, four doses over two weeks are recommended, as well as an injection of rabies immunoglobulin with the first dose. This is known as post-exposure vaccination. For people who have previously been vaccinated, only a single dose of the rabies vaccine is required. However, vaccination after exposure is neither a treatment nor a cure for rabies; it can only prevent the development of rabies in a person if given before the virus reaches the brain. Because the rabies virus has a relatively long incubation period, post-exposure vaccinations are typically highly effective.
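A minimal sketch of these schedules as data follows. The day offsets for the unvaccinated post-exposure course are an assumed 0-3-7-14 spacing, since the text only specifies "four doses over two weeks"; guidelines vary:

```python
# Dosing schedules described above, as day offsets from the first dose.

SCHEDULES = {
    "pre_exposure": [0, 7, 21],                   # day 28 is the stated alternative
    "post_exposure_unvaccinated": [0, 3, 7, 14],  # assumed spacing; immunoglobulin at day 0
    "post_exposure_vaccinated": [0],              # single dose per the text above
}

def dose_days(schedule: str, start_day: int = 0) -> list[int]:
    """Absolute day numbers for each dose of the named schedule."""
    return [start_day + offset for offset in SCHEDULES[schedule]]

print(dose_days("post_exposure_unvaccinated"))  # [0, 3, 7, 14]
```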
Additional doses
Immunity following a course of doses is typically long lasting, and additional doses are usually not needed unless the person has a high risk of contracting the virus. Those at risk may have tests done to measure the amount of rabies antibodies in the blood, and then get rabies boosters as needed. Following administration of a booster dose, one study found 97% of immunocompetent individuals demonstrated protective levels of neutralizing antibodies after ten years.
Safety
Rabies vaccines are safe in all age groups. About 35 to 45 percent of people develop a brief period of redness and pain at the injection site, and 5 to 15 percent of people may experience fever, headaches, or nausea. Because of the certain fatality of the virus, receiving the vaccine is always advisable.
Vaccines made from nerve tissue are used in a few countries, mainly in Asia and Latin America, but are less effective and have greater side effects. Their use is thus not recommended by the World Health Organization.
Types
The human diploid cell rabies vaccine (HDCV) was started in 1967. Human diploid cell rabies vaccines are inactivated vaccines made using the attenuated Pitman-Moore L503 strain of the virus.
In addition to these developments, newer and less expensive purified chicken embryo cell vaccines (CCEEV) and purified Vero cell rabies vaccines are now available and are recommended for use by the WHO. The purified Vero cell rabies vaccine uses the attenuated Wistar strain of the rabies virus, and uses the Vero cell line as its host. CCEEVs can be used in both pre- and post-exposure vaccinations. CCEEVs use inactivated rabies virus grown from either embryonated eggs or in cell cultures and are safe for use in humans and animals.
The vaccine was attenuated and prepared in the H.D.C. strain WI-38 which was gifted to Hilary Koprowski at the Wistar Institute by Leonard Hayflick, an Associate Member, who developed this normal human diploid cell strain.
Verorab, developed by Sanofi-Aventis, and Speeda, developed by Liaoning Chengda, are purified Vero cell rabies vaccines (PVRV). The former is approved by the World Health Organization. Verorab is approved for medical use in Australia and the European Union and is indicated for both pre-exposure and post-exposure prophylaxis against rabies.
History
Virtually all infections with rabies resulted in death until two French scientists, Louis Pasteur and Émile Roux, developed the first rabies vaccination in 1885. Nine-year-old Joseph Meister (1876–1940), who had been mauled by a rabid dog, was the first human to receive this vaccine. The treatment started with a subcutaneous injection on 6 July 1885, at 8:00pm, which was followed with 12 additional doses administered over the following 10 days. The first injection was derived from the spinal cord of an inoculated rabbit which had died of rabies 15 days earlier. All the doses were obtained by attenuation, but later ones were progressively more virulent.
The Pasteur-Roux vaccine attenuated the harvested virus samples by allowing them to dry for five to ten days. Similar nerve tissue-derived vaccines are still used in some countries, and while they are much cheaper than modern cell culture vaccines, they are not as effective. Neural tissue vaccines also carry a certain risk of neurological complications.
Society and culture
Economics
When the modern cell-culture rabies vaccine was first introduced in the early 1980s, it cost $45 per dose, and was considered to be too expensive. The cost of the rabies vaccine continues to be a limitation to acquiring pre-exposure rabies immunization for travelers from developed countries. In 2015, in the United States, a course of three doses could cost over , while in Europe a course costs around . It is possible and more cost-effective to split one intramuscular dose of the vaccine into several intradermal doses. This method is recommended by the World Health Organization (WHO) in areas that are constrained by cost or with supply issues. The route is as safe and effective as intramuscular according to the WHO.
Veterinary use
Pre-exposure immunization has been used on domesticated and wild populations. In many jurisdictions, domestic dogs, cats, ferrets, and rabbits are required to be vaccinated.
There are two main types of vaccines used for domesticated animals and pets (including pets from wildlife species):
Inactivated rabies virus (similar technology to that given to humans) administered by injection
Modified live viruses administered orally (by mouth): Live rabies virus from attenuated strains. Attenuated means strains that have developed mutations that cause them to be weaker and do not cause disease.
Imrab is an example of a veterinary rabies vaccine containing the Pasteur strain of killed rabies virus. Several different types of Imrab exist, including Imrab, Imrab 3, and Imrab Large Animal. Imrab 3 has been approved for ferrets and, in some areas, pet skunks.
Dogs
Aside from vaccinating humans, another approach to preventing the spread of the virus is vaccinating dogs. In 1979, the Van Houweling Research Laboratory of the Silliman University Medical Center in Dumaguete in the Philippines developed and produced a dog vaccine that gave three years of immunity from rabies. The development of the vaccine resulted in the elimination of rabies in many parts of the Visayas and Mindanao Islands. The successful program in the Philippines was later used as a model by other countries, such as Ecuador and the Mexican state of Yucatán, in their fight against rabies conducted in collaboration with the World Health Organization.
In Tunisia, a rabies control program was initiated to give dog owners free vaccination to promote mass vaccination, sponsored by the government. The vaccine used countrywide is Rabisin (Mérial), a cell-based rabies vaccine. Vaccinations are often administered when owners take in their dogs for check-ups and visits at the vet.
Oral rabies vaccines (see below for details) have been trialled on feral/stray dogs in some areas with high rabies incidence, as it could potentially be more efficient than catching and injecting them. However these have not been deployed for dogs at large scale yet.
Wild animals
Wildlife species, primarily bats, raccoons, skunks, and foxes, act as reservoir species for different variants of the rabies virus in distinct geographic regions of the United States. This results in the general occurrence of rabies as well as outbreaks in animal populations. Approximately 90% of all reported rabies cases in the US are from wildlife.
Oral rabies vaccine
Oral rabies vaccines are distributed across the landscape, targeting reservoir species, in an effort to produce a herd immunity effect. The idea of wildlife vaccination was conceived during the 1960s, and modified-live rabies viruses were used for the experimental oral vaccination of carnivores by the 1970s. Development of an oral immunization for wildlife began in the United States with laboratory trials using the live, attenuated Evelyn-Rokitnicki-Abselseth (ERA) vaccine, derived from the Street Alabama Dufferin (SAD) strain. The first ORV field trial using the live attenuated vaccine to immunize foxes occurred in Switzerland during 1978.
There are currently three different types of oral wildlife rabies vaccine in use:
Modified live virus: Attenuated vaccine strains of rabies virus such as SAG2 and SAD B19
Recombinant vaccinia virus expressing rabies glycoprotein (V-RG): This is a strain of the vaccinia virus (originally a smallpox vaccine) that has been engineered to encode the gene for the rabies glycoprotein.
V-RG has been proven safe in over 60 animal species including cats and dogs.
ONRAB: an experimental live recombinant adenovirus vaccine
Other oral rabies experimental vaccines in development include recombinant adenovirus vaccines.
Oral rabies vaccination (ORV) programs have been used in many countries in an effort to control the spread of rabies and limit the risk of human contact with the rabies virus. ORV programs were initiated in Europe in the 1980s, Canada in 1985, and in the United States in 1990. ORV is a preventive measure to eliminate rabies in wild animal vectors of disease, mainly foxes, raccoons, raccoon dogs, coyotes and jackals, but can also be used for dogs in developing countries. ORV programs typically use attractive baits to deliver the vaccine to targeted animals. In the United States, RABORAL V-RG (Boehringer Ingelheim, Duluth, GA, USA) has been the only licensed ORV for rabies virus management since 1997. However, ONRAB "Ultralite" (Artemis Technologies Inc., Guelph, Ontario, Canada) baits have been distributed by the United States Department of Agriculture (USDA) in select areas of the eastern United States under an experimental permit to target raccoons since 2011. RABORAL V-RG baits consist of a small packet containing the oral vaccine which is then either coated in a fishmeal paste or encased in a fishmeal-polymer block. ONRAB "Ultralite" baits consist of a blister pack with a coating matrix of vanilla flavor, green food coloring, vegetable oil and hydrogenated vegetable fat. When an animal bites into the bait, the packets burst and the vaccine is administered. Current research suggests that if adequate amounts of the vaccine are ingested, immunity to the virus should last for upwards of one year. By immunizing wild or stray animals, ORV programs work to create a buffer zone between the rabies virus and potential contact with humans, pets, or livestock. Landscape features such as large bodies of water and mountains are often used to enhance the effectiveness of the buffer. The effectiveness of ORV campaigns in specific areas is determined through trap-and-release methods. Titer tests are performed on blood drawn from the sampled animals in order to measure rabies antibody levels. Baits are usually distributed by aircraft to more efficiently cover large, rural regions. In order to place baits more precisely and to minimize human and pet contact with baits, they are distributed by hand in suburban or urban regions. The standard bait distribution density is 75 baits/km2 in rural areas and 150 baits/km2 in urban and developed areas.
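A quick worked example using the stated densities; the zone areas chosen are hypothetical:

```python
# Bait counts from the stated distribution densities
# (75 baits/km^2 rural, 150 baits/km^2 urban and developed areas).

DENSITY_PER_KM2 = {"rural": 75, "urban": 150}

def baits_needed(area_km2: float, setting: str) -> int:
    """Number of ORV baits for a zone of the given area and setting."""
    return round(area_km2 * DENSITY_PER_KM2[setting])

# Hypothetical buffer zone: 40 km^2 of rural land around a 5 km^2 town.
print(baits_needed(40, "rural") + baits_needed(5, "urban"))  # 3750
```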
Implementation of ORV programs in the United States has led to the elimination of the coyote rabies virus variant in 2003 and gray fox variant during 2013. Furthermore, ORV has been successful in preventing the westward expansion of the raccoon rabies enzootic front beyond Alabama.
References
External links
Animal vaccines
French inventions
Inactivated vaccines
Rabies
Vaccines
World Health Organization essential medicines (vaccines)
"Biology"
] | 2,888 | [
"Vaccination",
"Vaccines"
] |
16,179,920 | https://en.wikipedia.org/wiki/App%20Store%20%28Apple%29 | The App Store is an app marketplace developed and maintained by Apple, for mobile apps on its iOS and iPadOS operating systems. The store allows users to browse and download approved apps developed within Apple's iOS SDK. Apps can be downloaded on the iPhone, iPod Touch, or iPad, and some can be transferred to the Apple Watch smartwatch or 4th-generation or newer Apple TVs as extensions of iPhone apps.
The App Store opened on July 10, 2008, with an initial 500 applications available. The number of apps peaked at around 2.2 million in 2017, but declined slightly over the next few years as Apple began a process to remove old or 32-bit apps. The store features more than 1.8 million apps.
While Apple touts the role of the App Store in creating new jobs in the "app economy" and claims to have paid over $155 billion to developers, the App Store has also attracted criticism from developers and government regulators that it operates a monopoly and that Apple's 30% cut of revenues from the store is excessive. In October 2021, the Netherlands Authority for Consumers and Markets (ACM) concluded that in-app commissions from Apple's App Store are anti-competitive and demanded that Apple change its in-app payment system policies.
History
While originally developing the iPhone prior to its unveiling in 2007, Apple's then-CEO Steve Jobs did not intend to let third-party developers build native apps for iOS, instead directing them to make web applications for the Safari web browser. However, backlash from developers prompted the company to reconsider, with Jobs announcing in October 2007 that Apple would have a software development kit available for developers by February 2008. The SDK was released on March 6, 2008.
The iPhone App Store opened on July 10, 2008. On July 11, the iPhone 3G was released and came pre-loaded with support for the App Store. Initially, apps could be free or paid; in 2009, Apple added support for in-app purchases, which quickly became the dominant way to monetize apps, especially games.
After the success of Apple's App Store and the launch of similar services by its competitors, the term "app store" has been adopted to refer to any similar service for mobile devices. However, Apple applied for a U.S. trademark on the term "App Store" in 2008, which was tentatively approved in early 2011. In June 2011, U.S. District Judge Phyllis Hamilton, who was presiding over Apple's case against Amazon, said she would "probably" deny Apple's motion to stop Amazon from using the "App Store" name. In July, Apple was denied a preliminary injunction against Amazon's Appstore by a federal judge.
The term app has become a popular buzzword; in January 2011, app was awarded the honor of being 2010's "Word of the Year" by the American Dialect Society. "App" has been used as shorthand for "application" since at least the late 1970s, and in product names since at least 2006, for example then-named Google Apps.
Apple announced the Mac App Store, a similar app distribution platform for its macOS personal computer operating system, in October 2010, with the official launch taking place in January 2011 with the release of the Mac OS X 10.6.6 "Snow Leopard" update.
In February 2013, Apple informed developers that they could begin using appstore.com for links to their apps. In June at its developer conference, Apple announced an upcoming "Kids" section in App Store, a new section featuring apps categorized by age range, and the section was launched alongside the release of iOS 7 in September 2013.
In 2016, multiple media outlets reported that apps had decreased significantly in popularity. Recode wrote that "The app boom is over", an editorial in TechCrunch stated that "The air of hopelessness that surrounds the mobile app ecosystem is obvious and demoralizing", and The Verge wrote that "the original App Store model of selling apps for a buck or two looks antiquated". Issues included consumer "boredom", a lack of app discoverability, and, as stated by a report from 2014, a lack of new app downloads among smartphone users.
In October 2016, in an effort to improve app discoverability, Apple rolled out the ability for developers to purchase advertising spots in the App Store to users in the United States. The ads, shown at the top of the search results, are based strictly on relevant keywords, and are not used to create profiles on users. Apple expanded search ads to the United Kingdom, Australia and New Zealand in April 2017, along with more configurable advertising settings for developers. Search ads were expanded to Canada, Mexico and Switzerland in October 2017. In December 2017, Apple revamped its search ads program to offer two distinct versions: "Search Ads Basic" is a pay-per-install program aimed at smaller developers, in which they only pay when users actually install their app. Search Ads Basic also features an easier setup process and a restricted monthly budget. "Search Ads Advanced" is a new name given to the older method, in which developers have to pay whenever users tap on their apps in search results, along with unlimited monthly budgets.
In January 2017, reports surfaced that documentation for a new beta for the then-upcoming release of iOS 10.3 detailed that Apple would let developers respond to customer reviews in the App Store, marking a significant change from the previous limitation, which prevented developers from communicating with users. The functionality was officially enabled on March 27, 2017 when iOS 10.3 was released to users.
Apple also offered an iTunes Affiliate Program, which lets people refer others to apps and other iTunes content, along with in-app purchases, for a percentage of sales. The commission rate for in-app purchases was reduced from 7% to 2.5% in May 2017 and discontinued completely in 2018.
In September 2017, App Store received a major design overhaul with the release of iOS 11. The new design features a greater focus on editorial content and daily highlights, and introduces a "cleaner and more consistent and colorful look" similar to several of Apple's built-in iOS apps.
Prior to September 2017, Apple offered a way for users to manage their iOS app purchases through the iTunes computer software. In September, version 12.7 of iTunes was released, removing the App Store section in the process. However, the following month, iTunes 12.6.3 was also released, retaining the App Store, with 9to5Mac noting that the secondary release was positioned by Apple as "necessary for some businesses performing internal app deployments".
In December 2017, Apple announced that developers could offer applications for pre-order, letting them make apps visible in the store between 2–90 days ahead of release.
On January 4, 2018, Apple announced that the App Store had a record-breaking holiday season according to a new press release. During the week starting on Christmas Eve, a record number of customers made App Store purchases, spending more than $890 million in that seven-day period. On New Year's Day 2018 alone, customers made $300 million in purchases.
In September 2019, Apple launched Apple Arcade, a subscription service for video games within the App Store.
In March 2020, Apple made "Sign in with Apple" mandatory for any apps that use third-party logins (such as signing in with a Google account). As part of the new App Store guidelines, the deadline for developers to implement the feature was April 30.
In 2019 and 2020, Apple was frequently criticized by other companies such as Spotify, Airbnb and Hey, and by regulators, for potentially running the App Store as a monopoly and overcharging developers, and was the target of lawsuits and investigations in the EU and United States. A conflict between Epic Games, the creator of the Fortnite game, and Apple led to the lawsuit Epic Games v. Apple. In December 2020, Apple announced that it would introduce a "Small Business Program" which lowers Apple's revenue cut for app developers making less than USD 1 million per year from 30% to 15%. Additionally, governments such as those in China, India and Russia have increasingly required Apple to remove specific apps, with the threatened removal of some apps often becoming part of geopolitical feuds.
In January 2022, Apple added support for unlisted apps to the App Store. These apps can only be downloaded via direct links, and do not appear as search results.
Later in December 2022, a report by Bloomberg noted that the company had begun making preparations for opening up sideloading and alternative app stores on iOS, in compliance with the EU's Digital Markets Act that had passed in September of that year. The same report also noted Apple planned to open up the NFC and camera systems on iOS and the Find My network to AirTag competitors like Tile.
Following a European Commission antitrust investigation, on January 25, 2024, Apple allowed game streaming apps and services, such as Xbox Cloud Streaming and GeForce Now, on the App Store. Apple also allowed iPhone users in the European Union to use third-party app stores and browser engines.
Development and monetization
iOS SDK
The iOS SDK (Software Development Kit) allows for the development of mobile apps on iOS. It is a free download for users of Mac personal computers. It is not available for Microsoft Windows PCs. The SDK contains tool sets giving developers access to various functions and services of iOS devices, such as hardware and software attributes. It also contains an iPhone simulator to mimic the look and feel of the device on the computer while developing. New versions of the SDK accompany new versions of iOS. In order to test applications, get technical support, and distribute apps through the App Store, developers are required to subscribe to the Apple Developer Program.
Combined with Xcode, the iOS SDK helps developers write iOS apps using officially supported programming languages, including Swift and Objective-C. Other companies have also created tools that allow for the development of native iOS apps using their respective programming languages.
Monetization
To publish apps on App Store, developers must pay a $99 yearly fee for access to Apple's Developer Program. Apple announced that, in the United States starting in 2018, it would waive the fee for nonprofit organizations and governments. Fee waivers have since been extended to non-profits, educational organizations and governments in additional countries.
Developers have a few options for monetizing their applications. The "Free Model" enables free apps, increasing the likelihood of engagement. The "Freemium Model" makes the app download free, but users are offered optional additional features in-app that require payments. The "Subscription Model" enables ongoing monetization through renewable transactions. The "Paid Model" makes the app itself a paid download and offers no additional features. Less frequently, the "Paymium Model" has both a paid app download and paid in-app content.
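The five models reduce to two attributes: whether the download itself costs money and whether content inside the app does. A minimal, illustrative sketch in Python (the boolean attributes are our own simplification, not App Store terminology):

# Illustrative classification of the monetization models described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class MonetizationModel:
    name: str
    paid_download: bool        # does the download itself cost money?
    paid_in_app_content: bool  # is there paid content inside the app?

MODELS = [
    MonetizationModel("Free", False, False),
    MonetizationModel("Freemium", False, True),
    MonetizationModel("Subscription", False, True),  # recurring in-app payments
    MonetizationModel("Paid", True, False),
    MonetizationModel("Paymium", True, True),
]

for model in MODELS:
    print(model)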
In-app subscriptions were originally introduced for magazines, newspapers and music apps in February 2011, giving developers 70% of revenue earned and Apple 30%. Publishers could also sell digital subscriptions through their website, bypassing Apple's fees, but were not allowed to advertise their website alternative through the apps themselves.
In an interview with The Verge in June 2016, Phil Schiller, Apple's senior vice president of Worldwide Marketing, said that Apple had a "renewed focus and energy" on the App Store, and announced multiple significant changes, including advertisements in search results and a new app subscription model. The subscription model saw the firmly established 70/30 revenue split between developers and Apple change into a new 85/15 revenue split if a user stays subscribed to the developer's app for a year, and opened the possibility of subscriptions to all apps, not just select categories.
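In concrete terms, the new split means a subscriber's payments yield 70% to the developer during the subscription's first year and 85% thereafter. A minimal sketch of the arithmetic in Python (the function and price are hypothetical; the rates are the ones stated above):

# Developer proceeds under the subscription revenue split described above:
# Apple keeps 30% during a subscriber's first year and 15% after one year.
def developer_proceeds(monthly_price: float, months_subscribed: int) -> float:
    total = 0.0
    for month in range(months_subscribed):
        commission = 0.30 if month < 12 else 0.15
        total += monthly_price * (1 - commission)
    return round(total, 2)

# Hypothetical example: a $4.99/month subscription kept for two years
print(developer_proceeds(4.99, 24))  # 12*4.99*0.70 + 12*4.99*0.85 = 92.81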
App data and insights analyst company App Annie released a report in October 2016, announcing that China had overtaken the United States as Apple's biggest market in App Store revenue. In the third quarter of 2016, Chinese users spent $1.7 billion vs. approximately $1.5 billion by American users.
In June 2017, Apple announced that App Store had generated over $70 billion in revenue for developers since its 2008 launch. By 2020, this had increased to $155 billion.
tvOS apps
The App Store is also available on tvOS, the operating system for the Apple TV. It was announced on September 9, 2015, at the Apple September 2015 event, alongside the 4th generation Apple TV.
tvOS ships with development tools for developers. tvOS adds support for an SDK for developers to build apps for the TV, including all of the APIs included in iOS 9 such as Metal. It also adds an App Store which allows users to browse, download, and install a wide variety of applications. In addition, developers can now use their own interface inside of their application rather than only being able to use Apple's interface. Since tvOS is based on iOS, it is easy to port existing iOS apps to the Apple TV with Xcode while making only a few refinements to the app to better suit the larger screen. Apple provides Xcode free of charge to all registered Apple developers. To develop for the new Apple TV, it is necessary to make a parallax image for the application icon. In order to do this, Apple provides a parallax exporter and previewer in the development tools for the Apple TV.
Number of iOS applications
On July 10, 2008, Apple's then-CEO Steve Jobs told USA Today that App Store contained 500 third-party applications for the iPhone and the iPod Touch, and of these 125 were free. Ten million downloads were recorded in the first weekend. By September, the number of available apps had increased to 3,000, with over 100 million downloads.
Over the years, the store has surpassed multiple major milestones, including 50,000, 100,000, 250,000, 500,000, 1 million, and 2 million apps. The billionth application was downloaded on April 24, 2009.
The number of apps on the app store shrank for the first time in 2017 as Apple began to remove older apps which did not comply with current app guidelines and technologies. As of 2020, it was estimated to house around 1.8 million apps.
Number of iPad applications
The iPad was released in April 2010, with approximately 3,000 apps available. By July 2011, 16 months after the release, there were over 100,000 apps available designed specifically for the device.
Most downloaded apps
Yearly
Apple publishes a list on a yearly basis, giving credit to the apps with the highest number of downloads in the past year.
Of all time
These are the most downloaded iOS applications and the highest revenue-generating iOS applications of all time from 2010 to 2018.
Application ratings
Apple rates applications worldwide based on their content, and determines the age group for which each is appropriate. According to the iPhone OS 3.0 launch event, the iPhone would allow blocking of objectionable apps in the iPhone's settings.
App approval process
Applications are subject to approval by Apple, as outlined in the SDK agreement, for basic reliability testing and other analysis. Applications may still be distributed "ad hoc" if they are rejected, by the author manually submitting a request to Apple to license the application to individual iPhones, although Apple may withdraw the ability for authors to do this at a later date.
Apple has employed mostly static analysis for its app review process, which means that dynamic code reassembly techniques could defeat the review process.
In June 2017, Apple updated its App Store review guidelines to specify that app developers will no longer have the ability to use custom prompts for encouraging users to leave reviews for their apps. With the release of iOS 11 in late 2017, Apple also let developers choose whether to keep current app reviews when updating their apps or to reset. Additionally, another update to App Store policies allows users to optionally "tip" content creators, by voluntarily sending them money.
Privacy
A privacy experiment conducted in 2019 by the Washington Post determined that third-party apps transmitted a host of personal data without the user's knowledge or consent, including phone number, email, exact location, device model and IP address, to "a dozen marketing companies, research firms and other personal data guzzlers" via 5,400 hidden app trackers. Some of the information shared with third parties was found to be in violation of the apps' own privacy regulations. Apple responded to the controversy by introducing "privacy nutrition labels" on the App Store, forcing all apps to disclose their data use.
Controversial apps and removals
In November 2012, Boyfriend Maker, a dating sim game, was removed due to "reports of references to violent sexual acts and paedophilia" deemed inappropriate for Boyfriend Maker's age rating of 4+. A revised version called Boyfriend Plus was approved by Apple in April 2013.
In March 2013, HiddenApps was approved and appeared in App Store. The app provided access to developer diagnostic menus, allowed for stock apps to be hidden, and enabled an opt-out feature for iAds, Apple's developer-driven advertisement system. The app was removed shortly afterwards for violating guidelines.
In April 2013, Apple removed AppGratis, a then-successful app store market that promoted paid apps by offering one for free each day. Apple told All Things Digital that the app violated two of its developer agreement clauses, including "Apps that display Apps other than your own for purchase or promotion in a manner similar to or confusing with the App Store will be rejected" and "Apps cannot use Push Notifications to send advertising, promotions, or direct marketing of any kind". Apple did, however, tell the developers they were "welcome to resubmit" after changing the app, though there was "not much hope that it could survive in anything like its current incarnation".
In November 2014, Apple removed the marijuana social networking app MassRoots, with the reason given that it "encourage[d] excessive consumption of alcohol or illegal substances." In February 2015, MassRoots was reintroduced into the store after Apple changed its enforcement guidelines to allow cannabis social apps in the 23 states where it is legal.
In September 2015, it was discovered that "hundreds" of apps submitted and approved on App Store were using XcodeGhost, a malicious version of the Xcode development software. The issues prompted Apple to remove infected apps from the store and issue a statement that it was "working with the developers to make sure they're using the proper version of Xcode". A security firm later published lists of infected apps, including a China-only version of Angry Birds 2, CamCard, Lifesmart, TinyDeal.com, and WeChat. In the aftermath, Apple stated that it would make Xcode faster to download in certain regions outside the United States, and contacted all developers to ensure they only download the code from the Mac App Store or Apple's website, and provided a code signature for developers to test if they are running a tampered version of Xcode.
In June 2017, a scamming trend was discovered on the store, in which developers make apps built on non-existent services, attach in-app purchase subscriptions to the opening dialogue, then buy App Store search advertising space to get the app into the higher rankings. In one instance, an app by the name of "Mobile protection :Clean & Security VPN" would require payments of $99.99 for a seven-day subscription after a short trial. Apple has not yet responded to the issues.
In addition, Apple has removed software licensed under the GNU General Public License (GPL) from App Store, due to text in Apple's Terms of Service agreement imposing digital rights management and proprietary legal terms incompatible with the terms of the GPL.
Large-scale app removals
On September 1, 2016, Apple announced that starting September 7, it would be removing old apps that do not function as intended or that do not follow current review guidelines. Developers would be warned and given 30 days to update their apps, but apps that crashed on startup would be removed immediately. Additionally, the app names registered by developers could no longer exceed 50 characters, in an attempt to stop developers from inserting long descriptions or irrelevant terms in app names to improve the app's ranking in App Store search results. App intelligence firm Sensor Tower revealed in November 2016 that Apple, as promised in its September announcement of removing old apps, had removed 47,300 apps from the App Store in October 2016, a 238 percent increase over its prior average number of monthly app removals.
In June 2017, TechCrunch reported that Apple had turned its app removal focus on apps copying functionality from other, popular apps. An example cited included "if a popular game like Flappy Bird or Red Ball hits the charts, there will be hundreds or thousands of clones within weeks that attempt to capitalize on the initial wave of popularity". The report also noted removals of music apps serving pirated tracks. The publication wrote that, since the initial September app removals began, Apple had removed "multiple hundreds of thousands" of apps.
In December 2017, a new report from TechCrunch stated that Apple had begun enforcing new restrictions on the use of "commercialized template or app generation services". Originally introduced as part of Apple's 2017 developer conference, new App Store guidelines allow the company to ban apps making use of templates or commercial app services. This affected many small businesses, with TechCrunch's report citing that "local retailers, restaurants, small fitness studios, nonprofits, churches and other organizations" benefit from using templates or app services due to minimal costs. Developers had received notice from Apple with a January 1, 2018 deadline to change their respective apps. The news caught the attention of Congress, with Congressman Ted Lieu writing a letter to Apple at the beginning of December, asking it to reconsider, writing that "It is my understanding that many small businesses, research organizations, and religious institutions rely on template apps when they do not possess the resources to develop apps in-house", and that the new rules cast "too wide a net", specifically "invalidating apps from longstanding and legitimate developers who pose no threat to the App Store's integrity". Additionally, the news of stricter enforcement caused significant criticism from app development firms; one company told TechCrunch that it chose to close down its business following the news, saying that "The 4.2.6 [rule enforcement] was just a final drop that made us move on a bit faster with that decision [to close]", and another company told the publication that "There was no way in June [when the guidelines changed] that we would have said, ‘that's going to target our apps' ... Apple had told us you aren't being targeted by this from a quality standpoint. So being hit now under the umbrella of spam is shocking to every quality developer out there and all the good actors". Furthermore, the latter company stated that "there's only so much you can do with apps that perform the same utility – ordering food". A third company said that "Rule 4.2.6 is a concrete illustration of the danger of Apple's dominant position", and a fourth said that "They’ve wiped out pretty much an entire industry. Not just DIY tools, but also development suites like Titanium". Towards the end of the year, Apple updated the guideline to clarify that companies and organizations are allowed to use template apps, but only as long as they directly publish their app on their own; it remained a violation of the rule for commercial app services to publish apps for the respective clients.
Censorship by governments
China
In January 2017, Apple complied with a request from the Chinese government to remove the Chinese version of The New York Times app. This followed the government's efforts in 2012 to block the Times website after stories of hidden wealth among family members of then-leader of China, Wen Jiabao, were published. In a statement, an Apple spokesperson told the media that "we have been informed that the app is in violation of local regulations", though would not specify which regulations, and added that "As a result the app must be taken down off the China app store. When this situation changes the app store will once again offer the New York Times app for download in China". The following July, it was reported that Apple had begun to remove listings in China for apps that circumvent government Internet censorship policies and new laws restricting virtual private network (VPN) services. Apple issued a statement, explaining that the app removals were a result of developers not complying with new laws in China requiring a government license for businesses offering VPNs, and that "These apps remain available in all other markets where they do business". In an earnings call the following month, Cook elaborated on the recent news, explaining that "We would obviously rather not remove the apps, but like we do in other countries, we follow the law wherever we do business". Besides VPN services, a number of Internet calling apps, including Microsoft's Skype, were also removed from the Chinese App Store in 2017, with Apple telling The New York Times that, similar to the VPN apps, these new apps also violated local law. Microsoft explained to BBC News that its Skype app had been "temporarily removed" and that it was "working to reinstate the app as soon as possible", though many news outlets reported on the Chinese government's increased efforts and pressure to crack down on Internet freedom.
Following Apple CEO Tim Cook's appearance at China's World Internet Conference in December 2017, in which Cook stated that Apple and China share a vision of "developing a digital economy for openness and shared benefits", free speech and human rights activists criticized Cook and the company. Maya Wang at Human Rights Watch told The Washington Post that "Cook's appearance lends credibility to a state that aggressively censors the internet, throws people in jail for being critical about social ills, and is building artificial intelligence systems that monitors everyone and targets dissent. ... The version of cyberspace the Chinese government is building is a decidedly dystopian one, and I don't think anyone would want to share in this ‘common future.’ Apple should have spoken out against it, not endorsed it." U.S. Senator Patrick Leahy told CNBC that "American tech companies have become leading champions of free expression. But that commitment should not end at our borders. ... Global leaders in innovation, like Apple, have both an opportunity and a moral obligation to promote free expression and other basic human rights in countries that routinely deny these rights."
Cook told Reuters that "My hope over time is that some of the things, the couple of things that's been pulled, come back. I have great hope on that and great optimism on that". However, TechCrunch's Jon Russell criticized this line of thinking, writing that "Firstly, Apple didn't just remove a 'couple of things' from the reach of China-based users", but rather "a couple of hundred" apps, acknowledging that "even that is under counting". Furthermore, Russell listed censorship efforts by the Chinese government, including VPN bans and restrictions on live video and messaging apps, and wrote that "Apple had little choice but to follow Beijing's line in order to continue to do business in the lucrative Chinese market, but statements like Cook's today are dangerous because they massively underplay the severity of the situation". Florida Senator Marco Rubio also criticized Cook's appearance at the World Internet Conference, describing the situation as "here's an example of a company, in my view, so desperate to have access to the Chinese market place that they are willing to follow the laws of that country even if those laws run counter to what those companies’ own standards are supposed to be". In August 2018, as a result of Chinese regulations, 25,000 illegal apps were pulled down by Apple from the App Store in China.
In October 2019, Apple rejected, approved, and finally removed an app used by participants in the 2019–20 Hong Kong protests.
Apple began removing thousands of video game apps from its platform in China during December 2020 in accordance with regulations regarding licensing enacted by the country's Cyberspace Administration, in many cases without explicitly stating the offences grounding their removal. Apple released a memo that month telling developers that premium games and apps with in-app purchases had until December 31 to submit proof of a government license. Research from the Campaign for Accountability notes there are more than 3,000 apps not appearing in China which are available in other countries, a third of which the advocacy group claims to have been removed for advocating for various human rights issues, including LGBTQ+ rights and the Hong Kong protests. A director of the aforementioned campaign, Katie Paul, criticised Apple's removals, stating "if it's going to bend to political pressure, the company should explain why and what they would lose if they didn't do that." CEO Tim Cook has previously defended such company actions, stating in a memo to employees in 2019 that "national and international debates will outlive us all, and while important, they do not govern the facts."
In August 2023, at the request of the Chinese government, Apple took down more than 100 AI-related apps similar to ChatGPT in the Chinese app store.
According to regulations of the Chinese government, new apps on the China App Store from September 2023 must be licensed by the Chinese government. Older apps must obtain a license before March 2024.
Russia
Apple removed the Smart Voting app from the App Store before the 2021 Russian legislative election. The application, which had been created by associates of imprisoned opposition leader Alexei Navalny, offered voting advice for all voting districts in Russia. It was removed after a meeting with Russian Federation Council officials on September 16, 2021. Apple also reportedly disabled its iCloud Private Relay privacy feature which masks users' browsing activity. Russian opposition figures condemned these moves as political censorship.
In 2024, Russian regulator Roskomnadzor asked Apple to take down 25 VPN apps from the Russian App Store, but Apple quietly took down more.
Removal of vaping apps
In November 2019, Apple removed all applications related to vaping from the App Store, citing warnings from health experts. Apple made this decision to reduce the promotion of e-cigarette use.
Antitrust allegations
Apple has faced criticism, lawsuits and government investigations alleging that its control over the distribution of iOS and iPadOS apps through the App Store constituted monopolistic practices.
Epic Games
As early as 2017, Tim Sweeney questioned the need for digital storefronts like Valve's Steam, Apple's iOS App Store, and Google Play to take a 30% revenue-sharing cut, and argued that, when accounting for current rates of content distribution and other necessary costs, a revenue cut of 8% should be sufficient to run any digital storefront profitably.
On August 13, 2020, Epic Games updated Fortnite across all platforms, including the iOS version, to reduce the price of "V-Bucks" (the in-game currency) by 20% if purchased directly from Epic. iOS users who purchased through the Apple storefront were not given this discount, as Epic said it could not extend the discount due to the 30% revenue cut taken by Apple. Within hours, Apple had removed Fortnite from its storefronts, stating that the means of bypassing its payment systems violated the terms of service. Epic immediately filed separate lawsuits against Apple and Google for antitrust and anticompetitive behavior in the United States District Court for the Northern District of California. Epic did not seek monetary damages in either case but instead was "seeking injunctive relief to allow fair competition in these two key markets that directly affect hundreds of millions of consumers and tens of thousands, if not more, of third-party app developers." In comments on social media the next day, Sweeney said that they undertook the actions as "we're fighting for the freedom of people who bought smartphones to install apps from sources of their choosing, the freedom for creators of apps to distribute them as they choose, and the freedom of both groups to do business directly. The primary opposing argument is: 'Smartphone makers can do whatever they want.' This is an awful notion. We all have rights, and we need to fight to defend our rights against whoever would deny them."
Apple responded to the lawsuit that it would terminate Epic's developer accounts by August 28, 2020, leading Epic to file a motion for a preliminary injunction to force Apple to return Fortnite to the App Store and prevent them from terminating Epic's developer accounts, as the latter action would leave Epic unable to update the Unreal Engine for any changes to iOS or macOS and leave developers that relied on Unreal at risk. The court granted the preliminary injunction against Apple from terminating the developer accounts as Epic had shown "potential significant damage to both the Unreal Engine platform itself, and to the gaming industry generally", but refused to grant the injunction related to Fortnite as "The current predicament appears of [Epic's] own making."
See also
List of free and open-source iOS applications
List of iOS games
List of mobile app distribution platforms
References
External links
Apple Developer program
Apple Inc. services
Mobile software distribution platforms
IOS software
ITunes
Computer-related introductions in 2008
American inventions
German inventions
Swiss inventions
TvOS software
sv:Itunes Store#App Store | App Store (Apple) | [
"Technology"
] | 6,881 | [
"Mobile content",
"Mobile software distribution platforms"
] |
16,180,439 | https://en.wikipedia.org/wiki/N-Gage%20%28service%29 | N-Gage, also referred to as N-Gage 2.0, was a mobile gaming digital distribution platform from Nokia that was available for several Nokia smartphones running on S60 (Symbian). The successor to the original N-Gage gaming device, launched as part of Nokia's Ovi initiative in 2007, it aimed to offer AAA games for trial and purchase through a single application with full compatibility across all supported devices, along with online multiplayer and social features using N-Gage Arena via in-house servers. Games on the platform were natively coded or ported using C++, although N-Gage used APIs from its own SDK separate from Symbian's. Testing began in Finland in February 2007, but the service faced numerous delays before it finally rolled out on April 3, 2008 with five launch titles, initially for Nokia N81, N82 and N95 owners.
Less than two years after its full launch, on October 30, 2009, Nokia announced that no new N-Gage games would be produced. A total of 49 games were released for it. Nokia moved its games onto its Ovi Store thereafter. N-Gage games can still be played on compatible devices, but support for the online features ceased in September 2010. There have been various opinions on why N-Gage 2.0 failed.
Development
Nokia's N-Gage gaming smartphone from 2003 did not perform as well as expected, and its upgraded QD version did not make any significant impact on the N-Gage's reputation. Instead of developing a new gaming device, there was a change in concept: Nokia explained during E3 2005 that it was planning to put an N-Gage platform on several smartphone devices, rather than releasing a specific device. The platform was often nicknamed N-Gage Next Generation by the public.
Working behind closed doors, Nokia took a little more than a year before finally announcing the N-Gage mobile gaming service at E3 2006, set for a 2007 release. It also started showing off next-gen titles such as System Rush: Evolution and Hooked On: Creatures of the Deep, with the fighting game ONE perhaps being the most visually impressive, even making use of motion capture. Also shown were Pocket Aces, Space Impact, and Pro Series Golf.
N-Gage was unveiled behind closed doors in January 2007 at a conference where developers and publishers such as EA Mobile, Capcom and Glu Mobile were reportedly present. In February 2007, Nokia announced a pilot service in Finland to promote the upcoming service. Nokia showed off previews of the service at the 2007 Game Developers Conference in San Francisco, California. On 27 August 2007, Nokia confirmed that a previously leaked N-Gage logo was the official logo for the upcoming service.
Launch
The N-Gage gaming service in its final form was finally announced by Nokia on 29 August 2007. Nokia used the tagline Get out and play to promote the platform. It was supposed to be released in December 2007, but it was delayed as Nokia's team were making sure the service ran 'smoothly'. By this time, Nokia had attracted a healthy number of third-party publishers, including Electronic Arts, THQ, Gameloft and Capcom.
First Access
A public beta test of the N-Gage application took place from 4 February 2008 to 27 March 2008, though limited to the N81. This period of time was referred to as "First Access" and was only a public test of the client, which could be downloaded for free from the N-Gage website. While not the final version, the user had access to most of the features that the new application had to offer, along with three games to try out: Hooked On: Creatures of the Deep, System Rush: Evolution and Space Impact: Kappa Base. None of the games were entirely free, but all offered a limited trial for testing purposes.
Just one day after the start of the First Access, hackers had already managed to unpack the N-Gage installation file into components, which could then be installed separately, thus removing the N81-only limitation. N-Gage was subsequently reported working on other Nokia Nseries devices, such as the N73 and N95.
Jaakko Kaidesoja, Head of New Experience at Nokia Play, had this to say to Pocket Gamer in an interview on 21 February 2008 when asked about the early feedback they had received: "The feedback has been positive and well received within the company and some critical comments were well received as well. We know it's not perfect yet and there are some features people want more of. Those are the things we want to check and get on the roadmap."
Public release
After numerous delays and many vague release dates, the N-Gage platform finally went live to the public on 3 April 2008 through the official N-Gage website. The launch titles also changed from six to five: Asphalt 3: Street Rules, Brain Challenge, Hooked On: Creatures of the Deep, System Rush: Evolution, and World Series of Poker: Pro Challenge. The first two titles were not included on the original list (which included Block Breaker Deluxe and Tetris instead). The sixth, postponed game was Space Impact: Kappa Base. The five initially supported handsets were: Nokia N81, N81 8GB, N82, N95 and N95 8GB.
Some hours after the launch, the man behind the official N-Gage Blog, Ikona, said about the delay: "We are currently ensuring Block Breaker Deluxe, Space Impact: Kappa Base, and Tetris are running smoothly with our new application. These should be available in the showroom next week or two." Four days later, on April 7, Nokia posted its official press release commenting on the release of the new mobile service, at which point FIFA 08 also became available for purchase. There were reports in May 2008 that some gamers were "angry" about N-Gage's digital rights management (DRM) protection, in that every game purchased would be locked not to the user's account but to the handset, meaning they would have to buy the game again if they changed handsets.
Compatibility
The N-Gage platform was compatible with: Nokia N78, N79, N81, N81 8GB, N82, N85, N86, N86 8MP, N95, N95 8GB, N96, N97, Nokia 5320 XpressMusic, 5630 XpressMusic, 5730 XpressMusic, Nokia 6210 Navigator, 6710 Navigator, 6720 Classic, E52, E55 and E75. Due to memory issues, support for the Nokia N73, N93 and N93i was cancelled.
Because N-Gage is a software-based solution, the first-generation MMC games are not compatible with the new platform, though some games made a comeback in the form of a sequel (e.g. System Rush: Evolution) or a remake/port (e.g. Mile High Pinball). Similarly, games developed for this next-gen N-Gage platform do not work on the original N-Gage or N-Gage QD; in addition, newer S60 software, including the N-Gage client and games, is not binary-compatible with older S60 devices and vice versa.
Interface and social features
The N-Gage client app functioned as an app store, software updater, instant messaging client, and personal achievement record. Nokia was inspired by Microsoft's Xbox Live service in creating the user interface of the app. At the top of the N-Gage launcher are five tabs, one for each function. The My Games screen shows all the games that are currently installed on the phone. The Profile tab displays the user's profile, showing how many N-Gage points the user has scored, their reputation level (ranging from 1 to 5 stars), the number of friends they have, and their avatar/picture. Users could also track progress through trophies/achievements.
The Showroom displayed all games that were available for download, as well as Game Extras for expanding a game with extra content. Games could be downloaded directly to the phone over the air (by GPRS, 3G or WiFi), or the user could download them to a computer and then install them onto the phone using a USB cable and Nokia PC Suite.
N-Gage Arena was the online service for the N-Gage community and included message boards, live chat, sharing of user-created content, tournament activities, and instant messaging. Users could also invite friends to play a game.
Closure and legacy
On 30 October 2009, Nokia announced that no new N-Gage games would be produced, effectively shutting down the N-Gage platform. All N-Gage services, including purchasing of games and various online features, had reportedly ceased operation by the end of 2010. Later, on 31 March 2011, Nokia closed its DRM activation service, leaving customers unable to reactivate their purchases in the case of a device format or software update. No transition of their purchases was made to the Ovi Store, and no compensation was given because, according to support staff, software purchases were only supported for one year.
Some gaming websites, e.g. Pocket Gamer, link N-Gage's failure to the overwhelming competition it faced from Apple's App Store, while Ovi Gaming cited poor implementation and support from its parent company, Nokia. A bad development model and poor marketing have also been cited. Ewan Spence of All About Symbian wrote that keeping the "N-Gage" name, despite the failure of its predecessor, was a mistake. He also noted that N-Gage titles simply did not sell well enough compared to their Java and iPhone OS counterparts.
Awards
Several of the N-Gage 2.0 games were nominated for International Mobile Gaming Awards in 2007.
Two of the three nominated N-Gage 2.0 titles received an award:
ONE by Digital Legends won the Best 3D award.
Dirk Dagger and the Fallen Idol by Jadestone won the Best Gameplay award.
Hooked On: Creatures of the Deep by Infinite Dreams Inc. was nominated for Best Gameplay, but did not receive the award.
On 8 May 2008, Hooked On: Creatures of the Deep won a Games Award during the 2008 Meffy Awards in Cannes.
Technical details
Specifications
In order for the N-Gage platform and games to run smoothly, all N-Gage compatible mobile devices share a common set of specifications:
Screen: landscape or portrait 320 x 240 pixels (except the N97, with a 640 x 360 pixel screen, on which graphics are scaled up and displayed in a letterbox format to keep the aspect ratio; see the sketch after this list)
OS: Symbian S60 3rd edition (S60 5th edition on N97)
Interface: 5-way (up, down, left, right, center) directional pad, dedicated action buttons Circle and Square (mapped onto keypad '5' and '0' in portrait mode) and 2 contextual buttons. Touch screen interactions were not supported (the N97 emulated the action buttons as on-screen buttons)
Connectivity: 3G or WiFi (required for connecting to the N-Gage platform for downloading games and for online functions such as rankings and multiplayer)
CPU: ARM11 with speeds ranging from 369 MHz (N81) to 600 MHz (E52)
GPU: 3D graphics hardware acceleration supported (games running on hardware-accelerated devices such as the N95 have enhanced performance)
Audio: Stereo channel
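The N97 case in the screen entry above comes down to uniform scale-to-fit plus bars. A minimal sketch of that arithmetic in Python (the helper name is ours; the resolutions are those listed above):

# Uniform scale-to-fit with bars, as applied to 320x240 N-Gage graphics
# shown on the N97's 640x360 screen (resolutions from the specification list).
def fit_with_bars(src_w, src_h, dst_w, dst_h):
    scale = min(dst_w / src_w, dst_h / src_h)  # largest uniform scale that fits
    out_w, out_h = round(src_w * scale), round(src_h * scale)
    bar_x = (dst_w - out_w) // 2  # bar width on each side
    bar_y = (dst_h - out_h) // 2  # bar height above and below
    return out_w, out_h, bar_x, bar_y

print(fit_with_bars(320, 240, 640, 360))  # (480, 360, 80, 0)

With these resolutions the image scales to 480 x 360 and the 80-pixel bars fall at the sides of the screen.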
Software development
N-Gage games are packaged differently from normal Symbian applications, have the extension ".n-gage", and can only run via the N-Gage application. The game resources are protected by DRM.
They cannot use any native Symbian APIs; instead they use a proprietary API from the N-Gage SDK. N-Gage was also designed to make it easier for developers to port games to the platform: the SDK abstracts Symbian OS and provides a POSIX-compliant, standard C/C++ layer over Symbian OS. This meant that developers no longer had to learn Symbian OS C++ idioms, like active objects and descriptors, before they could port their code. Hence it sped up the process of porting to N-Gage as compared to the original N-Gage hardware device.
The N-Gage API is in fact an extension of the RGA API available in the Open C++ plug-in. Only select companies were allowed access to the N-Gage SDK. To gain access, they first had to be approved by Nokia and sign an NDA.
Games library
As of 23 October 2009, there were 49 games released officially on N-Gage. Many other games were cancelled with the shutdown of N-Gage. Some of these games are sequels, remakes or ports of the first-generation N-Gage MMC games.
Cancelled titles
See also
Scalable Network Application Package (SNAP)
Nokia Game
Club Nokia
Xbox Live
Steam
Game Center
References
External links
N-Gage's official website
Nokia's official website
Get Out And Play, an N-Gage promoting website, owned by Nokia
Shutdown announcement on the N-Gage Blog (archived)
Mobile software
Nokia platforms
Nokia services
Online video game services
Seventh-generation video game consoles
Mobile software distribution platforms | N-Gage (service) | [
"Technology"
] | 2,724 | [
"Mobile content",
"Mobile software distribution platforms"
] |
11,935,110 | https://en.wikipedia.org/wiki/STAT6 | Signal transducer and activator of transcription 6 (STAT6) is a transcription factor that belongs to the Signal Transducer and Activator of Transcription (STAT) family of proteins. The proteins of the STAT family transmit signals from a receptor complex to the nucleus and activate gene expression. Like other STAT family proteins, STAT6 is activated by growth factors and cytokines. STAT6 is mainly activated by the cytokines interleukin-4 and interleukin-13.
Molecular biology
In the human genome, STAT6 protein is encoded by the STAT6 gene, located on chromosome 12 at q13.3-q14.1. The gene encompasses over 19 kb and consists of 23 exons. STAT6 shares structural similarity with the other STAT proteins and is composed of an N-terminal domain, DNA-binding domain, SH3-like domain, SH2 domain and transactivation domain (TAD).
STAT proteins are activated by the Janus family (JAK) tyrosine kinases in response to cytokine exposure. STAT6 is activated by the cytokines interleukin-4 (IL-4) and interleukin-13 (IL-13) through their receptors, which both contain the α subunit of the IL-4 receptor (IL-4Rα). Tyrosine phosphorylation of STAT6 after stimulation by IL-4 results in the formation of STAT6 homodimers that bind specific DNA elements via a DNA-binding domain.
Function
The STAT6-mediated signaling pathway is required for the development of T-helper type 2 (Th2) cells and the Th2 immune response. Expression of Th2 cytokines, including IL-4, IL-13, and IL-5, was reduced in STAT6-deficient mice. STAT6 protein is crucial in IL-4-mediated biological responses. It was found that STAT6 induces the expression of BCL2L1/BCL-X(L), which is responsible for the anti-apoptotic activity of IL-4. IL-4 stimulates phosphorylation of the IL-4 receptor, which recruits cytosolic STAT6 via its SH2 domain; STAT6 is then phosphorylated on tyrosine 641 (Y641) by JAK1, which results in the dimerization and nuclear translocation of STAT6 to activate target genes. Knockout studies in mice suggested roles of this gene in the differentiation of T helper 2 (Th2) cells, expression of cell surface markers, and class switching of immunoglobulins.
Activation of the STAT6 signaling pathway is necessary for macrophage function and is required for the M2 subtype activation of macrophages. STAT6 also regulates other transcription factors such as Gata3, an important regulator of Th2 differentiation. STAT6 is also required for the development of IL-9-secreting T cells.
STAT6 also plays a critical role in Th2 lung inflammatory responses, including clearance of parasitic infections, and in the pathogenesis of asthma. Th2-cell-derived cytokines such as IL-4 and IL-13 induce the production of IgE, which is a major mediator in the allergic response. Association studies searching for a relationship between STAT6 polymorphisms and IgE levels or asthma have discovered a few polymorphisms significantly associated with the examined traits. Only two polymorphisms have repeatedly shown significant clinical association and/or a functional effect on STAT6 (GT repeats in exon 1 and the rs324011 polymorphism in intron 2).
Interactions
STAT6 has been shown to interact with:
CREB-binding protein,
EP300,
IRF4,
NFKB1,
Nuclear receptor coactivator 1, and
SND1.
Pathology
Gene fusion
Recurrent somatic fusions of the two genes, NGFI-A–binding protein 2 (NAB2) and STAT6, located at chromosomal region 12q13, have been identified in solitary fibrous tumors.
Amplification
STAT6 is amplified in a subset of dedifferentiated liposarcoma.
See also
Interleukin 4
References
Further reading
External links
Gene expression
Immune system
Proteins
Transcription factors
Signal transduction | STAT6 | [
"Chemistry",
"Biology"
] | 869 | [
"Biomolecules by chemical classification",
"Immune system",
"Gene expression",
"Signal transduction",
"Organ systems",
"Molecular genetics",
"Induced stem cells",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins",
"Neurochemistry",
"Transcription factors"
] |
11,935,111 | https://en.wikipedia.org/wiki/STAT4 | Signal transducer and activator of transcription 4 (STAT4) is a transcription factor belonging to the STAT protein family, composed of STAT1, STAT2, STAT3, STAT4, STAT5A, STAT5B and STAT6. STAT proteins are key activators of gene transcription which bind to DNA in response to cytokines. STAT proteins are a common part of Janus kinase (JAK) signalling pathways activated by cytokines. STAT4 is required for the development of Th1 cells from naive CD4+ T cells and for IFN-γ production in response to IL-12. There are two known STAT4 transcripts, STAT4α and STAT4β, differing in the levels of interferon-gamma (IFN-γ) production they drive downstream.
Structure
The human as well as the murine STAT4 gene lies next to the STAT1 gene locus, suggesting that the genes arose by gene duplication. STAT proteins have six functional domains:
1. N-terminal interaction domain – crucial for dimerization of inactive STATs and nuclear translocation;
2. helical coiled-coil domain – association with regulatory factors;
3. central DNA-binding domain – binding to the enhancer region of IFN-γ activated sequence (GAS) family genes;
4. linker domain – assisting during the DNA-binding process;
5. Src homology 2 (SH2) domain – critical for specific binding to the cytokine receptor after tyrosine phosphorylation;
6. C-terminal transactivation domain – triggering the transcriptional process.
The length of the protein is 748 amino acids, and its molecular weight is 85,941 daltons.
Expression
Distribution of STAT4 is restricted to myeloid cells, the thymus and the testis. In resting human T cells it is expressed at very low levels, but its production is amplified by PHA stimulation.
Cytokines activating STAT4
IL-12
The pro-inflammatory cytokine IL-12 is produced in heterodimer form by B cells and antigen-presenting cells. Binding of IL-12 to IL-12R, which is composed of two different subunits (IL12Rβ1 and IL12Rβ2), leads to the interaction of IL12Rβ1 and IL12Rβ2 with JAK2 and TYK2, which is followed by phosphorylation of STAT4 tyrosine 693. The pathway then induces IFN-γ production and Th1 differentiation. STAT4 is critical in promoting the antiviral response of natural killer (NK) cells by targeting the promoter regions of Runx1 and Runx3.
IFNα and IFNβ
IFNα and IFNβ, secreted by leukocytes and fibroblasts respectively, together regulate antiviral immunity, cell proliferation and anti-tumor effects. In the viral infection signalling pathway, either IFNα or IFNβ binds to the IFN receptor (IFNAR), composed of IFNAR1 and IFNAR2, immediately followed by the phosphorylation of STAT1 and STAT4 and the induction of IFN target genes. During the initial phase of viral infection in NK cells, STAT1 activation is replaced by the activation of STAT4.
IL-23
Monocytes, activated dendritic cells (DC) and macrophages stimulate the accumulation of IL-23 after exposure to molecules of Gram-positive/negative bacteria or viruses. The receptor for IL-23 contains IL12Rβ1 and IL23R subunits, which upon binding of IL-23 promote the phosphorylation of STAT4. The presence of IL12Rβ1 enables similar, although weaker, downstream activity compared to IL-12. During chronic inflammation, the IL-23/STAT4 signalling pathway is involved in the induction of differentiation and expansion of Th17 pro-inflammatory T helper cells.
Additionally, other cytokines like IL-2, IL-27, IL-35, IL-18 and IL-21 are known to activate STAT4.
Inhibitors of STAT4 signalling pathways
In cells with progressively increasing expression of IL-12 and IL-6, the production and activity of suppressors of cytokine signalling (SOCS) suppress cytokine signalling and phosphorylation in JAK-STAT pathways in a negative feedback loop.
Other suppressors of the pathways are: protein inhibitor of activated STAT (PIAS) (regulation of transcriptional activity in the nucleus, observed in the STAT4-DNA binding complex), protein tyrosine phosphatase (PTP) (removal of phosphate groups from phosphorylated tyrosines in JAK/STAT pathway proteins), STAT-interacting LIM protein (SLIM) (a STAT ubiquitin E3 ligase blocking the phosphorylation of STAT4) and microRNA (miRNA) (degradation of STAT4 mRNA and its post-transcriptional regulation).
Target genes
STAT4 binds to hundreds of sites in the genome, among others to the promoters of genes for cytokines (IFN-γ, TNF), receptors (IL18R1, IL12Rβ2, IL18RAP), and signaling factors (MYD88).
Disease
STAT4 is involved in several autoimmune diseases and cancers in animal models and humans, contributing significantly to disease progression and pathology. STAT4 levels were significantly increased in patients with ulcerative colitis and in skin T cells of psoriatic patients. Moreover, STAT4-/- mice developed less severe experimental autoimmune encephalomyelitis (EAE) than wild-type mice.
Intronic single nucleotide polymorphisms (SNPs), mostly in the third intron of STAT4, have been shown to be associated with immune dysregulation and autoimmunity, including systemic lupus erythematosus (SLE) and rheumatoid arthritis, as well as Sjögren's disease (SD), systemic sclerosis, psoriasis and type-1 diabetes. The high incidence of STAT4 genetic polymorphisms associated with susceptibility to autoimmune diseases is a reason to consider STAT4 a general autoimmune disease susceptibility locus.
References
Further reading
External links
Gene expression
Immune system
Proteins
Transcription factors
Signal transduction | STAT4 | [
"Chemistry",
"Biology"
] | 1,276 | [
"Biomolecules by chemical classification",
"Immune system",
"Gene expression",
"Signal transduction",
"Organ systems",
"Molecular genetics",
"Induced stem cells",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins",
"Neurochemistry",
"Transcription factors"
] |
11,935,113 | https://en.wikipedia.org/wiki/STAT2 | Signal transducer and activator of transcription 2 is a protein that in humans is encoded by the STAT2 gene. It is a member of the STAT protein family. This protein is critical for the biological response to type I interferons (IFNs). It functions as a transcription factor downstream of type I interferons. STAT2 sequence identity between mouse and human is only 68%.
Function
The protein encoded by this gene is a member of the STAT protein family. In response to cytokines and growth factors, STAT family members are phosphorylated by receptor-associated kinases and then form homo- or heterodimers that translocate to the cell nucleus, where they act as transcription activators. In response to IFN, this protein forms a complex with STAT1 and the IFN regulatory factor family protein p48 (IRF9), forming ISGF-3 (IFN-stimulated gene factor-3), in which this protein acts as a transactivator but lacks the ability to bind DNA directly. The protein mediates innate antiviral activity. Mutations in this gene result in Immunodeficiency 44. ISGF-3 drives the activation of genes via the IFN-stimulated response element (ISRE). ISRE-driven genes include Ly-6C, the double-stranded RNA kinase (PKR), 2′-5′ oligoadenylate synthetase (OAS), MX and potentially MHC class I. The transcription adaptor P300/CBP (EP300/CREBBP) has been shown to interact specifically with this protein, an interaction thought to be involved in the process by which adenovirus blocks the IFN-alpha response.
STAT2 knockout mice are unresponsive to type I IFN and extremely vulnerable to viral infection. They show loss of the type I IFN autocrine loop and several defects in macrophage and T cell responses. Stat2-/- cells also show differences in the biological response to IFN-α.
Interactions
STAT2 has been shown to interact with:
CREB-binding protein,
IFNAR1
IFNAR2,
IRF9,
MED14,
SMARCA4, and
STAT1.
STAT2 deficiency
Knockout mice
In double knockout STAT2 mice, an increased proliferation of M1, M2, and M1/M2 coexpressing macrophages is observed during influenza-bacterial super-infection. Bacterial clearance was also impaired by neutralization of IFN-γ (M1) and Arginase-1 (M2), which suggests that pulmonary macrophages expressing a mixed M1/M2 phenotype promote bacterial control during influenza-bacterial super-infection. STAT2 signaling is therefore associated with suppressing macrophage activation and bacterial control during influenza-bacterial super-infection. These mice demonstrate no developmental defects. In a Vesicular stomatitis Indiana virus (VSV) model, knockout STAT2 and double knockout STAT mice produce at least 10 times more virus plaque-forming units than the wild type (WT). IFN-α pretreatment conferred protection in WT and STAT2+/- cells but not in double knockout STAT2 cells. IFN-γ pretreatment did not provide any antiviral response during VSV infection. This finding could be explained by the reduced level of STAT1 in cells of STAT2 knockout mice. Additionally, double knockout STAT2 mice are more sensitive than control mice to mouse cytomegalovirus (MCMV), severe fever with thrombocytopenia syndrome virus, influenza virus, dengue virus (DENV) and Zika virus, which suggests that STAT2 plays a critical role in the suppression of virus replication in mice.
Human autosomal recessive (AR) STAT2 deficiency
AR STAT2 deficiency was first observed in two siblings. After routine immunization with the measles-mumps-rubella (MMR) vaccine, one sibling developed disseminated vaccine-strain measles but recovered, and the second sibling died in infancy from a viral infection due to a primary immunodeficiency disorder. Later, results showed that the siblings were homozygous for a mutation abolishing expression of the STAT2 gene. Patients with AR STAT2 deficiency carry mutations that cause substitutions at important splice sites, leading to defective splicing and premature stop codons and thus to loss of expression of an interferon-stimulated gene. The typical clinical phenotype is disseminated infection after immunization with the live attenuated MMR vaccine. Some patients also had an onset of severe disease in infancy, such as infection with RSV, norovirus, coxsackievirus, adenovirus or enterovirus. One of the patients had CNS disease after primary infection with EBV. EBV suppression was delayed in peripheral blood and cerebrospinal fluid, as type I interferon signalling plays an important role in the initial immune response against EBV. During the next 3 years, PCR testing showed persistent EBV in blood as well as in cerebrospinal fluid despite anti-EBV IgG. CMV and VZV infections were also severe in a few patients. The viral infections were treated with high-dose intravenous immunoglobulin (IVIG), after which patients recovered and became afebrile within 24 hours. IVIG has an anti-inflammatory effect, suggesting that passive immunization could help to control ongoing viral infections. Therefore, monthly IgG therapy could be beneficial for patients with STAT2 deficiency during childhood, until their adaptive immune system has sufficiently developed. From the age of 5 years, the frequency and severity of viral infections decreased, and by the age of 10 years the patients were mostly off all medication. In general, patients with STAT2 deficiency are relatively healthy, with no specific defects in their adaptive immunity or developmental abnormalities. These findings show that type I IFN signaling through ISGF3 is not essential for host defense against the majority of common childhood viral pathogens. Despite a profoundly defective innate IFN response and evident susceptibility to some viral infections, STAT2-deficient individuals can live a relatively healthy life.
A homozygous STAT2 missense mutation (R148W/Q) resulting in a STAT2 gain of function has also been reported, underlying fatal early-onset autoinflammation in three patients. This mutation leads to a persistent type I IFN response due to defective binding of the mutated STAT2 to ubiquitin-specific peptidase 18 (USP18), which is essential in the negative feedback loop in which USP18 sterically hinders the binding of JAK1 to IFNAR1. Complete AR STAT2 deficiency usually causes disseminated live attenuated vaccine (LAV) infection and recurrent natural viral infections, although penetrance is incomplete for several viral infections and for complicated live measles vaccine disease. These observations suggest that the phenotype of AR STAT2 deficiency could range from asymptomatic (the healthy adult) to fatal (childhood death from an overwhelming viral disease). The phenotype is less severe than complete human AR STAT1 deficiency but more severe than IFNAR1 or IFNAR2 deficiency. The human phenotype is less severe than that in mice.
References
Further reading
External links
Gene expression
Immune system
Proteins
Transcription factors
Signal transduction | STAT2 | [
"Chemistry",
"Biology"
] | 1,531 | [
"Biomolecules by chemical classification",
"Immune system",
"Gene expression",
"Signal transduction",
"Organ systems",
"Molecular genetics",
"Induced stem cells",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins",
"Neurochemistry",
"Transcription factors"
] |
11,935,297 | https://en.wikipedia.org/wiki/Biosurfactant | Biosurfactant usually refers to surfactants of microbial origin. Most of the biosurfactants produced by microbes are synthesized extracellularly and many microbes are known to produce biosurfactants in large relative quantities. Some are of commercial interest. As secondary metabolites of microorganisms, biosurfactants can be produced by cultivating biosurfactant-producing microorganisms in the stationary phase on many kinds of low-cost substrates such as biochar, plant oils, carbohydrates and wastes. High-level production of biosurfactants can be controlled by regulating environmental factors and growth conditions.
Classification
Biosurfactants are usually categorized by their molecular structure. Like synthetic surfactants, they are composed of a hydrophilic moiety made up of amino acids, peptides, (poly)saccharides, or sugar alcohols and a hydrophobic moiety consisting of fatty acids. Correspondingly, the significant classes of biosurfactants include glycolipids, lipopeptides and lipoproteins, and polymeric surfactants as well as particulate surfactants.
Examples
Common biosurfactants include:
Bile salts are mixtures of micelle-forming compounds that encapsulate food, enabling absorption through the small intestine.
Lecithin, which can be obtained either from soybean or from egg yolk, is a common food ingredient.
Rhamnolipids, which can be produced by some species of Pseudomonas, e.g., Pseudomonas aeruginosa.
Sophorolipids are produced by various nonpathogenic yeasts.
Emulsan produced by Acinetobacter calcoaceticus.
Surfactin is a non-ribosomal lipopeptide produced by Bacillus subtilis.
Microbial biosurfactants are obtained by including immiscible liquids in the growth medium.
Applications
Potential applications include herbicide and pesticide formulations, detergents, healthcare and cosmetics, pulp and paper, coal, textiles, ceramic processing and food industries, uranium ore-processing, and mechanical dewatering of peat.
Oil spill remediation
Biosurfactants enhance the emulsification of hydrocarbons, so they have the potential to solubilise hydrocarbon contaminants and increase their availability for microbial degradation. In addition, biosurfactants can modify the cell surface of bacteria that biodegrade hydrocarbons, which can also increase the bioavailability of these pollutants to the cells. These compounds can also be used in enhanced oil recovery and may be considered for other potential applications in environmental protection.
References
External links
Production and Characterization of Biosurfactants Using Bacteria Isolated from Acidic Hot Springs
Surfactants
Bioremediation
Cleaning product components
Environmental terminology | Biosurfactant | [
"Chemistry",
"Technology",
"Biology",
"Environmental_science"
] | 602 | [
"Biodegradation",
"Ecological techniques",
"Bioremediation",
"Environmental soil science",
"Components",
"Cleaning product components"
] |
11,935,345 | https://en.wikipedia.org/wiki/NPAS1 | NPAS1 is a basic helix-loop-helix transcription factor.
See also
NPAS3
References
External links
Transcription factors
PAS-domain-containing proteins | NPAS1 | [
"Chemistry",
"Biology"
] | 32 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
11,935,396 | https://en.wikipedia.org/wiki/TAL1 |
T-cell acute lymphocytic leukemia protein 1 (i.e. TAL1 but also termed stem cell leukemia/T-cell acute leukemia 1 [i.e. SCL/TAL1]) is a protein that in humans is encoded by the TAL1 gene.
The protein encoded by TAL1 is a basic helix-loop-helix transcription factor.
Interactions
TAL1 has been shown to interact with:
CBFA2T3,
EP300,
GATA1,
LDB1,
LMO1,
LMO2,
SIN3A,
Sp1 transcription factor, and
TCF3.
References
Further reading
External links
Transcription factors | TAL1 | [
"Chemistry",
"Biology"
] | 134 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
11,936,154 | https://en.wikipedia.org/wiki/Unicode%20and%20HTML%20for%20the%20Hebrew%20alphabet | The Unicode and HTML for the Hebrew alphabet are found in the following tables. The Unicode Hebrew block extends from U+0590 to U+05FF and from U+FB1D to U+FB4F. It includes letters, ligatures, combining diacritical marks (niqqud and cantillation marks) and punctuation. The Numeric Character References are included for HTML. These can be used in many markup languages, and they are often used on web pages to create the Hebrew glyphs presentable by the majority of web browsers.
Unicode
Character table
Compact table
Note I: The ligatures are intended for Yiddish. They are not used in Hebrew.
Note II: The symbol ״ (U+05F4) is called gershayim and is a punctuation mark used in the Hebrew language to denote acronyms. It is written before the last letter in the acronym. Gershayim is also the name of a note of cantillation in the reading of the Torah, printed above the accented letter.
Remaining graphs are in the Alphabetic Presentation Forms block:
Note: In Yiddish orthography only, the glyph, (), pronounced , can be optionally used, rather than typing then (). In Hebrew spelling this would be pronounced . is written under the previous letter then ().
HTML code tables
Note: HTML numeric character references can be in decimal format (&#DDDD;) or hexadecimal format (&#xHHHH;). For example, ג and ג (where "05D2" in hexadecimal is the same as "1490" in decimal) both represent the Hebrew letter gimmel.
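To make the note above concrete, both reference forms can be generated mechanically from a character's code point. The short Python sketch below does this for gimmel; the choice of letter is just an example.

```python
# Build the decimal and hexadecimal HTML numeric character references
# for a Hebrew letter; gimmel (U+05D2) is used as the example.
ch = "\u05D2"                 # gimmel
cp = ord(ch)                  # code point as an integer: 1490
dec_ncr = f"&#{cp};"          # decimal form:     "&#1490;"
hex_ncr = f"&#x{cp:04X};"     # hexadecimal form: "&#x05D2;"
print(ch, dec_ncr, hex_ncr)
```

Either form renders as the same glyph in a browser; the hexadecimal form has the convenience of matching the Unicode code charts directly.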
See also
Alphabetic Presentation Forms (Unicode block)
Hebrew (Unicode block)
Hebrew alphabet
Niqqud
Yiddish orthography
External links
Unicode Hebrew: Range 0590-05FF
Alphabetic Presentation Forms: Range FB00-FB4F
Hebrew Unicode Chart
Character encoding
Hebrew alphabet | Unicode and HTML for the Hebrew alphabet | [
"Technology"
] | 417 | [
"Natural language and computing",
"Character encoding"
] |
11,936,548 | https://en.wikipedia.org/wiki/HD%2011506 | HD 11506 is a star in the equatorial constellation of Cetus. It has a yellow hue and can be viewed with a small telescope but is too faint to be visible to the naked eye, having an apparent visual magnitude of 7.51. The distance to this object is 167 light-years based on parallax, but it is drifting closer to the Sun with a radial velocity of −7.5 km/s. It has an absolute magnitude of 3.94.
This object is an ordinary G-type main-sequence star with a stellar classification of G0V, which indicates it is generating energy via hydrogen fusion at its core. It is around 1.6 billion years old and is spinning with a projected rotational velocity of 5 km/s. The star has 112% of the mass of the Sun and 106% of the Sun's radius. The spectrum shows a higher than solar abundance of elements other than hydrogen and helium – what astronomers term the metallicity. The star is radiating 117% of the Sun's luminosity from its photosphere at an effective temperature of 5,833 K.
Planetary system
The superjovian planet HD 11506 b was discovered orbiting the star by the N2K Consortium in 2007 using the Doppler spectroscopy method. In 2009, a second planet discovery was claimed based on Bayesian analysis of the original data. However, in 2015 additional radial velocity measurements showed that the planetary parameters were significantly different from those determined by the Bayesian analysis. An additional linear trend in the radial velocities indicated a stellar or planetary companion on a long-period orbit.
In 2022, the presence of a third planet was confirmed, and the mass and inclination of both planet b and the new planet d were measured via astrometry. A 2024 study also confirmed HD 11506 d, but found a significantly wider orbit and greater mass than previously estimated. This object orbits with a 73-year period, and at about 13 times the mass of Jupiter, it is at the borderline of being a brown dwarf.
References
G-type main-sequence stars
Planetary systems with three confirmed planets
Cetus
BD-20 0358
011506
008770 | HD 11506 | [
"Astronomy"
] | 449 | [
"Cetus",
"Constellations"
] |
11,936,619 | https://en.wikipedia.org/wiki/HD%2011506%20b | HD 11506 b is an extrasolar planet that orbits the star HD 11506, 167 light-years away in the constellation of Cetus. This planet was discovered in 2007 by the N2K Consortium using the Keck telescope to detect the radial velocity variation of the star caused by the planet. A second planet, HD 11506 c, was discovered in 2015.
In 2022, the true mass and inclination of HD 11506 b were measured via astrometry, along with the discovery of a third planet in the system.
See also
HD 11506 c
References
Exoplanets discovered in 2007
Giant planets
Cetus
Exoplanets detected by radial velocity
Exoplanets detected by astrometry | HD 11506 b | [
"Astronomy"
] | 145 | [
"Cetus",
"Constellations"
] |
11,936,677 | https://en.wikipedia.org/wiki/HD%2017156 | HD 17156, named Nushagak by the IAU, is a yellow subgiant star approximately 255 light-years away in the constellation of Cassiopeia. The apparent magnitude is 8.17, which means it is not visible to the naked eye but can be seen with good binoculars. A search for a binary companion star using adaptive optics at the MMT Observatory was negative.
The star is more massive and larger than the Sun, while its absolute magnitude of 3.70 and spectral type of G0 show that it is both hotter and more luminous. Based on asteroseismic density constraints and stellar isochrones, its age was estimated at 3.37 billion years, making it about two thirds as old as the Sun. Spectral observations show that the star is metal-rich.
An extrasolar planet, HD 17156 b, was discovered with the radial velocity method in 2007, and subsequently was observed to transit the star. At the time it was the transiting planet with the longest period.
Name
The star was given the name Nushagak by the IAU, chosen by United States representatives for the NameExoWorlds contest, with the comment that "Nushagak is a regional river near Dillingham, Alaska, which is famous for its wild salmon that sustain local Indigenous communities." HD 17156 b was given the designation Mulchatna, as the Mulchatna is a tributary of the Nushagak river.
Planetary system
It is the first star in Cassiopeia around which an orbiting planet was discovered (in 2007) using the radial velocity method. Later observations showed that this planet also transited the star.
In February 2008, a second planet was proposed, with a 5:1 mean motion resonance to the inner planet HD 17156 b, though in 2017 this planet candidate was retracted.
See also
List of stars with extrasolar planets
References
External links
Extrasolar Planet Interactions by Rory Barnes & Richard Greenberg, Lunar and Planetary Lab, University of Arizona
Cassiopeia (constellation)
Planetary transit variables
Planetary systems with one confirmed planet
G-type subgiants
017156
013192
Durchmusterung objects
Nushagak | HD 17156 | [
"Astronomy"
] | 451 | [
"Cassiopeia (constellation)",
"Constellations"
] |
11,938,133 | https://en.wikipedia.org/wiki/DailyMed | DailyMed is a website operated by the U.S. National Library of Medicine (NLM) to publish up-to-date and accurate drug labels (also called "package inserts") to health care providers and the general public. The content of DailyMed is provided and updated daily by the U.S. Food and Drug Administration (FDA), which in turn collects this information from the pharmaceutical industry.
The documents published use the HL7 version 3 Structured Product Labeling (SPL) standard, which is an XML format that combines the human readable text of the product label with structured data elements that describe the composition, form, packaging, and other properties of the drug products in detail according to the HL7 Reference Information Model (RIM).
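As a rough illustration of consuming an SPL document, the Python sketch below uses only the standard library. The HL7 v3 namespace URI is the one used by SPL, but the file name is hypothetical and the exact elements present vary from label to label, so this is a sketch under assumptions rather than a reference implementation.

```python
# Minimal sketch: read a Structured Product Labeling (SPL) XML file and
# print a few structured fields. "example_spl_label.xml" is a
# hypothetical local file; real labels can be downloaded from DailyMed.
import xml.etree.ElementTree as ET

NS = {"v3": "urn:hl7-org:v3"}  # HL7 v3 namespace used by SPL documents

root = ET.parse("example_spl_label.xml").getroot()

# The document-level <title> usually carries the product display name.
title = root.find("v3:title", NS)
if title is not None:
    print("Label title:", "".join(title.itertext()).strip())

# Ingredient substances appear as <ingredientSubstance><name> elements.
for substance in root.iter("{urn:hl7-org:v3}ingredientSubstance"):
    name = substance.find("v3:name", NS)
    if name is not None and name.text:
        print("Ingredient:", name.text.strip())
```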
As of its last reported count, it contained information about 140,232 drug listings.
It includes an RSS feed for updated drug information.
History
In 2006 the FDA revised the drug label and also created DailyMed to keep prescription information up to date.
See also
Consumer Product Information Database, ingredients of household products
Environmental Working Group, which maintains a database of cosmetics ingredients
References
External links
labels.fda.gov Drug labels at FDA website
American medical websites
United States National Library of Medicine
Medical search engines
Medical databases
Online databases
Health informatics | DailyMed | [
"Chemistry",
"Biology"
] | 255 | [
"Pharmacology",
"Health informatics",
"Medicinal chemistry stubs",
"Pharmacology stubs",
"Medical technology"
] |
11,938,726 | https://en.wikipedia.org/wiki/Euchiloglanis | Euchiloglanis is a genus of sisorid catfishes native to Asia.
Species
There are currently five recognized species in this genus:
Euchiloglanis dorsoarcus V. H. Nguyễn, 2005
Euchiloglanis kishinouyei Sh. Kimura, 1934
Euchiloglanis longibarbatus W. Zhou, X. Li & A. W. Thomson, 2011
Euchiloglanis nami Hau Duc Tran, Duc Huu Nguyen, Huong Thanh Thi Dang, Huy Quang Nguyen & Nga Thi Nguyen, 2023
Euchiloglanis phongthoensis V. H. Nguyễn, 2005
References
Sisoridae
Fish of Asia
Taxa named by Charles Tate Regan
Freshwater fish genera
Catfish genera
Taxa described in 1907 | Euchiloglanis | [
"Biology"
] | 166 | [
"Biological hypotheses",
"Controversial fish taxa",
"Controversial taxa"
] |
11,939,462 | https://en.wikipedia.org/wiki/Ligase%20ribozyme | The RNA Ligase ribozyme was the first of several types of synthetic ribozymes produced by in vitro evolution and selection techniques. They are an important class of ribozymes because they catalyze the assembly of RNA fragments into phosphodiester RNA polymers, a reaction required of all extant nucleic acid polymerases and thought to be required for any self-replicating molecule. Ideas that the origin of life may have involved the first self-replicating molecules being ribozymes are called RNA World hypotheses. Ligase ribozymes may have been part of such a pre-biotic RNA world.
In order to copy RNA, fragments or monomers (individual building blocks) that have 5′-triphosphates must be ligated together. This is true for modern (protein-based) polymerases, and is also the most likely mechanism by which a ribozyme self-replicase in an RNA world might function. Yet no one has found a natural ribozyme that can perform this reaction.
In vitro evolution and selection
RNA in vitro evolution or SELEX enables the artificial evolution and selection of RNA molecules that possess a desired property, such as binding affinity for a particular ligand or an activity such as that of an enzyme or catalyst. The first such selections involved isolation of various aptamers that bind to small molecules. The first catalytic RNAs produced by in vitro evolution were RNA ligases, catalytic RNAs that join two RNA fragments to produce a single adduct. The most active ligase known to date is the Class I ligase, isolated from random sequence (work of David Bartel, while in the Szostak lab). Other examples of RNA ligases include the L1 ligase (Robertson and Ellington), the R3C ligase (Joyce), the DSL ligase (Inoue). All these ligases catalyze the formation of a 3′–5′ phosphodiester bond between two RNA fragments.
The L1 ligase
Michael Robertson and Andrew Ellington evolved a ligase ribozyme that performs the desired 5′–3′ RNA assembly reaction, and called this the L1 ligase. To better understand the details of how this ribozyme folds into a structure that permits it to catalyze this fundamental reaction, the X-ray crystal structure has been solved. The structure is composed of three helical stems, called stems A, B and C, that connect at a three-helix junction.
References
Further reading
Non-coding RNA
Ribozymes
RNA splicing | Ligase ribozyme | [
"Chemistry"
] | 531 | [
"Catalysis",
"Ribozymes"
] |
11,939,508 | https://en.wikipedia.org/wiki/ACTH%20stimulation%20test | The ACTH test (also called the cosyntropin, tetracosactide, or Synacthen test) is a medical test usually requested and interpreted by endocrinologists to assess the functioning of the adrenal glands' stress response by measuring the adrenal response to adrenocorticotropic hormone (ACTH; corticotropin) or another corticotropic agent such as tetracosactide (cosyntropin, tetracosactrin; Synacthen) or alsactide (Synchrodyn). ACTH is a hormone produced in the anterior pituitary gland that stimulates the adrenal glands to release cortisol, dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulfate (DHEA-S), and aldosterone.
During the test, a small amount of synthetic ACTH is injected, and the amount of cortisol (and sometimes aldosterone) that the adrenals produce in response is measured. This test may cause mild side effects in some individuals.
This test is used to diagnose or exclude primary and secondary adrenal insufficiency, Addison's disease, and related conditions. In addition to quantifying adrenal insufficiency, the test can distinguish whether the cause is adrenal (low cortisol and aldosterone production) or pituitary (low ACTH production). The insulin tolerance test is recognized as the gold standard assay of adrenal insufficiency, but due to the cumbersome requirement for a two-hour test and the risks of seizures or myocardial infarction, the ACTH stimulation test is commonly used as an easier, safer, though not as accurate, alternative. The test is extremely sensitive (97% at 95% specificity) to primary adrenal insufficiency, but less so to secondary adrenal insufficiency (57–61% at 95% specificity); while secondary adrenal insufficiency may thus be dismissed by some interpreters on the basis of the test, additional testing may be called for if the probability of secondary adrenal insufficiency is particularly high.
Adrenal insufficiency is a potentially life-threatening condition. Treatment should be initiated as soon as the diagnosis is confirmed, or sooner if the patient presents in apparent adrenal crisis.
Versions of the test
This test can be given as a low-dose short test, a conventional-dose short test, or as a prolonged-stimulation test.
In the low-dose short test, 1 μg of an ACTH drug is injected into the patient. In the conventional-dose short test, 250 μg of drug are injected. Both of these short tests last for about an hour and provide the same information. Studies have shown the cortisol response of the adrenals is the same for the low-dose and conventional-dose tests.
The prolonged-stimulation test, which is also called a long conventional-dose test, can last up to 48 hours. This form of the test can differentiate between primary, secondary, and tertiary adrenal insufficiency. This form of the test is rarely performed because earlier testing of cortisol and ACTH levels in association with the short test may provide all the necessary information.
Preparation
The test should not be given if the patient is on glucocorticoids or adrenal extract supplements, as these will affect test results. Stress and recently administered radioisotope scans can artificially increase levels and may invalidate test results. Spironolactone, contraceptives, licorice, estrogen, androgen (including DHEA) and progesterone therapy may also affect both aldosterone and cortisol stimulation test results. To stimulate aldosterone, consumption of salt should be reduced to a minimum, and foods high in sodium avoided for 24 hours prior to testing. Women should ideally undergo testing during the first week of their menstrual cycle, as aldosterone (and occasionally cortisol) may be falsely elevated during the luteal phase secondary to progesterone inhibition, leading to a compensatory rise in aldosterone levels.
Administration
Traditionally, cortisol and ACTH levels (separate lavender top tube) are drawn at baseline (time = 0). Next, synthetic ACTH or another corticotropic agent is injected IM or IV, depending on the agent. Approximately 20 mL of heparinized venous blood is collected at 30 and 60 minutes after the synthetic ACTH injection to measure cortisol levels.
ACTH samples are kept on ice and sent immediately to the laboratory, whereas cortisol does not need to be kept on ice.
Potential side effects
Commonly reported reactions are nausea, anxious sweating, dizziness, itchy skin, redness and/or swelling at the injection site, palpitations (a fast or fluttering heartbeat), and facial flushing (which may also include the arms and torso); these should disappear within a few hours. Rare but serious side effects include rash, fainting, headache, blurred vision, severe swelling, severe dizziness, trouble breathing, and irregular heartbeat.
Interpretation of results
Cosyntropin stimulation testing
In healthy individuals, the cortisol level should increase to above 18–20 μg/dl within 60 minutes on a 250 mcg cosyntropin stimulation test.
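Purely to illustrate the cutoff logic just described (and not as clinical guidance), a minimal sketch of the interpretation rule follows. The 18 μg/dl threshold is taken from the range quoted above and is an assumption; laboratories publish their own reference cutoffs.

```python
# Illustrative only, not clinical guidance: apply the peak-cortisol
# cutoff quoted above for a 250 mcg cosyntropin stimulation test.
# The default threshold is an assumption; labs set their own cutoffs.
def interpret_cosyntropin(peak_cortisol_ug_dl: float,
                          cutoff_ug_dl: float = 18.0) -> str:
    """Classify the 60-minute peak cortisol against the cutoff."""
    if peak_cortisol_ug_dl >= cutoff_ug_dl:
        return "normal adrenal response"
    # A blunted rise suggests adrenal insufficiency; separating primary
    # from secondary causes needs ACTH levels and clinical context.
    return "subnormal response - evaluate for adrenal insufficiency"

print(interpret_cosyntropin(12.0))
```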
Interpretation for primary adrenal insufficiency, Addison's disease
In Addison's disease, both the cortisol and aldosterone levels are low, and the cortisol will not rise during the cosyntropin stimulation test.
Interpretation for secondary adrenal insufficiency
In secondary adrenal insufficiency, due to exogenous steroid administration suppressing pituitary production of ACTH or due to a primary pituitary disorder causing insufficient ACTH production, the adrenal glands will atrophy over time, cortisol production will fall, and patients will fail stimulation testing. Early in the development of secondary adrenal insufficiency, the adrenals may not have atrophied and can still be stimulated, resulting in a normal cosyntropin stimulation test.
If secondary adrenal insufficiency is diagnosed, the insulin tolerance test (ITT) or the CRH (corticotropin-releasing hormone) stimulation test can be used to distinguish between a hypothalamic (tertiary) and pituitary (secondary) cause but is rarely used in clinical practice.
ACTH plasma test plus cortisol stimulation
Measuring a morning, fasting ACTH level helps assess for the etiology of adrenal insufficiency.
Interpretation for primary adrenal insufficiency and Addison's disease
ACTH will be high – usually well above the upper limit of the reference range.
Interpretation for secondary adrenal insufficiency
ACTH will be low – usually below 35, but most people with secondary adrenal insufficiency fall within the reference range. This is inappropriately normal for the low cortisol level.
In some cases, the actual cause of low ACTH is from low CRH in the hypothalamus. It is possible to have separate ACTH and CRH impairment such as can happen in a head injury.
Aldosterone stimulation
The ACTH stimulation test is occasionally used to test adrenal production of aldosterone at the same time as cortisol, to help determine whether primary (hyperreninemic) or secondary (hyporeninemic) hypoaldosteronism is present. Human ACTH has a slight stimulatory effect on aldosterone, but the amount of synthetic ACTH given in the stimulation is equivalent to more than a whole day's production of natural ACTH, so the aldosterone response can be easily measured in blood serum. As with cortisol, aldosterone should double from a respectable base value (around 20 ng/dl; the patient must fast salt for 24 hours and sit upright for the blood draw) in a healthy individual.
Interpretation for primary aldosterone deficiency
The aldosterone response in the ACTH stimulation test is blunted or absent in patients with primary adrenal insufficiency, including Addison's disease. The base value is usually in the mid-teens or less and rises to less than double the base value, indicating primary hypoaldosteronism (sodium will be low; potassium and the renin enzyme will be high); this is an indicator of primary adrenal insufficiency or Addison's disease.
Interpretation for secondary aldosterone deficiency
Aldosterone rises severalfold from a low base value. This pattern indicates secondary hypoaldosteronism (sodium will be low; potassium and the renin enzyme will be low). A doubling to quadrupling from a low base aldosterone value is usually seen in secondary adrenal insufficiency. Decoupling of aldosterone in the ACTH stimulation test is possible (i.e. 2 ng/dl stimulating to 20). A result of doubling or more of aldosterone, in tandem with a cortisol result that doubled or more, may help confirm a diagnosis of secondary adrenal insufficiency. In rare cases, an aldosterone response that does not double, in the presence of low potassium, low renin and low ACTH, indicates atrophy of aldosterone production from the prolonged lack of renin.
As with cortisol stimulation in ACTH deficiency, the test interpreter may not recognize secondary hypoaldosteronism and may mistake a result of aldosterone doubling or more from a low base value for a normal response.
Future perspectives
Recent data showed that Synacthen test results can be used to predict future recovery of HPA axis function in patients with reversible causes of adrenal insufficiency.
Other hormones and chemicals that will rise in the ACTH stimulation test
Progesterone – precursor to cortisol and aldosterone
17α-Hydroxyprogesterone – a progestogen steroid hormone related to progesterone
Luteinizing hormone – a pituitary hormone that stimulates sex hormone production
DHEA and DHEA-S – androgen hormones produced in the adrenal glands
Simple diagnostic chart
Veterinary medicine
The test is also used to diagnose hypoadrenocorticism in dogs and sometimes cats.
See also
Dexamethasone suppression test
Insulin tolerance test, another test used to identify sub-types of adrenal insufficiency
Metyrapone, a drug used in the diagnosis of adrenal insufficiency
Triple bolus test
Renin, enzyme that converts angiotensinogen 1 to angiotensin 2, a precursor to aldosterone
Renin–angiotensin–aldosterone system
HPA axis, explains the connections of the hypothalamus, pituitary and adrenal glands
Hypopituitarism
Pituitary adenoma
Adrenal adenoma
Corticorelin
References
External links
ACTH stimulation test – Procedures/Diagnostic tests Warren Grant Magnuson Clinical Center National Institutes of Health.
Blood tests
Endocrine procedures
Hormones of the hypothalamus-pituitary-adrenal axis
Dynamic endocrine function tests | ACTH stimulation test | [
"Chemistry"
] | 2,356 | [
"Blood tests",
"Chemical pathology"
] |
11,940,432 | https://en.wikipedia.org/wiki/Frame%20%28design%20magazine%29 | FRAME magazine (capitalized by its creators; the E in FRAME often appears mirror-reversed on the magazine's cover) is a magazine devoted to interior design, architecture, product design and exhibition design, based in Amsterdam, Netherlands. The magazine was first published in 1997 by Frame Publishers and appears in about six issues a year. Robert Thiemann is the founder and editor-in-chief of the magazine.
Frame magazine is one of the leading interior design publications. Since its launch in 1997, the magazine has remained faithful to its mission: putting interior architecture on the map as a creative profession that is as important as design and architecture. The magazine is sold in 77 countries and is printed in English and Korean.
The magazine is published by the parent company Frame Publishers, which also produces various design related books and occasional monographs on the work of prominent companies and people in the design world.
References
External links
Frame Magazine
1997 establishments in the Netherlands
Design magazines
Magazines established in 1997
Magazines published in Amsterdam | Frame (design magazine) | [
"Engineering"
] | 199 | [
"Design magazines",
"Design"
] |
11,940,462 | https://en.wikipedia.org/wiki/Abitare | Abitare (which translates to "live" or "dwell"), published monthly in Milan, Italy, is a design magazine. It was first published in 1961.
History and profile
Abitare was launched in Milan in 1961 by Piera Peroni. It was devoted to architecture, interior design, furniture, product design and graphic arts and was published both in Italian and English.
In 1976, the magazine was sold to Segesta Publishing group. Later it became part of the RCS Group and began to be published by RCS MediaGroup.
Shortly after the founding of the magazine, the postwar architect Eugenio Gentili Tedeschi joined Peroni. In addition to writing for the magazine, he later served as de facto editor-in-chief with Franca Santi. Stefano Boeri, Chiara Maranzana, Mario Piazza and Maria Giulia Zunino were among the editors-in-chief of the magazine.
The magazine temporarily ceased print publication in March 2014, though its online version continued to publish content. The magazine was relaunched in October 2014 with a new format and new graphics under the direction of Silvia Botti.
See also
List of magazines published in Italy
References
External links
Official website
1961 establishments in Italy
Architecture magazines
Design magazines
English-language magazines
Italian-language magazines
Magazines established in 1961
Magazines published in Milan
Monthly magazines published in Italy | Abitare | [
"Engineering"
] | 278 | [
"Design magazines",
"Design"
] |
11,940,501 | https://en.wikipedia.org/wiki/Macaulay%20brackets | Macaulay brackets are a notation used to describe the ramp function

$\{x\} = \begin{cases} 0, & x < 0 \\ x, & x \ge 0 \end{cases}$

A popular alternative transcription uses angle brackets, viz. $\langle x \rangle$.
Another commonly used notation is $x_+$ or $(x)_+$ for the positive part of $x$, which avoids conflicts with $\{x\}$ for set notation.
In engineering
Macaulay's notation is commonly used in the static analysis of bending moments of a beam. This is useful because shear forces applied on a member render the shear and moment diagram discontinuous. Macaulay's notation also provides an easy way of integrating these discontinuous curves to give bending moments, angular deflection, and so on. For engineering purposes, angle brackets are often used to denote the use of Macaulay's method.
$\langle x - a \rangle^n = \begin{cases} 0, & x < a \\ (x-a)^n, & x \ge a \end{cases}$

The above example simply states that the function takes the value $(x-a)^n$ for all $x$ values larger than $a$. With this, all the forces acting on a beam can be added, with their respective points of action being the value of $a$.
A particular case is the unit step function,

$\langle x - a \rangle^0 = \begin{cases} 0, & x < a \\ 1, & x > a \end{cases}$
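For readers who prefer code to notation, here is a minimal numerical sketch of the bracket; the load value and point of action below are invented for illustration.

```python
# Evaluate the Macaulay bracket <x - a>^n: zero to the left of a and
# (x - a)**n at and to the right of a.
def macaulay(x: float, a: float, n: int = 1) -> float:
    return (x - a) ** n if x >= a else 0.0

# Example: the bending-moment contribution -P*<x - 2>^1 of a point
# load P applied at x = 2 (P and the positions are made-up values).
P = 10.0
for x in (1.0, 2.0, 3.0, 4.0):
    print(x, -P * macaulay(x, 2.0))
```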
See also
Singularity function
References
Mathematical analysis | Macaulay brackets | [
"Mathematics"
] | 203 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
11,940,902 | https://en.wikipedia.org/wiki/Metro%20Ethernet%20Routing%20Switch%208600 | Metro Ethernet Routing Switch 8600 or MERS 8600 is a modular chassis router and/or switch originally manufactured by Nortel, whose assets were later acquired by Ciena. The MERS 8600 supports the Provider Backbone Bridges (PBB) and Provider Backbone Transport (PBT) technologies and carrier-class Operations, Administration & Maintenance (OAM) tools.
It is configurable as a 1.440-terabit switch cluster using the SMLT and RSMLT protocols, with cluster failover normally taking less than 100 milliseconds.
BT uses the MERS 8600's PBB/PBT technologies in its 21st Century Network (21CN), and the platform was also selected for the most extensive IP network ever deployed by an international airport in India.
The MERS 8600 has 3 chassis options:
8006, 6-slot chassis for backbones of low density or high space premium
8010, 10-slot chassis for high availability and high scalability
8010CO, 10-slot NEBS-compliant chassis.
The chassis can be configured with one or two CPU modules (8692SF), and is normally configured with two or three load balancing power supplies.
Modules
CPU Modules
8692omSF Switch Fabric and CPU module with expansion mezzanine card; supports 50 ms fail-over on NNI trunks with MultiLink Trunking
8691omSF Switch Fabric and CPU Module
10 Gigabit Ethernet
8683 XLR, 3 ports 10 Gigabit Ethernet XFP (LAN PHY only)
Packet Over SONET
8683POSM, POS Baseboard supports up to 6 OC-3 or 3 OC-12 ports
VPN Modules
8668 VPN Module
Gigabit Ethernet
8630 GBR, 30 ports 1000 BaseX small form factor pluggable interfaces (SX, LX, CWDM, TX)
8608GBM, 8-port Gigabit Ethernet, GBIC-based
8608GTM, 8 ports 1000BASE-T, fixed Gigabit Ethernet
100/10 Megabit Ethernet
8632TXM, 32 ports 10/100 plus 2 GBIC ports
8648TXM, 48 10/100TX ports
See also
Ciena
Metro Ethernet
Wavelength-division multiplexing
Provider Backbone Bridges
Provider Backbone Transport
References
External links
Metro Ethernet Routing Switch 8600 (dead link)
Resilient Terabit Cluster - Always on Networking (dead link)
Ciena
Networking hardware
Nortel products
Nortel protocols | Metro Ethernet Routing Switch 8600 | [
"Engineering"
] | 507 | [
"Computer networks engineering",
"Networking hardware"
] |
11,941,349 | https://en.wikipedia.org/wiki/Residential%20treatment%20center | A residential treatment center (RTC), sometimes called a rehab, is a live-in health care facility providing therapy for substance use disorders, mental illness, or other behavioral problems. Residential treatment may be considered the "last-ditch" approach to treating abnormal psychology or psychopathology.
A residential treatment program encompasses any residential program which treats a behavioural issue, including milder psychopathology such as eating disorders (e.g. weight loss camp) or indiscipline (e.g. fitness boot camps as lifestyle interventions). Sometimes residential facilities provide enhanced access to treatment resources, without those seeking treatment considered residents of a treatment program, such as the sanatoriums of Eastern Europe. Controversial uses of residential programs for behavioural and cultural modification include conversion therapy and mandatory American and Canadian residential schools for indigenous populations. A common feature of residential programs is controlled social access to people outside the program, and limited access for outside parties to witness daily conditions within the program. Within psychiatry, it is understood that it can be almost impossible to change entrenched behaviour without impacting habitual relationships, at least in the short term, but the relatively closed nature of many residential programs also makes it possible to conceal abusive practice.
Upon discharge, the patient may be enrolled in an intensive outpatient program for follow-up outside the residential setting.
Historical background in the United States
In the 1600s, England established the Poor Law, which allowed poor children to be trained as apprentices by removing them from their families and forcing them to live in group homes. In the 1800s, the United States copied this system, but mentally ill children were often placed in jail with adults because society did not know what to do with them. There were no RTCs in place to provide the 24-hour care these children needed, so they were jailed when they could not live at home. In the 1900s, Anna Freud and her peers, who were part of the Vienna Psychoanalytic Society, worked on how to care for children, and they worked to create residential treatment centers for children and adolescents with emotional and behavioral disorders.
The year 1944 marked the beginning of Bruno Bettelheim's work at the Orthogenic School in Chicago, and Fritz Redl and David Wineman's work at the Pioneer House in Detroit. Bettelheim helped increase awareness of the effect of staff attitudes on children in treatment. He reinforced the idea that a psychiatric hospital was a community, where staff and patients influenced each other and patients were shaped by each other's behaviors. Bettelheim also believed that families should not have frequent contact with their child while he or she was in treatment. This differs from the community-based therapy and family therapy of recent years, in which the goal of treatment is for a child to remain in the home, and emphasis is placed on the family's role in improving long-term outcomes after treatment in an RTC. The Pioneer House created a special-education program to help improve impulse control and sociability in children. After WWII, Bettelheim and the joint efforts of Redl and Wineman were instrumental in establishing residential facilities as a therapeutic-treatment alternative for children and adolescents who could not live at home.
In the 1960s, the second generation of psychoanalytical RTC was created. These programs continued the work of the Vienna Psychoanalytic Society in order to include families and communities in the child's treatment. One example of this is the Walker Home and School which was established by Dr. Albert Treischman in 1961 for adolescent boys with severe emotional or behavioral disorders. He involved families in order to help them develop relationships with their children within homes, public schools and communities. Family and community involvement made this program different from previous programs.
Beginning in the 1980s, cognitive behavioral therapy was more commonly used in child psychiatry, as a source of intervention for troubled youth, and was applied in RTCs to produce better long-term results. Attachment theory also developed in response to the rise of children admitted to RTCs who were abused or neglected. These children needed specialized care by caretakers who were knowledgeable about trauma.
In the 1990s, the number of children entering RTCs increased dramatically, leading to a policy shift from institution- based services to a family-centered community system of care. This also reflected the lack of appropriate treatment resources. However, residential treatment centers have continued to grow and today house over 50,000 children. The number of residential treatment centers treating individuals of all ages in the United States is currently estimated at 28,900 facilities.
Children and teens
RTCs for adolescents, sometimes referred to as teen rehab centers if they also deal with addiction, provide treatment for issues and disorders such as oppositional defiant disorder, conduct disorder, depression, bipolar disorder, attention deficit hyperactivity disorder (ADHD), educational issues, some personality disorders, and phase-of-life issues, as well as substance use disorders. Most use a behavior modification paradigm. Others are relationally oriented. Some utilize a community or positive peer-culture model. Generalist programs are usually large (80-plus clients and as many as 250) and level-focused in their treatment approach; that is, in order to manage clients' behavior, they frequently put systems of rewards and punishments in place. Specialist programs are usually smaller (fewer than 100 clients and as few as 10 or 12). Specialist programs typically are not as focused on behavior modification as generalist programs are.
Different RTCs work with different types of problems, and the structure and methods of RTCs vary. Some RTCs are lock-down facilities; that is, the residents are locked inside the premises. In a locked residential treatment facility, clients' movements are restricted. By comparison, an unlocked residential treatment facility allows them to move about the facility with relative freedom, but they are only allowed to leave the facility under specific conditions. Residential treatment centers should not be confused with residential education programs, which offer an alternative environment for at-risk children to live and learn together outside their homes.
Residential treatment centers for children and adolescents treat multiple conditions from drug and alcohol addictions to emotional and physical disorders as well as mental illnesses. Various studies of youth in residential treatment centers have found that many have a history of family-related issues, often including physical or sexual abuse. Some facilities address specialized disorders, such as reactive attachment disorder (RAD).
Residential treatment centers generally are clinically focused and primarily provide behavior management and treatment for adolescents with serious issues. In contrast, therapeutic boarding schools provide therapy and academics in a residential boarding school setting, employing a staff of social workers, psychologists, and psychiatrists to work with the students on a daily basis. This form of treatment has a goal of academic achievement as well as physical and mental stability in children, adolescents, and young adults. Recent trends have ensured that residential treatment facilities have more input from behavioral psychologists to improve outcomes and lessen unethical practices.
Behavioral interventions
Behavioral interventions have been very helpful in reducing problem behaviors in residential treatment centers. The type of clients receiving services in a facility (children with emotional or behavioral disorders versus intellectual disability versus psychiatric disorders) is a factor in the effectiveness of behavior modification. Behavioral intervention has been found to be successful even when medication interventions fail. However, there is evidence that certain populations may benefit more from interventions that fall outside of the behavior-modification paradigm. For instance, positive outcomes have been reported for neurosequential interventions targeting issues of early childhood trauma and attachment (Perry, 2006). Although the majority of children who receive services in RTCs present emotional and behavioral disorders (EBDs), such as attention deficit hyperactivity disorder (ADHD), Oppositional Defiant Disorder (ODD), and Conduct Disorder (CD), behavior-modification techniques can be an effective way of decreasing the maladaptive behavior of these clients. Interventions such as response cost, token economies, social skills training groups, and the use of positive social reinforcement can be used to increase prosocial behavior in children (Ormrod, 2009).
Behavioral interventions are successful in treating children with behavioral disorders in part because they incorporate two principles that make up the core of how children learn: conceptual understanding and building on their pre-existing knowledge. Research by Resnick (1989) shows that even infants are able to develop basic quantitative frameworks. New information is incorporated into the framework and serves as the basis for the problem-solving skills a child develops as she or he is exposed to different types of stimuli (e.g., new situations, people, or environments). The experiences and environment that a child is exposed to can have either a positive or negative outcome, which, in turn, impacts how he or she remembers, reasons, and adapts when encountering aversive stimuli. Furthermore, when children have acquired extensive knowledge, it affects what they notice and how they organize, represent, and interpret information in their current environment (Bransford, Brown, & Cocking, 2000). Many of the children housed in RTCs have been exposed to negative environmental factors that have contributed to the behavior problems that they are exhibiting.
Many interventions build on children's prior knowledge of how reward works. Reinforcing children for pro-social behaviors (e.g., using token economies, in which children earn tokens for appropriate behaviors; response cost, in which previously earned tokens are lost following inappropriate behavior; and social-skills training groups, in which participants observe and participate in modeling appropriate social behaviors) helps them develop a deeper understanding of the positive results of pro-social behavior.
Wolfe, Dattilo, & Gast (2003) found that using a token economy in concert with cooperative games increased pro-social behaviors (e.g. statements of encouragement, praise, or appreciation, shaking hands, and giving high fives) while decreasing anti-social ones (swearing, threatening peers with physical harm, name-calling, and physical aggression). The use of a response-cost system has been efficacious in reducing problem behaviors. A single-subject withdrawal design employing non-contingent reinforcement with response cost was used to reduce maladaptive verbal and physical behaviors exhibited by a post-institutional student with ADHD (Nolan & Filter, 2012). Wilhite & Bullock (2012) implemented a social-skills training group to increase the social competence of students with EBDs. Results showed significant differences between pre- and post-intervention disciplinary referrals, as well as several other elements of behavioral-ratings scales. Evidence also exists for the usefulness of social reinforcement as a part of behavioral interventions for children with ADHD. A study by Kohls, Herpertz-Dahlmann, & Kerstin (2009) found that both social and monetary rewards increased inhibition control in both the control and experimental groups. However, results showed that children with ADHD benefitted more from social reinforcement than typical children, indicating that social reinforcement can significantly improve cognitive control in ADHD children. The techniques listed are only a few of the many types of behavioral interventions that can be used to treat children with EBDs. Additional information regarding types of behavioral interventions can be found in the 2003 book Behavioral, Social, and Emotional Assessment of Children and Adolescents by Kenneth Merrell.
Types of Family Therapy Used in Residential Treatment Centers
Narrative Therapy: Narrative therapy has grown in popularity in the field of family therapy. It developed out of the postmodern viewpoint, which is expressed in its principles: (a) not one universal reality exists, but socially constructed realities; (b) reality is created by language; (c) narrative maintains reality; (d) not all narratives are equivalent (Freedman and Combs, 1996).
Narrative family therapy views human issues as emerging from, and being sustained by, dominant stories that control the life of an individual. Problems arise when individuals' stories do not match their experience of living. According to the narrative viewpoint, therapy offers a new and distinct perspective on a problem-saturated narrative: therapy is a process of rewriting personal narratives. The process of rewriting the client's narrative involves (a) expressing the problem(s) they are experiencing; (b) breaking down narratives that trigger problems through questioning; (c) recognizing special outcomes or occasions when the person has not been constrained by their situation; (d) connecting specific results to the future and providing an alternative, desired narrative; (e) inviting supporters in the community to witness the new narrative; and (f) documenting the new narrative. Since postmodern viewpoints prioritize concepts over techniques, formal methods in narrative therapy are limited. However, some researchers have described techniques that are useful in helping an individual rewrite a specific experience, such as retelling stories and writing letters.
Children admitted to a residential treatment center have behavior problems so extreme that residential treatment is their last hope. Parents tend to think the child is the problem that needs to be fixed, after which everything will be okay; the child, on the other hand, generally sees themselves as a victim. Narrative therapy enables these perspectives to be broken down and the child's troubling behaviors to be externalized, which can encourage both the child and the family members to achieve a new perspective in which no one feels persecuted or blamed.
Multi Systemic Therapy:
The model has shown success in sustaining long-standing improvements in children's and adolescents' antisocial behaviors. Families in MST have demonstrated improved family stability, post-treatment adaptability, growing support, and reduced conflict and hostility.
The method's ultimate objectives include (a) eliminating behavior problems, (b) enhancing family functioning, (c) strengthening the adolescents' ability to perform better at school and in other community settings, and (d) decreasing out-of-home placement.
Controversy
Disability rights organizations, such as the Bazelon Center for Mental Health Law, oppose placement in RTC programs, calling into question the appropriateness and efficacy of such placements, noting the failure of such programs to address problems in the child's home and community environment, and calling attention to the limited mental-health services offered and substandard educational programs. Concerns related specifically to one type of residential treatment center, the therapeutic boarding school, include:
inappropriate discipline techniques,
medical neglect,
restricted communication such as lack of access to child protection and advocacy hotlines, and
lack of monitoring and regulation.
Bazelon promotes community-based services on the basis that they are more effective and less costly than residential placement.
A 2007 Report to Congress by the Government Accountability Office (GAO) found cases involving serious abuse and neglect at some of these programs.
From late 2007 through 2008, a broad coalition of grass-roots efforts, as well as prominent medical and psychological organizations such as the Alliance for the Safe, Therapeutic and Appropriate use of Residential Treatment (ASTART) and the Community Alliance for the Ethical Treatment of Youth (CAFETY), provided testimony and support that led to the creation of the Stop Child Abuse in Residential Programs for Teens Act of 2008 by the United States Congress Committee on Education and Labor.
Jon Martin-Crawford and Kathryn Whitehead of CAFETY testified at a hearing of the United States Congressional Committee on Education and Labor on April 24, 2008, and described abusive practices they had experienced at the Family Foundation School and Mission Mountain School, both therapeutic boarding schools. In recent years, many states have enacted regulation and oversight of most programs.
Due to the absence of regulation of these programs by the federal government and because at that time many were not subject to state licensing or monitoring, the Federal Trade Commission has issued a guide for parents considering such placement.
Residential treatment programs are often caught in the cross-fire during custody battles, as parents who are denied custody try to discredit the opposing spouse and the treatment program.
Research on effectiveness
Studies of different treatment approaches have found that residential treatment is effective for individuals with a long history of addictive behavior or criminal activity. RTCs offer a variety of structured programs designed to address the specific needs of their residents. Despite the controversy surrounding the efficacy of RTCs, recent research has revealed that community-based residential treatment programs have positive long-term effects for children and youth with behavioral problems.
Participants in a pilot program employing family-driven care and positive peer modeling displayed no incidence of elopement, self-injurious behaviors, or physical aggression, and just one case of property destruction when compared to a control group (Holstead, 2010). The success of treatment for children in RTCs depends heavily on their background, i.e., their state, situation, circumstances and behavioral status before treatment begins. Children who displayed lower rates of internalizing and externalizing behavior problems at intake and had a lower level of exposure to negative environmental factors (e.g., domestic violence, parental substance use, high crime rates) showed better results than children whose symptoms were more severe (den Dunnen, 2012).
Additional research demonstrates that planned treatment, or knowing the expected duration of treatment, is strongly correlated with positive treatment outcomes. Long-term results for children using planned treatment showed that they are 21% less likely to engage in criminal behavior and 40% less likely to need hospitalization for mental-health problems (Lindqvist, 2010). Further evidence exists supporting the long-term effectiveness of RTCs for children exhibiting severe mental health issues. Preyde (2011) found that clients showed a statistically significant reduction in symptom severity 12–18 months after leaving an RTC, results which were maintained 36–40 months after their discharge from the facility.
However, although there is a great deal of research supporting the validity of RTCs as a way of treating children and youth with behavioral disorders, little is known about the outcomes-monitoring practices of such facilities. Those that track clients after they leave the RTC only do so for an average of six months. In order to continue to provide effective long-term treatment to at-risk populations, further efforts are needed to encourage the monitoring of outcomes after discharge from residential treatment (J.D. Brown, 2011).
One problem that hinders the effectiveness of RTCs is elopement, or "running". A study by Kashubeck found that runaways from RTCs were "more likely to have a history of elopement, a suspected history of sexual abuse, an affective-disorder diagnosis, and parents whose rights had been terminated." By taking these patient characteristics into account in the design of treatment, RTCs may be more successful in reducing elopement and otherwise improving clients' probability of success.
See also
Anti-psychiatry
Behavior modification facility
Child abandonment
Child abuse
Child and family services
Child and Youth Care
Community-based care
Congregate care
Cottage Homes
Family support
Foster care in the United States
Foster care
Group home
Intensive outpatient program
Kinship care
Orphanage
Partial hospitalization
Residential care
Residential child care community
Teaching-family model
Therapeutic boarding school
Total institution
Troubled teen industry
Wraparound (childcare)
References
Further reading
External links
Residential Treatment Programs — Concerns Regarding Abuse and Death in Certain Programs for Troubled Youth - United States Government Accountability Office
Residential Facilities — State and Federal Oversight Gaps May Increase Risk to Youth Well-Being - United States Government Accountability Office
Residential Programs — Selected Cases of Death, Abuse, and Deceptive Marketing - United States Government Accountability Office
Behavior modification
Psychotherapy
Substance-related disorders
Residential treatment centers | Residential treatment center | [
"Biology"
] | 3,898 | [
"Behavior modification",
"Behavior",
"Human behavior",
"Behaviorism"
] |
11,943,240 | https://en.wikipedia.org/wiki/Genetically%20modified%20plant | Genetically modified plants have been engineered for scientific research, to create new colours in plants, deliver vaccines, and to create enhanced crops. Plant genomes can be engineered by physical methods or by use of Agrobacterium for the delivery of sequences hosted in T-DNA binary vectors. Many plant cells are pluripotent, meaning that a single cell from a mature plant can be harvested and then, under the right conditions, form a new plant. Genetic engineers most often take advantage of this ability by selecting cells that can successfully be transformed into an adult plant, which can then be grown into multiple new plants containing the transgene in every cell, through a process known as tissue culture.
Research
Many of the advances in the field of genetic engineering have come from experimentation with tobacco. Major advances in tissue culture and plant cellular mechanisms for a wide range of plants have originated from systems developed in tobacco. It was the first plant to be genetically engineered and is considered a model organism not only for genetic engineering but for a range of other fields. As such, the transgenic tools and procedures are well established, making it one of the easiest plants to transform. Another major model organism relevant to genetic engineering is Arabidopsis thaliana. Its small genome and short life cycle make it easy to manipulate, and it contains many homologs to important crop species. It was the first plant to have its genome sequenced, has abundant bioinformatic resources, and can be transformed by simply dipping a flower in a solution of transformed Agrobacterium.
In research, plants are engineered to help discover the functions of certain genes. The simplest way to do this is to remove the gene and see what phenotype develops compared to the wild type form. Any differences are possibly the result of the missing gene. Unlike mutagenesis, genetic engineering allows targeted removal without disrupting other genes in the organism. Some genes are only expressed in certain tissues, so reporter genes, like GUS, can be attached to the gene of interest, allowing visualisation of the location. Another way to test a gene is to alter it slightly and then return it to the plant to see if it still has the same effect on phenotype. Other strategies include attaching the gene to a strong promoter to see what happens when it is overexpressed, or forcing the gene to be expressed in a different location or at different developmental stages.
Ornamental
Some genetically modified plants are purely ornamental. They are modified for flower colour, fragrance, flower shape and plant architecture. The first genetically modified ornamentals to be commercialised had altered colour. Carnations were released in 1997, and the most popular genetically modified ornamental, a blue rose (actually lavender or mauve), was created in 2004. The roses are sold in Japan, the United States, and Canada. Other genetically modified ornamentals include Chrysanthemum and Petunia. As well as increasing aesthetic value, there are plans to develop ornamentals that use less water or are resistant to the cold, which would allow them to be grown outside their natural environments.
Conservation
It has been proposed to genetically modify some plant species threatened by extinction to be resistant to invasive pests and diseases, such as the emerald ash borer in North America and the fungal disease Ceratocystis platani in European plane trees. The papaya ringspot virus (PRSV) devastated papaya trees in Hawaii in the twentieth century until transgenic papaya plants were given pathogen-derived resistance. However, genetic modification for conservation in plants remains mainly speculative. A unique concern is that a transgenic species may no longer bear enough resemblance to the original species to truly claim that the original species is being conserved. Instead, the transgenic species may be genetically different enough to be considered a new species, thus diminishing the conservation worth of genetic modification.
Crops
Genetically modified crops are genetically modified plants that are used in agriculture. The first generation of crops is used for animal or human food and provides resistance to certain pests, diseases, environmental conditions, spoilage or chemical treatments (e.g. resistance to a herbicide). The second generation of crops aimed to improve the quality, often by altering the nutrient profile. Third generation genetically modified crops can be used for non-food purposes, including the production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation.
There are three main aims of agricultural advancement: increased production, improved conditions for agricultural workers, and sustainability. GM crops contribute by improving harvests through reducing insect pressure, increasing nutrient value and tolerating different abiotic stresses. Despite this potential, as of 2018, the commercialised crops are limited mostly to cash crops like cotton, soybean, maize and canola, and the vast majority of the introduced traits provide either herbicide tolerance or insect resistance. Soybeans accounted for half of all genetically modified crops planted in 2014. Adoption by farmers has been rapid: between 1996 and 2013, the total surface area of land cultivated with GM crops increased by a factor of 100, to 1,750,000 km2 (432 million acres). Geographically, though, the spread has been very uneven, with strong growth in the Americas and parts of Asia and little in Europe and Africa. Its socioeconomic spread has been more even, with approximately 54% of worldwide GM crops grown in developing countries in 2013.
Food
The majority of GM crops have been modified to be resistant to selected herbicides, usually one based on glyphosate or glufosinate. Genetically modified crops engineered to resist herbicides are now more available than conventionally bred resistant varieties; in the USA 93% of soybeans and most of the GM maize grown are glyphosate tolerant. Most currently available genes used to engineer insect resistance come from the bacterium Bacillus thuringiensis. Most are in the form of delta endotoxin genes known as cry proteins, while a few use the genes that encode vegetative insecticidal proteins. The only gene commercially used to provide insect protection that does not originate from B. thuringiensis is the cowpea trypsin inhibitor (CpTI). CpTI was first approved for use in cotton in 1999 and is currently undergoing trials in rice. Less than one percent of GM crops contained other traits, which include providing virus resistance, delaying senescence, modifying flower colour and altering the plant's composition.
Golden rice is the best known GM crop aimed at increasing nutrient value. It has been engineered with three genes that biosynthesise beta-carotene, a precursor of vitamin A, in the edible parts of rice. It is intended to produce a fortified food to be grown and consumed in areas with a shortage of dietary vitamin A, a deficiency which each year is estimated to kill 670,000 children under the age of 5 and cause an additional 500,000 cases of irreversible childhood blindness. The original golden rice produced 1.6 μg/g of the carotenoids, with further development increasing this 23-fold. In 2018 it gained its first approvals for use as food.
Biopharmaceuticals
Plants and plant cells have been genetically engineered for the production of biopharmaceuticals in bioreactors, a process known as pharming. Work has been done with the duckweed Lemna minor, the alga Chlamydomonas reinhardtii and the moss Physcomitrella patens. Biopharmaceuticals produced include cytokines, hormones, antibodies, enzymes and vaccines, most of which are accumulated in the plant seeds. Many drugs also contain natural plant ingredients, and the pathways that lead to their production have been genetically altered or transferred to other plant species to produce greater volume and better products. Other options for bioreactors are biopolymers and biofuels. Unlike bacteria, plants can modify proteins post-translationally, allowing them to make more complex molecules. They also pose less risk of being contaminated. Therapeutics have been cultured in transgenic carrot and tobacco cells, including a drug treatment for Gaucher's disease.
Vaccines
Vaccine production and storage has great potential in transgenic plants. Vaccines are expensive to produce, transport and administer, so having a system that could produce them locally would allow greater access to poorer and developing areas. As well as purifying vaccines expressed in plants, it is also possible to produce edible vaccines in plants. Edible vaccines stimulate the immune system when ingested to protect against certain diseases. Being stored in plants reduces the long-term cost, as they can be disseminated without the need for cold storage, do not need to be purified, and have long-term stability. Being housed within plant cells also provides some protection from the gut acids upon digestion. However, the cost of developing, regulating and containing transgenic plants is high, and as a result most current plant-based vaccine development is applied to veterinary medicine, where the controls are not as strict.
References
Genetic engineering
Genetically modified organisms in agriculture | Genetically modified plant | [
"Chemistry",
"Engineering",
"Biology"
] | 1,814 | [
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
11,943,557 | https://en.wikipedia.org/wiki/Tbjhome | tbjhome is China's only English-language magazine covering lifestyle, design and architecture.
References
2006 establishments in China
Architecture magazines
Art magazines published in China
Design magazines
Lifestyle magazines
Magazines established in 2006
Magazines published in Beijing | Tbjhome | [
"Engineering"
] | 46 | [
"Design magazines",
"Design"
] |
11,943,863 | https://en.wikipedia.org/wiki/Space%20industry | Space industry refers to economic activities related to manufacturing components that go into outer space (Earth's orbit or beyond), delivering them to those regions, and related services. Owing to the prominence of satellite-related activities, some sources use the term satellite industry interchangeably with the term space industry. The term space business has also been used.
A narrow definition of the space industry typically encompasses only hardware providers (primarily those that manufacture launch vehicles and satellites). This definition excludes certain activities, such as space tourism.
Therefore, more broadly, the space industry can be described as the activities of the companies and organizations involved in the space economy, and providing goods and services related to space. The space economy has been defined as "all public and private actors involved in developing and providing space-enabled products and services. It comprises a long value-added chain, starting with research and development actors and manufacturers of space hardware and ending with the providers of space-enabled products and services to final users."
Segments and revenues
The three major sectors of the space industry are: satellite manufacturing, support ground equipment manufacturing, and the launch industry. The satellite manufacturing sector is composed of satellite developers and integrators, and subsystem manufacturers. The ground equipment sector is composed of companies that manufacture systems such as mobile terminals, gateways, control stations, VSATs, direct broadcast satellite dishes, and other specialized equipment. The launch sector is composed of launch services, vehicle manufacturing and subsystem manufacturing.
Every euro spent in the space industry returns around six euros to the economy, according to the European Space Agency. This makes it a critical sector for economic development, competitiveness, and high-tech jobs. Worldwide satellite industry revenues remained at the US$35–36 billion level over the period 2002 to 2005; the majority of that revenue was generated by the ground equipment sector, and the least by the launch sector. Space-related services are estimated at US$100 billion. The industry and related sectors employ about 120,000 people in the OECD countries, while the space industry of Russia employs around 250,000 people. The 937 satellites in Earth's orbit in 2005 were estimated to be worth around US$170–230 billion. In 2005, OECD countries budgeted around US$45 billion for space-related activities; income from space-derived products and services has been estimated at US$110–120 billion in 2006 (worldwide).
History and trends
The space industry began to develop after World War II, as rockets and then satellites entered into military arsenals, and later found civilian applications.
It retains significant ties to the government. In particular, the launch industry features a significant government involvement, with some launch platforms (such as the Space Shuttle) being operated by governments.
In recent years, however, private spaceflight is becoming realistic, and even major government agencies, such as NASA, have begun relying on privately operated launch services. Some future developments of the space industry that are increasingly being considered include new services such as space tourism.
From 2004 to 2013, total orbital launches by country/region were: Russia: 270, US: 181, China: 108, Europe: 59, Japan: 24, India: 19 and Brazil: 1.
Relevant trends in the 2008–2009 for the space industry have been described as:
the appearance of new satellite operators;
a growing demand for Fixed Service Satellites and developing market for Mobile Satellite Services;
a steady amount of commercial satellite orders;
steady performance of the launch sector;
resilience to the financial crisis;
maturing markets for services like Ka-band and remote sensing.
The 2019 Space Report estimates that in 2018 total global space activity was $414.75 billion. Of that, the report estimates that 21%, or $87.09 billion, was from U.S. government space budgets.
A report discussing global space spending in 2021 estimated global spending at approximately $92 billion.
The Space Report for Q4 2023 identified 2023 as the busiest year on record for space activities, with 223 launch attempts and 212 successful launches. More than 2,800 satellites were deployed into orbit, a 23% increase from 2022, and commercial launch activity saw a 50% increase compared to 2022.
See also
Commercialization of space
Space-based economy
Space trade
Space manufacturing
Lunar resources
Asteroid mining
Ore resources on Mars
Space industry per country
Space industry of Russia
Space industry of India
Aerospace industry in the United Kingdom
Commercial Spaceflight Federation (US)
Space law
Outer Space Treaty
References
External links
CubeSat Database & Nanosatellites
NewSpace Index
Industries (economics) | Space industry | [
"Astronomy"
] | 938 | [
"Space industry",
"Outer space"
] |
11,944,078 | https://en.wikipedia.org/wiki/Efficient%20energy%20use | Efficient energy use, or energy efficiency, is the process of reducing the amount of energy required to provide products and services. There are many technologies and methods available that are more energy efficient than conventional systems. For example, insulating a building allows it to use less heating and cooling energy while still maintaining a comfortable temperature. Another method is to remove energy subsidies that promote high energy consumption and inefficient energy use. Improved energy efficiency in buildings, industrial processes and transportation could reduce the world's energy needs in 2050 by one third.
There are two main motivations to improve energy efficiency. The first is to achieve cost savings during the operation of the appliance or process; however, installing an energy-efficient technology comes with an upfront capital cost. The different types of costs can be analyzed and compared with a life-cycle assessment. The second motivation is to reduce greenhouse gas emissions and hence work towards climate action. A focus on energy efficiency can also have a national security benefit, because it can reduce the amount of energy that has to be imported from other countries.
Energy efficiency and renewable energy go hand in hand for sustainable energy policies. They are high priority actions in the energy hierarchy.
Aims
Energy productivity, which measures the output and quality of goods and services per unit of energy input, can come from either reducing the amount of energy required to produce something, or from increasing the quantity or quality of goods and services from the same amount of energy.
From the point of view of an energy consumer, the main motivation of energy efficiency is often simply saving money by lowering the cost of purchasing energy. Additionally, from an energy policy point of view, there has been a long trend toward wider recognition of energy efficiency as the "first fuel", meaning the ability to replace or avoid the consumption of actual fuels. In fact, the International Energy Agency has calculated that the application of energy efficiency measures in the years 1974–2010 avoided more energy consumption in its member states than the consumption of any particular fuel, including fossil fuels (i.e. oil, coal and natural gas).
Moreover, it has long been recognized that energy efficiency brings other benefits additional to the reduction of energy consumption. Some estimates of the value of these other benefits, often called multiple benefits, co-benefits, ancillary benefits or non-energy benefits, have put their summed value even higher than that of the direct energy benefits.
These multiple benefits of energy efficiency include things such as reduced greenhouse gas emissions, reduced air pollution and improved health, and improved energy security. Methods for calculating the monetary value of these multiple benefits have been developed, including e.g. the choice experiment method for improvements that have a subjective component (such as aesthetics or comfort) and Tuominen-Seppänen method for price risk reduction. When included in the analysis, the economic benefit of energy efficiency investments can be shown to be significantly higher than simply the value of the saved energy.
Energy efficiency has proved to be a cost-effective strategy for building economies without necessarily increasing energy consumption. For example, the state of California began implementing energy-efficiency measures in the mid-1970s, including building code and appliance standards with strict efficiency requirements. During the following years, California's energy consumption has remained approximately flat on a per capita basis while national US consumption doubled. As part of its strategy, California implemented a "loading order" for new energy resources that puts energy efficiency first, renewable electricity supplies second, and new fossil-fired power plants last. States such as Connecticut and New York have created quasi-public Green Banks to help residential and commercial building-owners finance energy efficiency upgrades that reduce emissions and cut consumers' energy costs.
Related concepts
Energy conservation
Energy conservation is broader than energy efficiency in that it includes active efforts to decrease energy consumption, for example through behaviour change, in addition to using energy more efficiently. Examples of conservation without efficiency improvements are heating a room less in winter, using the car less, air-drying clothes instead of using the dryer, or enabling energy-saving modes on a computer. As with other definitions, the boundary between efficient energy use and energy conservation can be fuzzy, but both are important in environmental and economic terms.
Sustainable energy
Energy efficiency—using less energy to deliver the same goods or services, or delivering comparable services with less goods—is a cornerstone of many sustainable energy strategies. The International Energy Agency (IEA) has estimated that increasing energy efficiency could achieve 40% of greenhouse gas emission reductions needed to fulfil the Paris Agreement's goals. Energy can be conserved by increasing the technical efficiency of appliances, vehicles, industrial processes, and buildings.
Unintended consequences
If the demand for energy services remains constant, improving energy efficiency will reduce energy consumption and carbon emissions. However, many efficiency improvements do not reduce energy consumption by the amount predicted by simple engineering models. This is because they make energy services cheaper, and so consumption of those services increases. For example, since fuel efficient vehicles make travel cheaper, consumers may choose to drive farther, thereby offsetting some of the potential energy savings. Similarly, an extensive historical analysis of technological efficiency improvements has conclusively shown that energy efficiency improvements were almost always outpaced by economic growth, resulting in a net increase in resource use and associated pollution. These are examples of the direct rebound effect.
Estimates of the size of the rebound effect range from roughly 5% to 40%. The rebound effect is likely to be less than 30% at the household level and may be closer to 10% for transport. A rebound effect of 30% implies that improvements in energy efficiency should achieve 70% of the reduction in energy consumption projected using engineering models.
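To make the arithmetic concrete, here is a minimal sketch (not from the source; the function name and figures are illustrative) of how an engineering estimate is discounted by a rebound effect:

```python
def net_saving(engineering_saving: float, rebound: float) -> float:
    """Fraction of consumption actually saved, given the engineering
    estimate of the saving and the rebound effect (both as fractions)."""
    return engineering_saving * (1.0 - rebound)

# A measure projected to cut consumption by 20%, subject to a 30% rebound,
# delivers only 70% of the projected reduction: a 14% cut.
print(net_saving(0.20, 0.30))  # 0.14
```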
Options
Appliances
Modern appliances, such as freezers, ovens, stoves, dishwashers, and clothes washers and dryers, use significantly less energy than older appliances. Current energy-efficient refrigerators, for example, use 40 percent less energy than conventional models did in 2001. Following this, if all households in Europe changed their more than ten-year-old appliances for new ones, 20 billion kWh of electricity would be saved annually, reducing CO2 emissions by almost 18 billion kg. In the US, the corresponding savings would be 17 billion kWh of electricity, with a proportional reduction in CO2 emissions. According to a 2009 study from McKinsey & Company, the replacement of old appliances is one of the most efficient global measures to reduce emissions of greenhouse gases. Modern power management systems also reduce energy usage by idle appliances by turning them off or putting them into a low-energy mode after a certain time. Many countries identify energy-efficient appliances using energy input labeling.
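As a consistency check on the European figures above, one can back-calculate the emission factor they imply. This sketch uses only numbers from the text; the resulting ~0.9 kg CO2/kWh is an implied average, not an official grid figure:

```python
saved_kwh = 20e9      # 20 billion kWh saved annually (figure from the text)
saved_co2_kg = 18e9   # almost 18 billion kg of CO2 avoided (from the text)

# Implied average emission factor of the displaced electricity.
print(saved_co2_kg / saved_kwh)  # ~0.9 kg CO2 per kWh
```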
The impact of energy efficiency on peak demand depends on when the appliance is used. For example, an air conditioner uses more energy during the afternoon when it is hot. Therefore, an energy-efficient air conditioner will have a larger impact on peak demand than off-peak demand. An energy-efficient dishwasher, on the other hand, uses more energy during the late evening when people do their dishes. This appliance may have little to no impact on peak demand.
Over the period 2001–2021, tech companies replaced traditional silicon switches in electric circuits with faster gallium nitride transistors to make new gadgets as energy efficient as feasible. Gallium nitride transistors are, however, more costly. This is a significant change for lowering the carbon footprint.
Building design
A building's location and surroundings play a key role in regulating its temperature and illumination. For example, trees, landscaping, and hills can provide shade and block wind. In cooler climates, designing northern hemisphere buildings with south facing windows and southern hemisphere buildings with north facing windows increases the amount of sun (ultimately heat energy) entering the building, minimizing energy use, by maximizing passive solar heating. Tight building design, including energy-efficient windows, well-sealed doors, and additional thermal insulation of walls, basement slabs, and foundations can reduce heat loss by 25 to 50 percent.
Dark roofs may become up to 39 °C (70 °F) hotter than the most reflective white surfaces, and they transmit some of this additional heat inside the building. US studies have shown that lightly colored roofs use 40 percent less energy for cooling than buildings with darker roofs. White roof systems save more energy in sunnier climates. Advanced electronic heating and cooling systems can moderate energy consumption and improve the comfort of people in the building.
Proper placement of windows and skylights as well as the use of architectural features that reflect light into a building can reduce the need for artificial lighting. Increased use of natural and task lighting has been shown by one study to increase productivity in schools and offices. Compact fluorescent lamps use two-thirds less energy and may last 6 to 10 times longer than incandescent light bulbs. Newer fluorescent lights produce a natural light, and in most applications they are cost effective, despite their higher initial cost, with payback periods as low as a few months. LED lamps use only about 10% of the energy an incandescent lamp requires.
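A small sketch comparing annual consumption under the percentages quoted above; the 60 W incandescent bulb and 3 hours/day of use are illustrative assumptions, not figures from the source:

```python
def annual_kwh(watts: float, hours_per_day: float = 3.0) -> float:
    """Annual electricity use of one lamp, in kWh."""
    return watts * hours_per_day * 365 / 1000

incandescent = annual_kwh(60)        # baseline 60 W bulb
cfl = annual_kwh(60 * (1 - 2 / 3))   # "two-thirds less energy" -> 20 W
led = annual_kwh(60 * 0.10)          # "about 10% of the energy" -> 6 W
print(incandescent, cfl, led)        # ~65.7, ~21.9, ~6.6 kWh/year
```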
Leadership in Energy and Environmental Design (LEED) is a rating system organized by the US Green Building Council (USGBC) to promote environmental responsibility in building design. They currently offer four levels of certification for existing buildings (LEED-EBOM) and new construction (LEED-NC) based on a building's compliance with the following criteria: Sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, and innovation in design. In 2013, USGBC developed the LEED Dynamic Plaque, a tool to track building performance against LEED metrics and a potential path to recertification. The following year, the council collaborated with Honeywell to pull data on energy and water use, as well as indoor air quality from a BAS to automatically update the plaque, providing a near-real-time view of performance. The USGBC office in Washington, D.C. is one of the first buildings to feature the live-updating LEED Dynamic Plaque.
Industry
Industries use a large amount of energy to power a diverse range of manufacturing and resource extraction processes. Many industrial processes require large amounts of heat and mechanical power, most of which is delivered as natural gas, petroleum fuels, and electricity. In addition some industries generate fuel from waste products that can be used to provide additional energy.
Because industrial processes are so diverse it is impossible to describe the multitude of possible opportunities for energy efficiency in industry. Many depend on the specific technologies and processes in use at each industrial facility. There are, however, a number of processes and energy services that are widely used in many industries.
Various industries generate steam and electricity for subsequent use within their facilities. When electricity is generated, the heat that is produced as a by-product can be captured and used for process steam, heating or other industrial purposes. Conventional electricity generation is about 30% efficient, whereas combined heat and power (also called co-generation) converts up to 90 percent of the fuel into usable energy.
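The following sketch spells out the energy balance implied by those percentages; the 100 MJ fuel input is an arbitrary illustrative value:

```python
def useful_output(fuel_mj: float, efficiency: float) -> float:
    """Useful energy delivered from a given fuel input."""
    return fuel_mj * efficiency

fuel = 100.0  # MJ of fuel, illustrative
print(useful_output(fuel, 0.30))  # conventional generation: ~30 MJ of electricity
print(useful_output(fuel, 0.90))  # combined heat and power: up to ~90 MJ of
                                  # usable electricity plus captured heat
```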
Advanced boilers and furnaces can operate at higher temperatures while burning less fuel. These technologies are more efficient and produce fewer pollutants.
Over 45 percent of the fuel used by US manufacturers is burnt to make steam. The typical industrial facility can reduce this energy usage by 20 percent (according to the US Department of Energy) by insulating steam and condensate return lines, stopping steam leakage, and maintaining steam traps.
Electric motors usually run at a constant speed, but a variable speed drive allows the motor's energy output to match the required load. This achieves energy savings ranging from 3 to 60 percent, depending on how the motor is used. Motor coils made of superconducting materials can also reduce energy losses. Motors may also benefit from voltage optimization.
Industry uses a large number of pumps and compressors of all shapes and sizes and in a wide variety of applications. The efficiency of pumps and compressors depends on many factors but often improvements can be made by implementing better process control and better maintenance practices. Compressors are commonly used to provide compressed air which is used for sand blasting, painting, and other power tools. According to the US Department of Energy, optimizing compressed air systems by installing variable speed drives, along with preventive maintenance to detect and fix air leaks, can improve energy efficiency 20 to 50 percent.
Transportation
Automobiles
The estimated energy efficiency for an automobile is 280 passenger-miles per 10⁶ Btu. There are several ways to enhance a vehicle's energy efficiency. Using improved aerodynamics to minimize drag can increase vehicle fuel efficiency. Reducing vehicle weight can also improve fuel economy, which is why composite materials are widely used in car bodies.
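For readers who prefer SI units, this sketch converts the quoted figure; only standard unit-conversion constants are assumed:

```python
MJ_PER_MILLION_BTU = 1055.06  # 1 Btu = 1055.06 J, so 10**6 Btu = 1055.06 MJ
KM_PER_MILE = 1.60934

pax_miles_per_million_btu = 280  # figure quoted above
mj_per_pax_km = MJ_PER_MILLION_BTU / pax_miles_per_million_btu / KM_PER_MILE
print(round(mj_per_pax_km, 2))   # ~2.34 MJ per passenger-kilometre
```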
More advanced tires, with decreased tire-to-road friction and rolling resistance, can save gasoline. Fuel economy can be improved by up to 3.3% by keeping tires inflated to the correct pressure. Replacing a clogged air filter can improve a car's fuel consumption by as much as 10 percent on older vehicles. On newer vehicles (1980s and up) with fuel-injected, computer-controlled engines, a clogged air filter has no effect on mpg, but replacing it may improve acceleration by 6–11 percent. Aerodynamics also aid the efficiency of a vehicle: the design of a car impacts the amount of gas needed to move it through the air.
Turbochargers can increase fuel efficiency by allowing a smaller displacement engine. The 'Engine of the year 2011' is the Fiat TwinAir engine equipped with an MHI turbocharger. "Compared with a 1.2-liter 8v engine, the new 85 HP turbo has 23% more power and a 30% better performance index. The performance of the two-cylinder is not only equivalent to a 1.4-liter 16v engine, but fuel consumption is 30% lower."
Energy-efficient vehicles may reach twice the fuel efficiency of the average automobile. Cutting-edge designs, such as the diesel Mercedes-Benz Bionic concept vehicle, have achieved fuel efficiency as much as four times the conventional automotive average of the time.
The mainstream trend in automotive efficiency is the rise of electric vehicles (all-electric or hybrid electric). Electric motors have more than double the efficiency of internal combustion engines. Hybrids, like the Toyota Prius, use regenerative braking to recapture energy that would dissipate in normal cars; the effect is especially pronounced in city driving. Plug-in hybrids also have increased battery capacity, which makes it possible to drive limited distances without burning any gasoline; in this case, energy efficiency is dictated by whatever process (such as coal-burning, hydroelectric, or renewable source) created the power. Plug-ins can typically drive a limited range purely on electricity without recharging; if the battery runs low, a gas engine kicks in, allowing for extended range. Finally, all-electric cars are also growing in popularity; the Tesla Model S sedan is the only high-performance all-electric car currently on the market.
Street lighting
Cities around the globe light up millions of streets with 300 million lights. Some cities are seeking to reduce street light power consumption by dimming lights during off-peak hours or switching to LED lamps. LED lamps are known to reduce energy consumption by 50% to 80%.
Aircraft
There are several ways to improve aviation's use of energy through modifications to aircraft and to air traffic management. Aircraft improve with better aerodynamics, better engines, and reduced weight. Seat density and cargo load factors also contribute to efficiency.
Air traffic management systems can allow automation of takeoff, landing, and collision avoidance, as well as within airports, from simple things like HVAC and lighting to more complex tasks such as security and scanning.
International Action
International agreements and pledges
At the 2023 United Nations Climate Change Conference, one of the adopted declarations was the Global Renewables and Energy Efficiency Pledge, signed by 123 countries. The declaration includes obligations to consider energy efficiency as the "first fuel" and to double the rate of increase in energy efficiency from 2% per year to 4% per year by the year 2030. China and India did not sign this pledge.
International standards
International standards ISO 17743 and ISO 17742 provide a documented methodology for calculating and reporting on energy savings and energy efficiency for countries and cities.
Examples by country or region
Europe
The first EU-wide energy efficiency target was set in 1998. Member states agreed to improve energy efficiency by 1 percent a year over twelve years. In addition, legislation about products, industry, transport and buildings has contributed to a general energy efficiency framework. More effort is needed to address heating and cooling: there is more heat wasted during electricity production in Europe than is required to heat all buildings in the continent. All in all, EU energy efficiency legislation is estimated to deliver savings worth the equivalent of up to 326 million tons of oil per year by 2020.
The EU set itself a 20% energy savings target by 2020 compared to 1990 levels, but member states decide individually how energy savings will be achieved. At an EU summit in October 2014, EU countries agreed on a new energy efficiency target of 27% or greater by 2030. One mechanism used to achieve the target of 27% is the 'Suppliers Obligations & White Certificates'. The ongoing debate around the 2016 Clean Energy Package also puts an emphasis on energy efficiency, but the goal will probably remain around 30% greater efficiency compared to 1990 levels. Some have argued that this will not be enough for the EU to meet its Paris Agreement goals of reducing greenhouse gas emissions by 40% compared to 1990 levels.
In the European Union, 78% of enterprises proposed energy-saving methods in 2023, 67% listed energy contract renegotiation as a strategy, and 62% stated passing on costs to consumers as a plan to deal with energy market trends. Larger organisations were found more likely to invest in energy efficiency, green innovation, and climate change, with a significant rise in energy efficiency investments reported by SMEs and mid-cap companies.
Germany
Energy efficiency is central to energy policy in Germany.
As of late 2015, national policy included a set of efficiency and consumption targets, with actual values reported for 2014.
Recent progress toward improved efficiency has been steady aside from the financial crisis of 2007–08.
Some, however, believe energy efficiency is still under-recognized in terms of its contribution to Germany's energy transformation (or Energiewende).
Efforts to reduce final energy consumption in the transport sector have not been successful, with growth of 1.7% between 2005 and 2014. This growth is due to both road passenger and road freight transport. Both sectors increased their overall distance travelled to record the highest figures ever for Germany. Rebound effects played a significant role, both between improved vehicle efficiency and the distance travelled, and between improved vehicle efficiency and an increase in vehicle weights and engine power.
In 2014, the German federal government released its National Action Plan on Energy Efficiency (NAPE).
The areas covered are the energy efficiency of buildings, energy conservation for companies, consumer energy efficiency, and transport energy efficiency. The central short-term measures of NAPE include the introduction of competitive tendering for energy efficiency, the raising of funding for building renovation, the introduction of tax incentives for efficiency measures in the building sector, and the setting up energy efficiency networks together with business and industry.
In 2016, the German government released a green paper on energy efficiency for public consultation (in German). It outlines the potential challenges and actions needed to reduce energy consumption in Germany over the coming decades. At the document's launch, economics and energy minister Sigmar Gabriel said "we do not need to produce, store, transmit and pay for the energy that we save". The green paper prioritizes the efficient use of energy as the "first" response and also outlines opportunities for sector coupling, including using renewable power for heating and transport. Other proposals include a flexible energy tax which rises as petrol prices fall, thereby incentivizing fuel conservation despite low oil prices.
Spain
In Spain, four out of every five buildings use more energy than they should. They are either inadequately insulated or consume energy inefficiently.
The Unión de Créditos Immobiliarios (UCI), which has operations in Spain and Portugal, is increasing loans to homeowners and building management groups for energy-efficiency initiatives. Its Residential Energy Rehabilitation initiative aims to remodel and encourage the use of renewable energy in at least 3,720 homes in Madrid, Barcelona, Valencia, and Seville. The works are expected to mobilize around €46.5 million in energy efficiency upgrades by 2025 and save approximately 8.1 GWh of energy, reducing carbon emissions by an estimated 7,545 tonnes per year.
Poland
In May 2016 Poland adopted a new Act on Energy Efficiency, which entered into force on 1 October 2016.
Australia
In July 2009, the Council of Australian Governments, which represents the individual states and territories of Australia, agreed to a National Strategy on Energy Efficiency (NSEE). This is a ten-year plan accelerating the implementation of a nationwide adoption of energy-efficient practices and a preparation for the country's transformation into a low carbon future. The overriding agreement that governs this strategy is the National Partnership Agreement on Energy Efficiency.
Canada
In August 2017, the Government of Canada released Build Smart - Canada's Buildings Strategy, as a key driver of the Pan-Canadian Framework on Clean Growth and Climate Change, Canada's national climate strategy.
United States
A 2011 Energy Modeling Forum study covering the United States examined how energy efficiency opportunities will shape future fuel and electricity demand over the next several decades. The US economy is already set to lower its energy and carbon intensity, but explicit policies will be necessary to meet climate goals. These policies include: a carbon tax, mandated standards for more efficient appliances, buildings and vehicles, and subsidies or reductions in the upfront costs of new more energy-efficient equipment.
Programs and organizations:
Alliance to Save Energy
American Council for an Energy-Efficient Economy
Building Codes Assistance Project
Building Energy Codes Program
Consortium for Energy Efficiency
Energy Star, from United States Environmental Protection Agency
See also
Carbon footprint
Energy audit
Energy conservation measures
Energy efficiency implementation
Energy law
Energy recovery
Energy recycling
Energy resilience
List of least carbon efficient power stations
Waste-to-energy
References
Energy efficiency
Energy policy
Industrial ecology
Sustainable energy | Efficient energy use | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 4,533 | [
"Industrial engineering",
"Energy policy",
"Environmental engineering",
"Industrial ecology",
"Environmental social science"
] |
11,944,175 | https://en.wikipedia.org/wiki/Zirconia%20toughened%20alumina | Zirconia toughened alumina is a ceramic material comprising alumina and zirconia. It is a composite ceramic material with zirconia grains in the alumina matrix.
It is also known in industry as ZTA.
Zirconia alumina (or zirconia toughened alumina), a combination of zirconium oxide and aluminum oxide, is part of a class of composite ceramics called AZ composites. Noted for their mechanical properties, AZ composites are commonly used in structural applications, as cutting tools, and in many medical applications. Additionally, AZ composites feature high strength, fracture toughness, elasticity, hardness, and wear resistance. Zirconia toughened alumina (ZTA), in particular, offers several key properties.
Structure
The mechanical robustness of ZTA compared to alumina is attributed to the displacive phase transformation of the metastable tetragonal zirconia grains when the material is stressed. The stress concentration at a crack tip can trigger a transformation from the tetragonal to the monoclinic crystal structure, which is accompanied by a volume expansion of the zirconia. This volume expansion effectively pushes back against the propagating crack and results in higher toughness and strength. A common specimen of zirconia toughened alumina contains 10–20% zirconium oxide. The resulting 20–30% increase in strength often meets the required design criteria at a much lower cost. By varying the zirconia percentage, the properties of the ceramic can be tailored to the application. ZTA is generally regarded as an intermediary between alumina and zirconia and is priced as such, giving it a much lower price range than other comparable materials. The increase in composite strength arises from a process called stress-induced transformation toughening: internal strains around a crack in the material allow the metastable zirconia particles to switch phase and expand within the alumina matrix, so that the zirconia occupies more volume among the same alumina particles, closing the crack and producing the increase in strength.
Chemical and mechanical properties
Uses
There are many uses for zirconia toughened alumina, including valve seals, bushings, pump components, joint implants, wire bonding capillaries, and cutting tool inserts. ZTA's diverse range of properties gives it importance in an array of applications. In the medical industry, ZTA serves as a ceramic that can be used in joint replacement and rehabilitation; its high wear resistance helps create high-performance implants. Because of its high strength and corrosion resistance, ZTA can withstand heavy loads without succumbing to degradation, giving it many uses in load-bearing applications. ZTA's toughness also makes it suitable for cutting tools, and ZTA and other aluminas are often used in metal-cutting applications. Certain engine components, labware, industrial crucibles and refractory tubes can be manufactured using ZTA, as can abrasives for applications such as sandblasting.
References
External links
Material Properties Data: Zirconia-Toughened Alumina (ZTA)
Ceramic materials
Zirconium dioxide
Aluminium compounds
Composite materials | Zirconia toughened alumina | [
"Physics",
"Engineering"
] | 725 | [
"Composite materials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
11,944,410 | https://en.wikipedia.org/wiki/Trimaximal%20mixing | Trimaximal mixing (also known as threefold maximal mixing) refers to the highly symmetric, maximally CP-violating, fermion mixing configuration, characterised by a unitary matrix $U$ having all its elements equal in modulus ($|U_{\alpha i}| = 1/\sqrt{3}$, $\alpha, i = 1, 2, 3$), as may be written, e.g.:

$$U = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 1 & 1 \\ 1 & \omega & \bar{\omega} \\ 1 & \bar{\omega} & \omega \end{pmatrix}$$

where $\omega = \exp(2\pi i/3)$ and $\bar{\omega} = \exp(-2\pi i/3)$
are the complex cube roots of unity. In the standard PDG convention, trimaximal mixing corresponds to: $\theta_{12} = \theta_{23} = 45^\circ$, $\sin\theta_{13} = 1/\sqrt{3}$ and $\delta = \pm 90^\circ$. The Jarlskog CP-violating parameter takes its extremal value $|J| = 1/(6\sqrt{3}) \simeq 0.096$.
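As a numerical check (not part of the original article), the following numpy sketch verifies that the matrix above is unitary and reproduces the quoted extremal Jarlskog value:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)  # complex cube root of unity
U = np.array([[1, 1, 1],
              [1, w, w.conjugate()],
              [1, w.conjugate(), w]]) / np.sqrt(3)

# Unitarity: U times its conjugate transpose is the identity.
assert np.allclose(U @ U.conj().T, np.eye(3))

# Jarlskog invariant J = Im(U_e1 U_mu2 conj(U_e2) conj(U_mu1)).
J = np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0]))
print(J, 1 / (6 * np.sqrt(3)))  # both ~0.0962
```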
Originally proposed as a candidate lepton mixing matrix, and actively studied as such (and even as a candidate quark mixing matrix), trimaximal mixing is now definitively ruled out as a phenomenologically viable lepton mixing scheme by neutrino oscillation experiments, especially the Chooz reactor experiment, in favour of the related, though itself no longer tenable, tribimaximal mixing scheme.
References
Leptons
Standard Model
Particle physics
Neutrinos | Trimaximal mixing | [
"Physics"
] | 203 | [
"Standard Model",
"Particle physics"
] |
11,945,196 | https://en.wikipedia.org/wiki/Hydrogen%20isocyanide | Hydrogen isocyanide is a chemical with the molecular formula HNC. It is a minor tautomer of hydrogen cyanide (HCN). Its importance in the field of astrochemistry is linked to its ubiquity in the interstellar medium.
Nomenclature
Both hydrogen isocyanide and azanylidyniummethanide are correct IUPAC names for HNC. There is no preferred IUPAC name. The second one follows the substitutive nomenclature rules, being derived from the parent hydride azane (NH3) and the anion methanide.
Molecular properties
Hydrogen isocyanide (HNC) is a linear triatomic molecule with C∞v point group symmetry. It is a zwitterion and an isomer of hydrogen cyanide (HCN). Both HNC and HCN have large, similar dipole moments, with μHNC = 3.05 Debye and μHCN = 2.98 Debye respectively. These large dipole moments facilitate the easy observation of these species in the interstellar medium.
HNC−HCN tautomerism
As HNC is higher in energy than HCN by 3920 cm⁻¹ (46.9 kJ/mol), one might assume that the two would have an equilibrium abundance ratio [HNC]/[HCN] at temperatures below 100 kelvin on the order of 10⁻²⁵. However, observations lead to a very different conclusion: [HNC]/[HCN] is much higher than 10⁻²⁵, and is in fact on the order of unity in cold environments. This is because of the potential energy path of the tautomerization reaction: there is an activation barrier on the order of roughly 12,000 cm⁻¹ for the tautomerization to occur, which corresponds to a temperature at which HNC would already have been destroyed by neutral-neutral reactions.
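The 10⁻²⁵ figure can be reproduced with a one-line Boltzmann factor. This sketch (not from the source) assumes a simple two-level population ratio exp(−ΔE/kT) and uses the standard conversion hc/k ≈ 1.4388 cm·K:

```python
import math

delta_e = 3920.0    # HNC-HCN energy difference in cm^-1 (from the text)
hc_over_k = 1.4388  # cm*K, converts wavenumbers to kelvin
T = 100.0           # temperature in kelvin

ratio = math.exp(-delta_e * hc_over_k / T)
print(f"{ratio:.1e}")  # ~3e-25, the naive equilibrium [HNC]/[HCN]
```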
Spectral properties
In practice, HNC is almost exclusively observed astronomically using the J = 1→0 transition. This transition occurs at ~90.66 GHz, which is a point of good visibility in the atmospheric window, thus making astronomical observations of HNC particularly simple. Many other related species (including HCN) are observed in roughly the same window.
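Under the rigid-rotor approximation (an assumption here; centrifugal distortion is neglected), a linear molecule's J+1→J line sits at ν = 2B(J+1), so the quoted J = 1→0 frequency fixes the rotational constant and predicts the next transition:

```python
nu_10 = 90.66     # GHz, J = 1 -> 0 transition frequency (from the text)
B = nu_10 / 2     # rigid-rotor rotational constant, ~45.33 GHz

# Predicted J = 2 -> 1 frequency for a rigid rotor: 2 * B * (J + 1) with J = 1.
print(2 * B * 2)  # ~181.3 GHz
```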
Significance in the interstellar medium
HNC is intricately linked to the formation and destruction of numerous other molecules of importance in the interstellar medium—aside from the obvious partners HCN, protonated hydrogen cyanide (HCNH+), and cyanide (CN), HNC is linked to the abundances of many other compounds, either directly or through a few degrees of separation. As such, an understanding of the chemistry of HNC leads to an understanding of countless other species—HNC is an integral piece in the complex puzzle representing interstellar chemistry.
Furthermore, HNC (alongside HCN) is a commonly used tracer of dense gas in molecular clouds. Aside from the potential to use HNC to investigate gravitational collapse as the means of star formation, HNC abundance (relative to the abundance of other nitrogenous molecules) can be used to determine the evolutionary stage of protostellar cores.
The HCO+/HNC line ratio is used to good effect as a measure of density of gas. This information provides great insight into the mechanisms of the formation of (Ultra-)Luminous Infrared Galaxies ((U)LIRGs), as it provides data on the nuclear environment, star formation, and even black hole fueling. Furthermore, the HNC/HCN line ratio is used to distinguish between photodissociation regions and X-ray-dissociation regions on the basis that [HNC]/[HCN] is roughly unity in the former, but greater than unity in the latter.
The study of HNC is relatively straightforward, which is a major motivation for its research. Its J = 1→0 transition occurs in a clear portion of the atmospheric window, and it has numerous isotopomers that are easily studied. Additionally, its large dipole moment makes observations particularly simple. Moreover, HNC is a fundamentally simple molecule in its molecular nature. This makes the study of the reaction pathways that lead to its formation and destruction a good means of obtaining insight to the workings of these reactions in space. Furthermore, the study of the tautomerization of HNC to HCN (and vice versa), which has been studied extensively, has been suggested as a model by which more complicated isomerization reactions can be studied.
Chemistry in the interstellar medium
HNC is found primarily in dense molecular clouds, though it is ubiquitous in the interstellar medium. Its abundance is closely linked to the abundances of other nitrogen-containing compounds. HNC is formed primarily through the dissociative recombination of HCNH+ and H2NC+, and it is destroyed primarily through ion-neutral reactions with ions such as C+. Rate calculations were done at 3.16 × 10⁵ years, which is considered early time, and at 20 K, which is a typical temperature for dense molecular clouds.
These four reactions are merely the most dominant, and thus the most significant in setting the HNC abundance in dense molecular clouds; there are dozens more reactions for the formation and destruction of HNC. Though these reactions primarily lead to various protonated species, HNC is linked closely to the abundances of many other nitrogen-containing molecules, for example NH3 and CN. The abundance of HNC is also inextricably linked to the abundance of HCN, and the two tend to exist in a specific ratio based on the environment. This is because the reactions that form HNC can often also form HCN, and vice versa, depending on the conditions in which the reaction occurs, and also because there exist isomerization reactions between the two species.
Astronomical detections
HCN (not HNC) was first detected in June 1970 by L. E. Snyder and D. Buhl using the 36-foot radio telescope of the National Radio Astronomy Observatory. The main molecular isotope, H¹²C¹⁴N, was observed via its J = 1→0 transition at 88.6 GHz in six different sources: W3 (OH), Orion A, Sgr A(NH3A), W49, W51, and DR 21(OH). A secondary molecular isotope, H¹³C¹⁴N, was observed via its J = 1→0 transition at 86.3 GHz in only two of these sources: Orion A and Sgr A(NH3A). HNC was later detected extragalactically in 1988 using the IRAM 30-m telescope at the Pico de Veleta in Spain. It was observed via its J = 1→0 transition at 90.7 GHz toward IC 342.
A number of detections have been made toward the end of confirming the temperature dependence of the abundance ratio [HNC]/[HCN]. A strong fit between temperature and the abundance ratio would allow observers to measure the ratio spectroscopically and then extrapolate the temperature of the environment, gaining great insight into the environment of the species. The abundance ratio of rare isotopes of HNC and HCN along OMC-1 varies by more than an order of magnitude in warm regions versus cold regions. In 1992, the abundances of HNC, HCN, and deuterated analogs along the OMC-1 ridge and core were measured, and the temperature dependence of the abundance ratio was confirmed. A survey of the W3 Giant Molecular Cloud in 1997 detected over 24 different molecular isotopes, comprising over 14 distinct chemical species, including HNC, HN¹³C, and H¹⁵NC. This survey further confirmed the temperature dependence of [HNC]/[HCN], this time confirming the dependence for the isotopomers as well.
These are not the only important detections of HNC in the interstellar medium. In 1997, HNC was observed along the TMC-1 ridge and its abundance relative to HCO+ was found to be constant along the ridge; this lent credence to the reaction pathway that posits that HNC is derived initially from HCO+. One significant astronomical detection demonstrating the practical use of observing HNC occurred in 2006, when abundances of various nitrogenous compounds (including HN¹³C and H¹⁵NC) were used to determine the stage of evolution of the protostellar core Cha-MMS1, based on the relative magnitudes of the abundances.
On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).
See also
Isocyanide
External links
Hydrogen isocyanide on NIST Chemistry WebBook
References
Hydrogen compounds
Triatomic molecules
Isocyanides
Zwitterions | Hydrogen isocyanide | [
"Physics",
"Chemistry"
] | 1,840 | [
"Matter",
"Molecules",
"Functional groups",
"Triatomic molecules",
"Zwitterions",
"Isocyanides",
"Ions"
] |
11,945,338 | https://en.wikipedia.org/wiki/Exhibit%20%28web%20editing%20tool%29 | Exhibit (part of the SIMILE Project) is a lightweight, structured-data publishing framework that allows developers to create web pages with support for sorting, filtering and rich visualizations. Oriented towards semantic web-type problems, Exhibit can be implemented by writing rich data out to HTML then configuring some CSS and JavaScript code.
Overview
Technically, Exhibit is a collection of JavaScript files to be included in a web page. When Exhibit pages are loaded by a web browser, the JavaScript reads in one or more JSON data files and builds a local database in the memory of the machine running the browser. Data can then be filtered and sorted directly in the browser without having to re-query the server. The design of Exhibit is optimized for browsing faceted data.
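The in-browser workflow can be illustrated with a conceptual sketch. The sketch below is Python rather than Exhibit's actual JavaScript API, and while the data layout (an "items" array of objects with label and type properties) follows Exhibit's JSON convention, the field names here should be treated as assumptions:

```python
import json

# Stand-in for an Exhibit JSON data file loaded by the page.
data = json.loads("""
{"items": [
  {"label": "Alpha", "type": "Paper", "year": "2005"},
  {"label": "Beta",  "type": "Paper", "year": "2006"},
  {"label": "Gamma", "type": "Talk",  "year": "2006"}
]}""")

def facet(items, field, value):
    """Filter the in-memory 'database' on one facet value,
    the way Exhibit filters client-side without re-querying a server."""
    return [item for item in items if item.get(field) == value]

print([item["label"] for item in facet(data["items"], "year", "2006")])
# ['Beta', 'Gamma']
```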
The Exhibit code base is currently being developed by members of the SIMILE Project at MIT.
References
External links
Website of the Exhibit Widget
Exhibit Wiki
Official SIMILE Project website
Web development software
Semantic Web | Exhibit (web editing tool) | [
"Technology"
] | 202 | [
"Computing stubs",
"World Wide Web stubs"
] |
11,945,645 | https://en.wikipedia.org/wiki/2%CF%80%20theorem | In mathematics, the 2π theorem of Gromov and Thurston states a sufficient condition for Dehn filling on a cusped hyperbolic 3-manifold to result in a negatively curved 3-manifold.
Let M be a cusped hyperbolic 3-manifold. Disjoint horoball neighborhoods of each cusp can be selected. The boundaries of these neighborhoods are quotients of horospheres and thus carry Euclidean metrics. A slope, i.e. an unoriented isotopy class of simple closed curves on these boundaries, thus has a well-defined length, obtained by taking the minimal Euclidean length over all curves in the isotopy class. The theorem states: a Dehn filling of M with each filling slope of length greater than 2π results in a 3-manifold with a complete metric of negative sectional curvature. In fact, this metric can be chosen to be identical to the original hyperbolic metric outside the horoball neighborhoods.
The basic idea of the proof is to explicitly construct a negatively curved metric inside each horoball neighborhood that matches the metric near the horospherical boundary. This construction, using cylindrical coordinates, works when the filling slope has length greater than 2π. See Bleiler and Hodgson for complete details.
According to the geometrization conjecture, these negatively curved 3-manifolds must actually admit a complete hyperbolic metric. A horoball packing argument due to Thurston shows that there are at most 48 slopes to avoid on each cusp to get a hyperbolic 3-manifold. For one-cusped hyperbolic 3-manifolds, an improvement due to Colin Adams gives 24 exceptional slopes.
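The slope-counting step can be made concrete: slopes on a cusp torus correspond to primitive classes ±(p, q) with Euclidean length |p·m + q·l|, where m and l are the translation vectors of the horospherical torus. A minimal sketch (the cusp shape below is a made-up example, and the fixed search window is a crude bound):

```python
import math
from math import gcd

def short_slopes(m, l, cutoff=2 * math.pi, search=30):
    """Slope classes +/-(p, q) with Euclidean length |p*m + q*l| <= cutoff.

    m and l are the cusp-torus translations, given as complex numbers.
    """
    found = set()
    for p in range(-search, search + 1):
        for q in range(-search, search + 1):
            if (p, q) == (0, 0) or gcd(abs(p), abs(q)) != 1:
                continue  # keep only primitive classes
            if abs(p * m + q * l) <= cutoff:
                found.add(max((p, q), (-p, -q)))  # identify (p, q) ~ (-p, -q)
    return sorted(found)

# Made-up cusp shape: translations 1 and 0.4 + 1.3i (assumed values).
print(len(short_slopes(1 + 0j, 0.4 + 1.3j)))
```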
This result was later improved independently by Ian Agol and Marc Lackenby with the 6 theorem. The 6 theorem states that Dehn filling along slopes of length greater than 6 results in a hyperbolike 3-manifold, i.e. an irreducible, atoroidal, non-Seifert-fibered 3-manifold with infinite, word-hyperbolic fundamental group. Again assuming the geometrization conjecture, these manifolds admit a complete hyperbolic metric. An argument of Agol's shows that there are at most 12 exceptional slopes.
References
Agol, Ian (2000), "Bounds on exceptional Dehn filling", Geometry & Topology, 4: 431–449.
Bleiler, Steven A.; Hodgson, Craig D. (1996), "Spherical space forms and Dehn filling", Topology, 35 (3): 809–833.
Lackenby, Marc (2000), "Word hyperbolic Dehn surgery", Inventiones Mathematicae, 140 (2): 243–282.
3-manifolds
Theorems in geometry | 2π theorem | [
"Mathematics"
] | 446 | [
"Mathematical theorems",
"Mathematical problems",
"Geometry",
"Theorems in geometry"
] |
11,945,733 | https://en.wikipedia.org/wiki/Radial%20turbine | A radial turbine is a turbine in which the flow of the working fluid is radial to the shaft. The difference between axial and radial turbines consists in the way the fluid flows through the components (compressor and turbine). Whereas for an axial turbine the rotor is 'impacted' by the fluid flow, for a radial turbine, the flow is smoothly oriented perpendicular to the rotation axis, and it drives the turbine in the same way water drives a watermill. The result is less mechanical stress (and less thermal stress, in case of hot working fluids) which enables a radial turbine to be simpler, more robust, and more efficient (in a similar power range) when compared to axial turbines. When it comes to high power ranges (above 5 MW) the radial turbine is no longer competitive (due to its heavy and expensive rotor) and the efficiency becomes similar to that of the axial turbines.
Advantages and challenges
Compared to an axial flow turbine, a radial turbine can employ a relatively high pressure ratio (≈4) per stage with lower flow rates. Thus these machines fall in the lower specific speed and power ranges. For high-temperature applications, rotor blade cooling in radial stages is not as easy as in axial turbine stages. Variable-angle nozzle blades can give higher stage efficiencies in a radial turbine stage even at off-design operation. In the family of water turbines, the Francis turbine is a very well-known inward-flow radial (IFR) turbine which generates much greater power with a relatively large impeller.
Components of radial turbines
The radial and tangential components of the absolute velocity c2 are cr2 and cθ2, respectively. The relative velocity of the flow and the peripheral speed of the rotor are w2 and u2, respectively. The air angle at the rotor blade entry is given by tan β2 = cr2/(cθ2 − u2).
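A numerical sketch of this velocity triangle (the sample values are invented for illustration):

```python
import math

# Rotor-entry velocity triangle with invented sample values (m/s).
c_r2 = 140.0   # radial component of absolute velocity (assumed)
c_th2 = 350.0  # tangential component of absolute velocity (assumed)
u2 = 300.0     # rotor peripheral speed at entry (assumed)

c2 = math.hypot(c_r2, c_th2)                        # absolute velocity
w2 = math.hypot(c_r2, c_th2 - u2)                   # relative velocity
beta2 = math.degrees(math.atan2(c_r2, c_th2 - u2))  # air angle at entry

print(f"c2 = {c2:.1f} m/s, w2 = {w2:.1f} m/s, beta2 = {beta2:.1f} deg")
```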
Enthalpy and entropy diagram
The stagnation state of the gas at the nozzle entry is represented by point 01. The gas expands adiabatically in the nozzles from a pressure p1 to p2 with an increase in its velocity from c1 to c2. Since this is an energy transformation process, the stagnation enthalpy remains constant but the stagnation pressure decreases (p01 > p02) due to losses.
Energy transfer, accompanied by an energy transformation process, occurs in the rotor.
Spouting velocity
A reference velocity (c0) known as the isentropic velocity, spouting velocity or stage terminal velocity is defined as that velocity which will be obtained during an isentropic expansion of the gas between the entry and exit pressures of the stage.
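Formally, using the common station convention in which 01 denotes the stage inlet stagnation state and 3ss the static state reached by an isentropic expansion to the exit pressure (a sketch; the notation is assumed):

```latex
\tfrac{1}{2}\,c_0^{2} = h_{01} - h_{3ss}
\qquad\Longrightarrow\qquad
c_0 = \sqrt{2\,(h_{01} - h_{3ss})}
```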
Stage efficiency
The total-to-static efficiency is the ratio of the actual stage work to the ideal work obtainable in an isentropic expansion from the inlet stagnation state to the exit static pressure: ηts = (h01 − h03)/(h01 − h3ss).
Degree of reaction
The relative pressure or enthalpy drop in the nozzle and rotor blades is determined by the degree of reaction of the stage. This is defined as the ratio of the static enthalpy drop in the rotor to the stagnation enthalpy drop across the stage, R = (h2 − h3)/(h01 − h03); expanding the rotor enthalpy drop via the velocity triangles gives R = [(u2² − u3²) + (w3² − w2²)]/(2u2cθ2).
The two quantities within the parentheses in the numerator may have the same or opposite signs. This, besides other factors, also governs the value of the reaction. The stage reaction decreases as cθ2 increases, because this causes a larger proportion of the stage enthalpy drop to occur in the nozzle ring.
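A numerical check of this expression (all values invented for illustration):

```python
# Degree of reaction from the velocity triangles; sample values in m/s
# are invented, not taken from a real machine.
u2, u3 = 300.0, 120.0    # peripheral speeds at rotor inlet and exit
w2, w3 = 150.0, 250.0    # relative velocities at rotor inlet and exit
c_th2 = 330.0            # tangential absolute velocity at rotor inlet

R = ((u2**2 - u3**2) + (w3**2 - w2**2)) / (2 * u2 * c_th2)
print(f"degree of reaction R = {R:.2f}")  # ~0.58 for these numbers
```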
Stage losses
The stage work is less than the isentropic stage enthalpy drop on account of aerodynamic losses in the stage. The actual output at the turbine shaft is equal to the stage work minus the losses due to rotor disc and bearing friction.
Blade to gas speed ratio
The blade-to-gas speed ratio can be expressed in terms of the isentropic stage terminal velocity c0 as σs = u2/c0. For a stage with radial relative entry (β2 = 90°), cθ2 = u2, so the stage work u2cθ2 equals u2²; equating this to the ideal work ½c0² gives σs = 1/√2 ≈ 0.707.
Outward-flow radial stages
In outward flow radial turbine stages, the flow of the gas or steam occurs from smaller to larger diameters. The stage consists of a pair of fixed and moving blades. The increasing area of cross-section at larger diameters accommodates the expanding gas.
This configuration did not become popular for steam and gas turbines. The only one employed at all commonly is the Ljungström double-rotation turbine. It consists of rings of cantilever blades projecting from two discs rotating in opposite directions. The relative peripheral velocity of the blades in two adjacent rows, with respect to each other, is high, which gives a higher enthalpy drop per stage.
Nikola Tesla's bladeless radial turbine
In the early 1900s, Nikola Tesla developed and patented his bladeless Tesla turbine. One of the difficulties with bladed turbines is the complex and highly precise manufacturing and balancing required for the bladed rotor. Blades are also subject to corrosion and cavitation. Tesla attacked this problem by substituting a series of closely spaced disks for the blades of the rotor. The working fluid flows between the disks and transfers its energy to the rotor by means of the boundary layer effect, i.e. adhesion and viscosity, rather than by impulse or reaction. Tesla stated that his turbine could realize extremely high efficiencies with steam. There has been no documented evidence of Tesla turbines achieving the efficiencies Tesla claimed; they have been found to have low overall efficiencies in the role of a turbine or pump. In recent decades there has been further research into bladeless turbines, including the development of patented designs that work with corrosive or abrasive and hard-to-pump media such as ethylene glycol, fly ash, blood, rocks, and even live fish.
Notes
References
Turbines | Radial turbine | [
"Chemistry"
] | 1,802 | [
"Turbines",
"Turbomachinery"
] |
11,946,274 | https://en.wikipedia.org/wiki/Short%20time%20duty | The short time duty (or short time operation) denotes an operating mode with increased performance sustained for only a short length of time. The term also commonly refers to the maximum load (the performance) at which, and the corresponding maximum length of time for which, a device can be operated without failure.
Device applications
The load can be a maximum power, temperature, rotational speed, torque, acceleration, or any other quantity that affects the mechanical or chemical properties on which the device's correct function depends. For instance, an electric motor can be driven at a higher rotational speed than its nominal (continuous) rating, but after a short time it must be slowed down or switched off to prevent damage; similarly, the operating temperature of a simple oven can be raised to the maximum allowed, but only for a short time, to prevent the oven from starting to burn. In general, at any performance level part of the input energy is dissipated as heat, and the higher the operating power, the more heat must be dissipated. If the performance is too high, the device cannot dissipate the heat to its environment, and its temperature rises in proportion to the energy surplus. If the temperature rises too far, the mechanical and chemical properties of the device begin to change, causing permanent deformation in ductile materials or fracture in brittle ones. Since energy is the product of power and time, either the power (or any physical quantity related to it), as in normal duty, or the time, as in short time duty, must be limited.
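The heat balance described above can be captured by a first-order thermal model, which also yields the maximum permissible duration at a given overload. A minimal sketch (all parameter values are assumed, not taken from a real device):

```python
import math

# First-order thermal model: at power P the temperature rise approaches
# the steady-state value P * R_th with time constant tau. Solving
# T(t) = T_max for t gives the longest safe "short time duty" at P.
def max_duty_time(P, R_th, tau, T_amb, T_max):
    rise_ss = P * R_th                       # steady-state rise, K
    if T_amb + rise_ss <= T_max:
        return math.inf                      # continuous duty is safe
    frac = (T_max - T_amb) / rise_ss
    return -tau * math.log(1.0 - frac)       # seconds until T_max is hit

# Assumed example: 500 W dissipation, 0.2 K/W thermal resistance,
# 120 s time constant, 25 degC ambient, 90 degC limit.
print(f"{max_duty_time(500.0, 0.2, 120.0, 25.0, 90.0):.0f} s")  # ~126 s
```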
A common application is high-current measurement (typically up to 10 or 20 A) with multimeters. The maximum duration of the short time duty (typically 10 to 30 seconds) is indicated near the corresponding socket (usually the left one), together with the minimum waiting time before such a measurement may be repeated (typically "every 15 minutes"). Another common application is inflation with air compressors: these become very hot if used for too long and must be shut down after a certain time (for small compressors such as the one pictured, a typical limit is 10 minutes).
Protection
Only in some cases is there built-in device protection, for instance a fuse in electrical devices (e.g. in a halogen lamp, to prevent damage from the use of bulbs of too high a power) or a pressure relief valve in hydraulic devices (e.g. a pressure cooker); many others (most hair dryers) have no protection, and the user only realizes that the short time duty has been exceeded when, for example, overheating has already damaged the device.
Standardisation
Many manufacturers indicate this short time duty only in the device's instruction handbook; others indicate it on the device itself. At this time (2007) there is no international identifier for this property. In Germany, for instance, it is indicated with KB (Kurzzeitbetrieb, "short time operation") and the time in minutes; in France there is no abbreviation but it is referred to as service temporaire, in Spanish it is indicated as servicio temporal, in Italian as servizio di durata limitata, and so on.
Human applications
The concept of short time duty is easy to understand when applied to human beings in medical contexts, both somatic (body) and psychological (mind). The analogy with devices is useful for understanding both types of application and should not be dismissed on the assumption that human beings are far superior to their machines or devices: in this case, observing a short time duty prevents disease.
The best example is radiation exposure (X-rays, gamma radiation or particle radiation), for which a maximum short-time dose, as well as cumulative annual and lifetime doses, are well defined for every type of radiation in order to avoid cancer (e.g. a maximum number of dental X-rays per year). In this case the maximum load of the definition is represented by the dose (see also radiation safety). A controversial field is the use of mobile (cellular) phones, for which no harm has been proven.
Another simple example of a well defined short time duty is the exposure time of sunscreen lotions (in this case the load is again an electromagnetic radiation dose, but of lower photon energy and lower frequency, i.e. ultraviolet light, UV).
A final good example is allergies: in this case the load is the concentration of the allergenic substance, and the short time duty depends on many factors such as age, genetics, medical condition and so on.
Besides these somatic examples, many everyday situations can influence the proper functioning of human beings: a stressful situation (an annoying noise, or even mobbing at work) can be tolerated for a certain short "duty" time (seconds or days, depending on its impact and on one's patience), but beyond that it can cause irritability or lead to serious depression.
Protection
The human body also has some built-in protections, but only to a moderate extent (there is no fuse that can simply be replaced in case of overload). The best protection for human beings is, on the one hand, health education, which is equivalent to a handbook for one's own body (body side), and on the other, love, friendship and politeness in human relations (mind side).
Standardisation
All human beings have a similar resistance to most biological agents and exposures, with some exceptions (e.g. UV short time duty or allergies); since human gene sequences are remarkably homogeneous, there is fairly good standardisation, and human beings recognize one another as similar and intuitively know which short time duties they can withstand.
Only in psychological respects does the short time duty differ greatly from person to person, depending on culture and religion (sneezing in a Far Eastern country may quickly irritate one's companions, while in a Western country it merely elicits commiseration).
References
Reliability engineering
Security | Short time duty | [
"Engineering"
] | 1,228 | [
"Systems engineering",
"Reliability engineering"
] |