id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
78,908,798 | https://en.wikipedia.org/wiki/Zelenirstat | Zelenirstat, also known as PCLX-001, is an investigational new drug that is being evaluated for the treatment of cancer and as an antiviral agent. It is a small molecule inhibitor that targets both N-myristoyltransferase 1 (NMT1) and N-myristoyltransferase 2 (NMT2) proteins, which are responsible for myristoylation. Its dual mechanism of action disrupts both cell signaling and energy production in cancer cells.
Zelenirstat is a potent pan-N-myristoyltransferase inhibitor, which prevents the addition of myristic acid to the penultimate glycine of proteins carrying a myristoylation signal, and was initially introduced as an anti-tumor drug. It has completed a phase I clinical trial and is going through a dose-escalation phase. Its prototype DDD85646 and its analogue IMP-1088 have strong antiviral activities against viruses that require myristoylated proteins to complete their life cycle, including hemorrhagic fever viruses, such as Lassa virus and the Argentine hemorrhagic fever virus, and poxviruses, such as vaccinia and monkeypox.
Mechanism of action
Zelenirstat acts by inhibiting NMT I and II enzymes, which are required to complete the myristoylation of proteins. Without myristoylation, these proteins are targeted for proteasomal degradation.
References
Antineoplastic drugs
Antiviral drugs
Aminopyridines
Chloroarenes
Isobutyl compounds
Piperazines
Pyrazoles
Sulfonamides | Zelenirstat | Biology | 325 |
8,519,857 | https://en.wikipedia.org/wiki/Monte%20Carlo%20method%20in%20statistical%20mechanics | Monte Carlo in statistical physics refers to the application of the Monte Carlo method to problems in statistical physics, or statistical mechanics.
Overview
The general motivation to use the Monte Carlo method in statistical physics is to evaluate a multivariable integral. The typical problem begins with a system for which the Hamiltonian is known, which is at a given temperature and follows Boltzmann statistics. To obtain the mean value of some macroscopic variable, say A, the general approach is to compute, over all the phase space (PS for simplicity), the mean value of A using the Boltzmann distribution:

$\langle A \rangle = \frac{1}{Z}\int_{PS} A(\vec{r})\, e^{-\beta E(\vec{r})}\, d\vec{r}$,

where $E(\vec{r})$ is the energy of the system for a given state defined by $\vec{r}$, a vector with all the degrees of freedom (for instance, for a mechanical system, $\vec{r} = (\vec{q}, \vec{p})$), $\beta \equiv 1/k_B T$, and

$Z = \int_{PS} e^{-\beta E(\vec{r})}\, d\vec{r}$

is the partition function.
One possible approach to solve this multivariable integral is to exactly enumerate all possible configurations of the system, and calculate averages at will. This is done in exactly solvable systems, and in simulations of simple systems with few particles. In realistic systems, on the other hand, an exact enumeration can be difficult or impossible to implement.
For those systems, Monte Carlo integration (not to be confused with the Monte Carlo method used to simulate molecular chains) is generally employed. The main motivation for its use is the fact that, with Monte Carlo integration, the error scales as $1/\sqrt{N}$ independently of the dimension of the integral, where N is the number of sampled points. Another important concept related to Monte Carlo integration is importance sampling, a technique that improves the computational time of the simulation.
In the following sections, the general implementation of Monte Carlo integration for solving this kind of problem is discussed.
Importance sampling
An estimation, under Monte Carlo integration, of an integral defined as

$\langle A \rangle = \frac{\int_{PS} A(\vec{r})\, e^{-\beta E(\vec{r})}\, d\vec{r}}{\int_{PS} e^{-\beta E(\vec{r})}\, d\vec{r}}$

is

$\langle A \rangle \approx \frac{\sum_{i=1}^{N} A(\vec{r}_i)\, e^{-\beta E(\vec{r}_i)}}{\sum_{i=1}^{N} e^{-\beta E(\vec{r}_i)}}$,

where the $\vec{r}_i$ are uniformly obtained from all the phase space (PS) and N is the number of sampling points (or function evaluations).
From all the phase space, some zones are generally more important to the mean of the variable than others. In particular, the states whose Boltzmann factor $e^{-\beta E}$ is sufficiently high compared to the rest of the energy spectrum are the most relevant for the integral. Using this fact, the natural question to ask is: is it possible to choose, with more frequency, the states that are known to be more relevant to the integral? The answer is yes, using the importance sampling technique.
Let us assume $p(\vec{r})$ is a distribution that chooses the states that are known to be more relevant to the integral.
The mean value of A can be rewritten as

$\langle A \rangle = \dfrac{\left\langle \frac{A(\vec{r})\, e^{-\beta E(\vec{r})}}{p(\vec{r})} \right\rangle_p}{\left\langle \frac{e^{-\beta E(\vec{r})}}{p(\vec{r})} \right\rangle_p}$,

where $\langle \cdot \rangle_p$ denotes values sampled taking into account the importance probability $p(\vec{r})$. This integral can be estimated by

$\langle A \rangle \approx \dfrac{\sum_{i=1}^{N} \frac{A(\vec{r}_i)\, e^{-\beta E(\vec{r}_i)}}{p(\vec{r}_i)}}{\sum_{i=1}^{N} \frac{e^{-\beta E(\vec{r}_i)}}{p(\vec{r}_i)}}$,

where the $\vec{r}_i$ are now randomly generated using the $p(\vec{r})$ distribution. Since most of the time it is not easy to find a way of generating states with a given distribution, the Metropolis algorithm must be used.
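As a concrete illustration of these estimators, the following sketch (illustrative code, not part of the original article; all names and parameter values are arbitrary choices) estimates the thermal average of the energy of a single degree of freedom with E(x) = x², first by Boltzmann-weighted uniform sampling and then by importance sampling from a distribution proportional to the Boltzmann factor; both should recover the equipartition value 1/(2β).

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
n_samples = 200_000

def energy(x):
    return x ** 2          # toy "Hamiltonian" with a single degree of freedom

# Uniform sampling over a box that (approximately) covers the relevant phase
# space, followed by Boltzmann-weighted averages as in the estimator above.
box = 10.0
x = rng.uniform(-box, box, n_samples)
w = np.exp(-beta * energy(x))
mean_E_uniform = np.sum(energy(x) * w) / np.sum(w)

# Importance sampling with p(x) proportional to the Boltzmann factor: for
# E(x) = x**2 this p is a Gaussian of variance 1/(2*beta), and the estimator
# reduces to a plain average of the sampled energies.
x_imp = rng.normal(0.0, np.sqrt(1.0 / (2.0 * beta)), n_samples)
mean_E_importance = np.mean(energy(x_imp))

print(f"uniform sampling estimate:    {mean_E_uniform:.4f}")
print(f"importance sampling estimate: {mean_E_importance:.4f}")
print(f"exact value 1/(2*beta):       {1.0 / (2.0 * beta):.4f}")
```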
Canonical
Because it is known that the most likely states are those that maximize the Boltzmann distribution, a good distribution, $p(\vec{r})$, to choose for the importance sampling is the Boltzmann distribution or canonical distribution. Let

$p(\vec{r}) = \frac{e^{-\beta E(\vec{r})}}{Z}$

be the distribution to use. Substituting it in the previous sum,

$\langle A \rangle \approx \frac{1}{N} \sum_{i=1}^{N} A(\vec{r}_i)$.

So, the procedure to obtain the mean value of a given variable with the canonical distribution is to use the Metropolis algorithm to generate states distributed according to $p(\vec{r})$ and compute the simple average of A over those states.
One important issue must be considered when using the Metropolis algorithm with the canonical distribution: when performing a given measure, i.e. a realization of A, one must ensure that that realization is not correlated with the previous state of the system (otherwise the states are not being "randomly" generated). On systems with relevant energy gaps, this is the major drawback of the use of the canonical distribution, because the time needed for the system to de-correlate from the previous state can tend to infinity.
Multi-canonical
As stated before, the canonical approach has a major drawback, which becomes relevant in most of the systems that use Monte Carlo integration. For those systems with "rough energy landscapes", the multicanonical approach can be used.
The multicanonical approach uses a different choice for importance sampling:

$p(\vec{r}) \propto \frac{1}{\Omega(E(\vec{r}))}$,

where $\Omega(E)$ is the density of states of the system. The major advantage of this choice is that the energy histogram is flat, i.e. the generated states are equally distributed in energy. This means that, when using the Metropolis algorithm, the simulation does not see the "rough energy landscape", because every energy is treated equally.
The major drawback of this choice is the fact that, on most systems, $\Omega(E)$ is unknown. To overcome this, the Wang and Landau algorithm is normally used to obtain the DOS during the simulation. Note that after the DOS is known, the mean values of every variable can be calculated for every temperature, since the generation of states does not depend on temperature.
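A minimal sketch of the Wang and Landau procedure mentioned above, written for the two-dimensional Ising model used in the Implementation section below (illustrative code with arbitrary parameter values; the flatness criterion and the final modification factor are deliberately loose so the toy example finishes quickly):

```python
import numpy as np

def wang_landau_ising(L=4, flatness=0.8, ln_f_min=1e-3, seed=1):
    """Wang-Landau estimate of ln g(E) for a 2D ferromagnetic Ising model (J = 1)."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))

    def total_energy(s):
        # sum over nearest-neighbor bonds, each counted once, periodic boundaries
        return -int(np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1))))

    E = total_energy(spins)
    ln_g, hist = {}, {}        # ln(density of states) and visit histogram, keyed by E
    ln_f = 1.0                 # modification factor, halved until it is "small enough"

    while ln_f > ln_f_min:
        for _ in range(2_000 * L * L):
            i, j = rng.integers(L, size=2)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = int(2 * spins[i, j] * nn)
            E_new = E + dE
            # accept with probability g(E)/g(E_new), which flattens the energy histogram
            if rng.random() < np.exp(ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0)):
                spins[i, j] *= -1
                E = E_new
            ln_g[E] = ln_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        counts = np.array(list(hist.values()))
        if counts.min() > flatness * counts.mean():   # histogram is "flat enough"
            hist = {k: 0 for k in hist}
            ln_f /= 2.0                               # refine the modification factor

    return ln_g

ln_g = wang_landau_ising()
print(sorted(ln_g))            # visited energies of the 4x4 lattice
```

Once ln g(E) has been estimated, thermal averages at any temperature follow by reweighting with the Boltzmann factor, which is the point made in the paragraph above.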
Implementation
In this section, the implementation will focus on the Ising model. Consider a two-dimensional spin network, with L spins (lattice sites) on each side. There are naturally $N = L^2$ spins, and so the phase space is discrete and is characterized by the N spins $\sigma_i = \pm 1$, where $\sigma_i$ is the spin of each lattice site. The system's energy is given by $E(\vec{\sigma}) = -\sum_{\langle i,j \rangle} J_{ij}\, \sigma_i \sigma_j$, where viz(i) denotes the set of first-neighborhood spins of i, the sum runs over such nearest-neighbor pairs (each counted once), and J is the interaction matrix (for a ferromagnetic Ising model, J is the identity matrix). The problem is thus stated.
In this example, the objective is to obtain $\langle M \rangle$ and $\langle M^2 \rangle$ (for instance, to obtain the magnetic susceptibility of the system), since it is straightforward to generalize to other observables. According to the definition, $M(\vec{\sigma}) = \sum_i \sigma_i$.
Canonical
First, the system must be initialized: let $\beta = 1/k_B T$ be the system's inverse temperature and initialize the system with an initial state (which can be anything, since the final result should not depend on it).
With the canonical choice, the Metropolis method must be employed. Because there is no single right way of choosing which state is to be picked, one can particularize and choose to try to flip one spin at a time. This choice is usually called single spin flip. The following steps are to be made to perform a single measurement.
step 1: generate a state that follows the $p(\vec{\sigma}) \propto e^{-\beta E(\vec{\sigma})}$ distribution:
step 1.1: Perform TT times the following iteration:
step 1.1.1: pick a lattice site at random (with probability 1/N), which will be called i, with spin $\sigma_i$.
step 1.1.2: pick a random number $r \in [0, 1]$.
step 1.1.3: calculate the energy change of trying to flip the spin i, $\Delta E = 2\sigma_i \sum_{j \in \mathrm{viz}(i)} J_{ij}\, \sigma_j$,
and its magnetization change, $\Delta M = -2\sigma_i$.
step 1.1.4: if $r < e^{-\beta \Delta E}$, flip the spin ($\sigma_i \to -\sigma_i$); otherwise, don't.
step 1.1.5: update the several macroscopic variables in case the spin flipped: $E \to E + \Delta E$, $M \to M + \Delta M$.
After TT iterations, the system is considered to be uncorrelated from its previous state, which means that, at this moment, the probability of the system being in a given state follows the Boltzmann distribution, which is the objective proposed by this method.
step 2: perform the measurement:
step 2.1: save, in a histogram, the values of M and M².
As a final note, one should note that TT is not easy to estimate, because it is not easy to say when the system has de-correlated from the previous state. To surpass this point, one generally does not use a fixed TT, but instead takes TT to be a tunneling time. One tunneling time is defined as the number of iterations of step 1 the system needs to make to go from the minimum of its energy to the maximum of its energy and return.
A major drawback of this method with the single-spin-flip choice in systems like the Ising model is that the tunneling time scales as a power law of the system size with an exponent z greater than 0.5, a phenomenon known as critical slowing down.
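A minimal sketch of the procedure above for a ferromagnetic 2D Ising model with J = 1, periodic boundary conditions and the single-spin-flip Metropolis rule (illustrative code with arbitrary parameter values; for simplicity it measures after every sweep rather than estimating a tunneling time TT):

```python
import numpy as np

def metropolis_ising(L=16, beta=0.4, n_sweeps=2_000, seed=0):
    """Single-spin-flip Metropolis sampling of a 2D ferromagnetic Ising model (J = 1)."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))       # arbitrary initial state
    m_samples, m2_samples = [], []

    for _ in range(n_sweeps):
        for _ in range(L * L):                     # one sweep of N attempted flips
            i, j = rng.integers(L, size=2)         # step 1.1.1: random lattice site
            r = rng.random()                       # step 1.1.2: random number in [0, 1)
            # step 1.1.3: energy change of flipping spin (i, j); nearest neighbors,
            # periodic boundary conditions, J = 1
            nn_sum = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nn_sum
            # step 1.1.4: Metropolis acceptance (r < 1, so dE <= 0 is always accepted)
            if r < np.exp(-beta * dE):
                spins[i, j] *= -1                  # step 1.1.5: the state is updated
        M = spins.sum()                            # step 2: measurement
        m_samples.append(M)
        m2_samples.append(M * M)

    return np.mean(m_samples), np.mean(m2_samples)

mean_M, mean_M2 = metropolis_ising()
print(f"<M> = {mean_M:.1f}   <M^2> = {mean_M2:.1f}")
```

In practice one would also discard an initial equilibration period and estimate the de-correlation (tunneling) time before trusting the averages, as discussed above.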
Applicability
The method thus neglects dynamics, which can be a major drawback, or a great advantage. Indeed, the method can only be applied to static quantities, but the freedom to choose moves makes the method very flexible. An additional advantage is that some systems, such as the Ising model, lack a dynamical description and are only defined by an energy prescription; for these the Monte Carlo approach is the only one feasible.
Generalizations
The great success of this method in statistical mechanics has led to various generalizations such as the method of simulated annealing for optimization, in which a fictitious temperature is introduced and then gradually lowered.
See also
Monte Carlo integration
Metropolis algorithm
Importance sampling
Quantum Monte Carlo
Monte Carlo molecular modeling
References
Computational chemistry
Theoretical chemistry
Computational physics | Monte Carlo method in statistical mechanics | Physics,Chemistry | 1,733 |
48,664,163 | https://en.wikipedia.org/wiki/Flora%20Antarctica | The Flora Antarctica, or formally and correctly The Botany of the Antarctic Voyage of H.M. Discovery Ships Erebus and Terror in the years 1839–1843, under the Command of Captain Sir James Clark Ross, is a description of the many plants discovered on the Ross expedition, which visited islands off the coast of the Antarctic continent, with a summary of the expedition itself, written by the British botanist Joseph Dalton Hooker and published in parts between 1844 and 1859 by Reeve Brothers in London. Hooker sailed on HMS Erebus as assistant surgeon.
The botanical findings of the Ross expedition were published in four parts, the last two in two volumes each, making six volumes in all:
Part I Botany of Lord Auckland's Group and Campbell's Island (1844–1845)
Part II Botany of Fuegia, the Falklands, Kerguelen's Land, Etc. (1845–1847)
Part III Flora Novae-Zelandiae (1851–1853) (2 volumes)
Part IV Flora Tasmaniae (1853–1859) (2 volumes)
All were "splendidly" illustrated by Walter Hood Fitch, who prepared thousands of detailed botanical figures on 530 colour plates. The greater part of the plant specimens collected during this expedition are now part of London's Kew Herbarium.
The Flora of Tasmania contains an introductory essay on biogeography written from a Darwinian point of view, making the book the first case study for the theory of evolution by natural selection. This has been seen as the foundation of evolutionary biogeography. Hooker gave Darwin a copy of the work, which proposed that plant groups on different landmasses had common ancestors, spreading via long-vanished land bridges. Darwin doubted the explanation but agreed that geographical distribution would be vital to understanding the origin of species. In the 21st century the book is still treated as a major reference work.
Context
Ross and earlier expeditions
The British government fitted out an expedition led by the explorer and naval officer James Clark Ross to investigate magnetism and marine geography in high southern latitudes, which sailed with two ships, HMS Terror and HMS Erebus on 29 September 1839 from Chatham.
The ships docked at Madeira, Tenerife, the Cape Verde archipelago, Saint Peter and Saint Paul Archipelago, Trinidad and arrived at the Cape of Good Hope on 4 April 1840. On 21 April the giant kelp Macrocystis pyrifera was seen off Marion Island, but no landfall could be made there or on the Crozet Islands due to the harsh winds. On 12 May the ships anchored at Christmas Harbour for two and a half months, during which all plants previously encountered by James Cook on the Kerguelen Islands were collected. On 20 July they sailed again to arrive on 16 August at the River Derwent, to remain in Tasmania until 12 November. A week later the flotilla stopped at Lord Auckland's Islands and Campbell's Island for the spring months.
Large floating forests of Macrocystis and Durvillaea were found until the ships ran into the icebergs at latitude 61° S. Pack-ice was met at 68° S and longitude 175°. During this part of the voyage Victoria Land, Mount Erebus and Mount Terror were discovered. After returning to Tasmania for three months, the flotilla went via Sydney to the Bay of Islands, and stayed for three months in New Zealand to collect plants.
From 6 April 1842 a long stay in the Falklands began, where the flora was investigated to supplement the work of the French explorer Admiral Jules Dumont d'Urville, who had sailed to the Antarctic and the Pacific between 1837 and 1840, and of the crew of the Uranie, who had visited the South Atlantic and the South Pacific between 1817 and 1820. In the Flora Antarctica, Hooker praises the work of the English botanist Sir Joseph Banks and his Swedish assistant, Daniel Solander, on Captain Cook's first voyage in 1769. Hooker also mentions Cook's second voyage and the explorations of the French survey ship Coquille, on which D'Urville had served as a young officer. When visiting the Hermite Islands, seedlings of the deciduous Nothofagus antarctica and the evergreen Nothofagus betuloides were collected from this southernmost location of any tree. These were planted on the Falklands, and some were later brought to Kew. On Cockburn's Island twenty cryptogam species were found. The ships returned to the Cape of Good Hope on 4 April 1843. At the end of the journey specimens of some fifteen hundred plant species had been collected and preserved.
Few earlier botanical descriptions of the region had been written, and little or no plant collecting had been attempted other than on the coasts before 1820. The first flora for New Zealand was Achille Richard's 1832 Essai d'une flore de la Nouvelle-Zélande, based on d'Urville's work and such earlier data as existed. This was followed by Allan Cunningham's 1839 Florae insularum Novae-Zelandiae praecursor.
Joseph Dalton Hooker
Joseph Dalton Hooker was a British botanist. His father, William Jackson Hooker, was the director of the Royal Botanic Gardens, Kew, the United Kingdom's centre for the study of plant species. The voyage to the Antarctic on the Ross expedition, when he was 23 years old, was his first; formally, he sailed on HMS Erebus as assistant surgeon. Charles Darwin wrote to Hooker in November 1843, urging him to write "some general sketch of the Flora" of the Antarctic, complete with "comparative remarks on the species allied to the European species". Hooker subsequently made voyages to regions around the world including the Himalayas and India in 1847–1851, Palestine in 1860, Morocco in 1871, and the Western United States in 1877, collecting plants and writing monographs on his findings in each case. These helped him to build a high scientific reputation, and in 1855 he became Assistant-Director of the Royal Botanic Gardens, Kew; he became full Director in 1865, remaining so for 20 years.
Walter Hood Fitch
Hooker was ably assisted by the illustrator Walter Hood Fitch, who "splendidly" prepared the many colour illustrations required for the Flora. William Hooker had encouraged Fitch to move into botanical illustration; from 1834, Fitch was the sole artist for Curtis's Botanical Magazine. In 1841, when William Hooker became Director at Kew, Fitch became Kew's sole artist for all its publications, making the chromolithographs by drawing directly onto the lithographic stone; Hooker paid him personally.
Monograph
Publication history
The four parts of the Flora Antarctica total 6 volumes, describe about 3000 species, and contain 530 plates which depict 1095 of the species. They were published by Reeve Brothers in London between 1844 and 1859. The work was reprinted (in English) by the German publisher J. Cramer in Weinheim in 1963.
Approach
The work is prefaced with a "Summary of the Voyage". Each volume begins with a brief general overview of the flora of its region. The body of the work consists of a systematic list of the plant families found in that region, such as Ranunculaceae. Each such family receives a brief overall description, followed by a brief account of the family's habitat in the region. The description of each family and species is in Latin, while the discussion is in English. Each species is illustrated in the colour plates, the details indexed at the end of the text on that species. Thus for instance Ranunculus pinguis is described as ('unstalked, fleshy, hairy, ...'); the α and β varieties living in Lord Auckland's group of islands, in "boggy places on the hills, alt. 1000 feet...". The Latin is tersely botanical, confining itself to anatomical features; the English discussion is more wide-ranging, with comments such as "A very handsome species, and quite distinct from any with which I am acquainted." The flowering plants are described first, followed by the "lower plants" and ending with the lichens.
Contents
Botany of Lord Auckland's Group and Campbell's Island
Part I, published between 1844 and 1845, covers the Flora of Lord Auckland and Campbell's Islands. It has 208 pages, 370 species, 80 plates and a map, and illustrates 150 species. According to Hooker, the flora of the islands south of Tasmania and New Zealand is related to that of New Zealand and bears no likeness to that of Australia. On the Auckland Islands wood grows near the sea and consists of the tree Metrosideros umbellata intermixed with woody Dracophyllum, Coprosma, Hebe (assigned to Veronica by Hooker) and Panax. These are undergrown by many ferns. Higher up grow alpines. On the Campbell Islands brushwood is limited to narrow bays which are relatively sheltered. These islands are steeper and rockier and bear less vegetation, primarily grasses.
Hooker was the first to study the sub-Antarctic Campbell Island and the Auckland group.
Botany of Fuegia, the Falklands, Kerguelen's Land, Etc.
Part II, published between 1845 and 1847, covers the Botany of Fuegia, the Falklands, Kerguelen's Land, Etc. It has 366 pages, 1000 species, 120 plates, and illustrates 220 species.
According to Hooker, the flora of New Zealand's Antarctic islands is so different from that of the remainder of the territories visited during the voyage, that it merits a separate description. An exemplary difference is the dominance of Asteraceae in New Zealand's islands, and absence of representatives of the Rubiaceae, while the reverse is true for those two plant families on the other Antarctic archipelagos. So the Flora Antarctica describes in its second part the plants of Tierra del Fuego and the south-western coast of Patagonia, the Falkland Islands, Palmer's Land, South Shetlands, South Georgia, Tristan da Cunha, and Kerguelen's Land.
Flora Novae-Zelandiae
Part III, the Botany of New Zealand or Flora Novae-Zelandiae, was published in two volumes between 1851 and 1853.
Volume 1 Phanerogams (355 pages, 730 species, 70 plates, 83 species depicted)
Volume 2 Cryptogams (378 pages, 1037 species, 60 plates, 230 species depicted)
The book has an introductory essay which begins by summarizing the history of botanical research of the islands. Hooker singles out the work of Sir Joseph Banks and Daniel Solander on Captain Cook's first voyage in 1769, also mentioning Cook's second voyage and, 20 years later, the explorations of the French survey ship Coquille and the plant collector D'Urville. Hooker notes that the fungi of the islands remained largely unknown. The next chapter of the essay, on plant biogeography and evolution, is entitled "On the limits of species; their dispersion and variation"; Hooker discusses how plant species may have originated, and notes how much more they vary than was often supposed. The third chapter of the essay considers the "affinities" (relationships) of the New Zealand flora to other floras. The flora proper begins with a short introduction explaining the book's approach; as with the other volumes, the bulk of the text is a systematic account of the families and species found by the expedition.
The Flora "largely completed" the "primary phase of botanical survey in the [New Zealand] region".
Flora Tasmaniae
Part IV, the Botany of Tasmania or Flora Tasmaniae was published in two volumes between 1853 and 1859.
Volume 1 Dicotyledones (550 pages, 758 species, 100 plates, 138 species depicted)
Volume 2 Monocotyledones and Acotyledones (422 pages, 1445 species, 100 plates, 274 species depicted)
Hooker dedicated this Part to the local Tasmanian naturalists Ronald Campbell Gunn and William Archer, noting that "This Flora of Tasmania ... owes so much to their indefatigable exertions". Although the book is sometimes stated to have been published in 1859, the dedication is dated January 1860. It made use of plants collected by the local naturalist Robert Lawrence as well as Gunn and Archer.
The book begins with an "Introductory Essay" on biogeography. It is followed by a "Key to the Natural Orders of Tasmanian Flowering Plants" and a more detailed key to the genera. The Flora proper begins with the first order, the Ranunculaceae.
Flora Tasmaniae was "the first published case study supporting Charles Darwin’s theory of natural selection". It contained a "milestone essay on biogeography", "one of the first major public endorsements of the theory [of evolution by natural selection]". Hooker gradually changed his mind on evolution as he wrote up his findings from the Ross expedition. While he asserted that "my own views on the subjects of the variability of existing species" remain "unaltered from those which I maintained in the 'Flora of New Zealand'", the Flora Tasmaniae is written from a Darwinian perspective that effectively assumes natural selection, or as Hooker named it, the "variation" theory, to be correct.
Gallery
Reception
Contemporary
The American botanist Asa Gray welcomed the publication of the first two parts of the Flora, describing it as an "elaborate and highly beautiful work,—second in importance and in perfection of illustration, to no other Flora which has appeared in our time".
The work's author, Hooker, gave Charles Darwin a copy of (a draft of) the Flora; Darwin thanked him, and agreed in November 1845 that the geographical distribution of organisms would be "the key which will unlock the mystery of species". To explain the presence of plant groups on the widely-separated landmasses of Australia, New Zealand, and southern South America, Hooker proposed that the groups indeed had common ancestors, and that the plants had spread across now-vanished land bridges. Darwin was sceptical of the explanation, preferring the hypothesis of long-distance seed dispersal. For this work, Hooker has been described as "the real founder of causal historical biogeography".
In 1868, the botanist Robert Oliver Cunningham described the Flora as "invaluable" for his study of the plants of "Fuegia" from the survey ship HMS Nassau.
Modern
Flora Antarctica remains important, and continues to be cited in modern botanical research. For example, in 2013 W. H. Walton in his Antarctica: Global Science from a Frozen Continent describes it as "a major reference to this day", encompassing as it does "all the plants he found both in the Antarctic and on the sub-Antarctic islands", surviving better than Ross's deep-sea soundings which were made with "inadequate equipment".
David Senchina notes that Hooker was the first botanist to set foot on Antarctica, in 1840; the first sighting of a plant on the continent was only a few years earlier, namely A. Young's observation of Deschampsia antarctica (Antarctic hair grass) in 1819, from HMS Andromache, and the first plant specimen from an Antarctic island had been collected by the American James Eights only in 1830. Senchina calls Hooker's work "monumental", and notes that it covers ecology, with discussion of rocks as sources of heat for plants, and wind as a means of dispersing seeds and spores, as well as "standard plant collection, description, and classification". He concludes that Hooker, in the book and in discussion with Darwin, initiated the study of Antarctic plant geography and ecology.
References
External links
Part 1 Botany of Lord Auckland's... on Archive.org
Part 2, Botany of Fuegia... on Archive.org
Part 2, Botany of Fuegia... on Google Books (free)
Colour Plates on Archive.org
Volumes at Biodiversity Heritage Library
Illustrations from 7 volumes: 1, 1(1), 1(2), 2(1), 2(2), 3(1), 3(2)
Antarctica
Flora of the Antarctic
Books about Antarctica | Flora Antarctica | Biology | 3,302 |
55,464,594 | https://en.wikipedia.org/wiki/ABACABA%20pattern | The ABACABA pattern is a recursive fractal pattern that shows up in many places in the real world (such as in geometry, art, music, poetry, number systems, literature and higher dimensions). Patterns often show a DABACABA type subset. AA, ABBA, and ABAABA type forms are also considered.
Generating the pattern
In order to generate the next sequence, first take the previous pattern, add the next letter from the alphabet, and then repeat the previous pattern. The first few steps are: A; ABA; ABACABA; ABACABADABACABA.
ABACABA is a "quickly growing word", often described as chiastic or "symmetrically organized around a central axis" (see: Chiastic structure and Χ). The number of members in each iteration is 2^n - 1, the Mersenne numbers (1, 3, 7, 15, 31, ...).
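A minimal recursive sketch of this construction (illustrative code; the function name `abacaba` is an arbitrary choice):

```python
import string

def abacaba(n):
    """Return the n-th ABACABA word (n = 1 gives 'A'):
    previous word + next letter of the alphabet + previous word."""
    if n == 1:
        return "A"
    prev = abacaba(n - 1)
    return prev + string.ascii_uppercase[n - 1] + prev

for n in range(1, 5):
    word = abacaba(n)
    print(n, word, len(word))   # lengths 1, 3, 7, 15, ... (the Mersenne numbers)
```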
Gallery
See also
Arch form
Farey sequence
Rondo
Sesquipower
Notes
References
External links
Naylor, Mike: abacaba.org
Fractals | ABACABA pattern | Mathematics | 196 |
4,636,772 | https://en.wikipedia.org/wiki/Matrix%20grammar | A matrix grammar is a formal grammar in which instead of single productions, productions are grouped together into finite sequences. A production cannot be applied separately, it must be applied in sequence. In the application of such a sequence of productions, the rewriting is done in accordance to each production in sequence, the first one, second one etc. till the last production has been used for rewriting. The sequences are referred to as matrices.
Matrix grammar is an extension of context-free grammar, and one instance of a controlled grammar.
Formal definition
A matrix grammar is an ordered quadruple
G = (V_N, V_T, X_0, M)
where
V_N is a finite set of non-terminals,
V_T is a finite set of terminals,
X_0 is a special element of V_N, viz. the starting symbol,
M is a finite set of non-empty sequences whose elements are ordered pairs (P, Q), where P ∈ W(V) V_N W(V), Q ∈ W(V), V = V_N ∪ V_T, and W(V) denotes the set of all words over V.
The pairs are called productions, written as P → Q. The sequences are called matrices and can be written as m = [P_1 → Q_1, ..., P_r → Q_r], r ≥ 1.
Let F be the set of all productions appearing in the matrices m of a matrix grammar G. Then the matrix grammar G is of type i, length-increasing, linear, λ-free, context-free or context-sensitive if and only if the grammar G_1 = (V_N, V_T, X_0, F) has the corresponding property.
For a matrix grammar G, a binary relation ⇒_G is defined, also represented as ⇒. For any P, Q ∈ W(V), P ⇒ Q holds if and only if there exists an integer r ≥ 1 such that the words
α_1, ..., α_{r+1}, P_1, ..., P_r, Q_1, ..., Q_r, R_1, ..., R_r, R'_1, ..., R'_r
over V exist and
α_1 = P and α_{r+1} = Q,
[P_1 → Q_1, ..., P_r → Q_r] is one of the matrices of G,
and α_i = R_i P_i R'_i and α_{i+1} = R_i Q_i R'_i for all i such that 1 ≤ i ≤ r.
Let ⇒* be the reflexive transitive closure of the relation ⇒. Then, the language generated by the matrix grammar G is given by
L(G) = { w ∈ W(V_T) : X_0 ⇒* w }.
Examples
Consider the matrix grammar
where is a collection containing the following matrices:
These matrices, which contain only context-free rules, generate the context-sensitive language
The associate word of
is
and
.
This example can be found on pages 8 and 9 of in the following form:
Consider the matrix grammar
where is a collection containing the following matrices:
These matrices, which contain only context-regular rules, generate the context-sensitive language
The associate word of
is
and
.
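As an illustration of how matrices are applied as atomic sequences of rewrites, the sketch below uses a standard textbook matrix grammar (an assumed example, not necessarily identical to the ones above) whose matrices are [S → XYZ], [X → aX, Y → bY, Z → cZ] and [X → a, Y → b, Z → c]; using only context-free rules it generates the context-sensitive language {aⁿbⁿcⁿ : n ≥ 1}.

```python
# A matrix grammar for { a^n b^n c^n : n >= 1 } (illustrative, assumed example).
# Each matrix is a sequence of productions that must be applied in order.
MATRICES = [
    [("S", "XYZ")],                              # start matrix
    [("X", "aX"), ("Y", "bY"), ("Z", "cZ")],     # growth matrix
    [("X", "a"), ("Y", "b"), ("Z", "c")],        # terminating matrix
]

def apply_matrix(word, matrix):
    """Apply every production of the matrix in sequence; return None if any fails."""
    for lhs, rhs in matrix:
        pos = word.find(lhs)
        if pos == -1:
            return None                          # the matrix is not applicable as a whole
        word = word[:pos] + rhs + word[pos + len(lhs):]
    return word

def derive(n):
    """Derive a^n b^n c^n by applying the growth matrix n - 1 times."""
    word = apply_matrix("S", MATRICES[0])
    for _ in range(n - 1):
        word = apply_matrix(word, MATRICES[1])
    return apply_matrix(word, MATRICES[2])

for n in range(1, 4):
    print(derive(n))   # abc, aabbcc, aaabbbccc
```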
Properties
Let MAT^λ be the class of languages produced by matrix grammars, and MAT the class of languages produced by λ-free matrix grammars.
Trivially, MAT is included in MAT^λ.
All context-free languages are in MAT, and all languages in MAT^λ are recursively enumerable.
MAT is closed under union, concatenation, intersection with regular languages and permutation.
All languages in MAT can be produced by a context-sensitive grammar.
There exists a context-sensitive language which does not belong to MAT^λ.
Each language produced by a matrix grammar with only one terminal symbol is regular.
Open problems
It is not known whether there exist languages in MAT^λ which are not in MAT, and it is not known whether MAT^λ contains languages which are not context-sensitive.
References
Footnotes
Ábrahám, S. Some questions of language theory. International Conference on Computational Linguistic, 1965. pp 1–11.
Gheorghe Păun, Membrane Computing: An Introduction, Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2002. pp 30–32
Formal languages | Matrix grammar | Mathematics | 623 |
24,008,138 | https://en.wikipedia.org/wiki/C15H22O3 | {{DISPLAYTITLE:C15H22O3}}
The molecular formula C15H22O3 (molar mass: 255.33 g/mol) may refer to:
Gemfibrozil, an oral drug used to lower lipid levels
Nardosinone, a sesquiterpene
Octyl salicylate, an ingredient in sunscreens
Sterpuric acid, a sesquiterpene
Xanthoxin, a carotenoid
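The molar mass quoted above can be checked with a few lines of arithmetic (conventional IUPAC atomic weights; an illustrative sketch):

```python
# Quick arithmetic check of the molar mass quoted above for C15H22O3.
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "O": 15.9994}
FORMULA = {"C": 15, "H": 22, "O": 3}

molar_mass = sum(ATOMIC_WEIGHT[el] * count for el, count in FORMULA.items())
print(f"Molar mass of C15H22O3: {molar_mass:.2f} g/mol")   # ~250.33 g/mol
```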
Molecular formulas | C15H22O3 | Physics,Chemistry | 107 |
50,880,861 | https://en.wikipedia.org/wiki/Roofline%20model | The roofline model is an intuitive visual performance model used to provide performance estimates of a given compute kernel or application running on multi-core, many-core, or accelerator processor architectures, by showing inherent hardware limitations, and potential benefit and priority of optimizations. By combining locality, bandwidth, and different parallelization paradigms into a single performance figure, the model can be an effective alternative to assess the quality of attained performance instead of using simple percent-of-peak estimates, as it provides insights on both the implementation and inherent performance limitations.
The most basic roofline model can be visualized by plotting floating-point performance as a function of machine peak performance, machine peak bandwidth, and arithmetic intensity. The resultant curve is effectively a performance bound under which kernel or application performance exists, and includes two platform-specific performance ceilings: a ceiling derived from the memory bandwidth and one derived from the processor's peak performance (see figure on the right).
Related terms and performance metrics
Work
The work W denotes the number of operations performed by a given kernel or application. This metric may refer to any type of operation, from the number of array points updated, to the number of integer operations, to the number of floating point operations (FLOPs), and the choice of one or another is driven by convenience. In the majority of cases, however, the work is expressed as FLOPs.
Note that the work is a property of the given kernel or application and thus depends only partially on the platform characteristics.
Memory traffic
The memory traffic Q denotes the number of bytes of memory transfers incurred during the execution of the kernel or application. In contrast to W, Q is heavily dependent on the properties of the chosen platform, such as, for instance, the structure of the cache hierarchy.
Arithmetic intensity
The arithmetic intensity I, also referred to as operational intensity, is the ratio of the work W to the memory traffic Q, I = W/Q, and denotes the number of operations per byte of memory traffic. When the work is expressed as FLOPs, the resulting arithmetic intensity will be the ratio of floating point operations to total data movement (FLOPs/byte).
Naive Roofline
The naïve roofline is obtained by applying simple bound and bottleneck analysis. In this formulation of the roofline model, there are only two parameters, the peak performance and the peak bandwidth of the specific architecture, and one variable, the arithmetic intensity. The peak performance, in general expressed as GFLOPS, can usually be derived from benchmarking, while the peak bandwidth, which refers specifically to peak DRAM bandwidth, is instead obtained from architectural manuals. The resulting plot, in general with both axes in logarithmic scale, is then derived from the following formula:

P = min(π, β × I)

where P is the attainable performance, π is the peak performance, β is the peak bandwidth and I is the arithmetic intensity. The point at which the performance saturates at the peak performance level π, that is, where the diagonal and horizontal roofs meet, is defined as the ridge point. The ridge point offers insight on the machine's overall performance, by providing the minimum arithmetic intensity required to achieve peak performance, and by suggesting at a glance the amount of effort required by the programmer to achieve peak performance.
A given kernel or application is then characterized by a point given by its arithmetic intensity I (on the x-axis). The attainable performance P is computed by drawing a vertical line that hits the roofline curve. The kernel or application is said to be memory-bound if β × I < π, that is, if its arithmetic intensity lies to the left of the ridge point. Conversely, if β × I ≥ π, the computation is said to be compute-bound.
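A minimal numerical sketch of this bound (the machine numbers below are hypothetical placeholder values chosen purely for illustration, not measurements of any specific processor):

```python
def roofline(peak_flops, peak_bandwidth, intensity):
    """Naive roofline bound: attainable performance = min(pi, beta * I)."""
    return min(peak_flops, peak_bandwidth * intensity)

# Hypothetical machine: 500 GFLOP/s peak compute, 50 GB/s peak DRAM bandwidth.
peak_flops = 500.0                      # GFLOP/s  (pi)
peak_bw = 50.0                          # GB/s     (beta)
ridge_point = peak_flops / peak_bw      # FLOPs/byte needed to reach peak

for intensity in (0.5, 2.0, ridge_point, 20.0):   # FLOPs per byte (I)
    perf = roofline(peak_flops, peak_bw, intensity)
    regime = "memory-bound" if intensity < ridge_point else "compute-bound"
    print(f"I = {intensity:5.1f} FLOP/B -> {perf:6.1f} GFLOP/s ({regime})")
```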
Adding ceilings to the model
The naive roofline provides just an upper bound (the theoretical maximum) on performance. Although it can still give useful insights on the attainable performance, it does not provide a complete picture of what is actually limiting it. If, for instance, the considered kernel or application performs far below the roofline, it might be useful to capture other performance ceilings, besides peak bandwidth and peak performance, to better guide the programmer on which optimization to implement, or even to assess the suitability of the architecture with respect to the analyzed kernel or application. The added ceilings then impose a limit on the attainable performance that is below the actual roofline, and indicate that the kernel or application cannot break through any one of these ceilings without first performing the associated optimization.
The roofline plot can be expanded upon three different aspects: communication, adding the bandwidth ceilings; computation, adding the so-called in-core ceilings; and locality, adding the locality walls.
Bandwidth ceilings
The bandwidth ceilings are bandwidth diagonals placed below the idealized peak bandwidth diagonal. Their existence is due to the lack of some kind of memory related architectural optimization, such as cache coherence, or software optimization, such as poor exposure of concurrency (that in turn limit bandwidth usage).
In-core ceilings
The in-core ceilings are roofline-like curves beneath the actual roofline that may be present due to the lack of some form of parallelism. These ceilings effectively limit how high performance can reach. Performance cannot exceed an in-core ceiling until the underlying lack of parallelism is expressed and exploited. The ceilings can also be derived from architectural optimization manuals, as well as from benchmarks.
Locality walls
If the ideal assumption that arithmetic intensity is solely a function of the kernel is removed, and the cache topology - and therefore cache misses - is taken into account, the arithmetic intensity clearly becomes dependent on a combination of kernel and architecture. This may result in a degradation in performance depending on the balance between the resultant arithmetic intensity and the ridge point. Unlike "proper" ceilings, the resulting lines on the roofline plot are vertical barriers through which the arithmetic intensity cannot pass without optimization. For this reason, they are referred to as locality walls or arithmetic intensity walls.
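Extending the sketch from the naive-roofline section, additional ceilings simply become extra candidates inside the same min(); the specific ceiling values below (a no-vectorization compute ceiling and a reduced bandwidth ceiling) are illustrative assumptions, not measurements:

```python
def attainable(intensity, compute_ceiling, bandwidth_ceiling):
    """Performance bound under one in-core ceiling and one bandwidth ceiling."""
    return min(compute_ceiling, bandwidth_ceiling * intensity)

# Hypothetical ceilings sitting below the 500 GFLOP/s / 50 GB/s roofline above:
no_simd_flops = 125.0   # GFLOP/s attainable without vectorization (in-core ceiling)
no_numa_bw = 30.0       # GB/s attainable without NUMA-aware allocation (bandwidth ceiling)

intensity = 4.0         # FLOPs per byte for the kernel under study
print("bound before optimizations:", attainable(intensity, no_simd_flops, no_numa_bw))
print("bound after vectorizing:   ", attainable(intensity, 500.0, no_numa_bw))
print("bound after NUMA tuning:   ", attainable(intensity, 500.0, 50.0))
```

For this particular intensity the bandwidth ceiling binds first, which illustrates how the ceilings suggest which optimization to prioritize.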
Extension of the model
Since its introduction, the model has been further extended to account for a broader set of metrics and hardware-related bottlenecks. Already available in literature there are extensions that take into account the impact of NUMA organization of memory, of out-of-order execution, of memory latencies, and to model at a finer grain the cache hierarchy in order to better understand what is actually limiting performance and drive the optimization process.
Also, the model has been extended to better suit specific architectures and the related characteristics, such as FPGAs.
See also
Software performance testing
Benchmark (computing)
References
External links
The Roofline Model: A Pedagogical Tool for Auto-tuning Kernels on Multicore Architectures
Applying the Roofline model
Extending the Roofline Model: Bottleneck Analysis with Microarchitectural Constraints
Roofline Model Toolkit
Roofline Model Toolkit: A Practical Tool for Architectural and Program Analysis - publication related to the tool.
Perfplot
Extended Roofline Model
Intel Advisor - Roofline model automation
Youtube Video on how to use Intel Advisor Roofline
Software testing
Software optimization | Roofline model | Engineering | 1,382 |
56,694,218 | https://en.wikipedia.org/wiki/Yalo%20%28company%29 | Yalo (formerly Yalochat) is an artificial intelligence platform specializing in emerging markets. Its headquerters were formerly in San Francisco with offices in Mexico City, Mumbai, Shanghai, Bogotá, and São Paulo. It subsequently relocated to Mexico City.
Overview
Yalo enables companies to interact with their customers in conversational commerce on messaging apps including WhatsApp, Facebook Messenger, and WeChat.
Customers include Walmart, Nike, Volkswagen, Aeroméxico, appliance and electronics retailer Elektra, Mexico's largest department store, Coppel, and Mexico's largest theme resort Xcaret.
The company was founded in Mexico by CEO Javier Mata and was formerly based in San Francisco, but subsequently moved to Mexico. As of 2021, its board of directors includes Mark Fernandes and Rashmi Gopinath.
In February 2018, the company announced the opening of its office in Shanghai, China, in alliance with venture capitalist Michael Kuan's company Strategic Impact Group.
In May 2021, Yalo raised $50 million in new funding led by B Capital, for a total of $75 million in total funding.
Platforms / Apps
Facebook Messenger
Yalochat introduced a variety of services on Facebook Messenger in 2016, shortly after Facebook launched its chatbot platform.
In April 2017, it announced that its chatbot with Aeroméxico had added an artificial intelligence component to the Facebook Messenger bot.
WhatsApp
In October 2017, Aeroméxico announced that together with Yalochat it would launch services on the new enterprise platform of WhatsApp, the world's most popular messaging platform, and that it would be the first airline in The Americas to do so. Services available via WhatsApp include shopping for and purchasing flights, making changes, checking in and obtaining a boarding pass, and tracking a flight. It includes both an artificial intelligence-powered chatbot, and chat with the airline's human agents.
WeChat
In February 2018 Yalochat announced the opening of its office in Shanghai, China and that it had begun offering services on WeChat, China's most popular messaging app.
Line
In an interview, Yalochat CEO Javier Mata said that the company was planning to offer services on Line messenger, popular in Japan, Korea and Thailand.
References
Software companies based in California
Instant messaging
Software companies of the United States | Yalo (company) | Technology | 483 |
76,390,477 | https://en.wikipedia.org/wiki/Promethium%20iodate | Promethium iodate is an inorganic compound with the chemical formula Pm(IO3)3. It can be obtained by precipitation, by reacting a Pm3+ solution with potassium iodate, ammonium iodate, or a slight excess of iodic acid. Its hydrate, Pm(IO3)3·H2O, crystallizes in the P21 space group, with unit cell parameters a = 10.172 ± 0.013 Å, b = 6.700 ± 0.020 Å, c = 7.289 ± 0.024 Å, β = 113.1 ± 0.2°.
References
External reading
Promethium compounds
Iodates | Promethium iodate | Chemistry | 131 |
60,925 | https://en.wikipedia.org/wiki/Ship%20commissioning | Ship commissioning is the act or ceremony of placing a ship in active service and may be regarded as a particular application of the general concepts and practices of project commissioning. The term is most commonly applied to placing a warship in active duty with its country's military forces. The ceremonies involved are often rooted in centuries-old naval tradition.
Ship naming and launching endow a ship hull with her identity, but many milestones remain before it is completed and considered ready to be designated a commissioned ship. The engineering plant, weapon and electronic systems, galley, and other equipment required to transform the new hull into an operating and habitable warship are installed and tested. The prospective commanding officer, ship's officers, the petty officers, and seamen who will form the crew report for training and familiarization with their new ship.
Before commissioning, the new ship undergoes sea trials to identify any deficiencies needing correction. The preparation and readiness time between christening-launching and commissioning may be as much as three years for a nuclear-powered aircraft carrier to as brief as twenty days for a World War II landing ship. USS Monitor, of American Civil War fame, was commissioned less than three weeks after launch.
Pre-commissioning
Regardless of the type of ship in question, a vessel's journey towards commissioning in its nation's navy begins with a process known as sea trials. Sea trials usually take place some years after a vessel was laid down, and mark the interim step between the completion of a ship's construction and its official acceptance for service with its nation's navy.
Sea trials begin when the ship is floated out of its dry dock (or more rarely, moved by a vehicle to the sea from its construction hangar, as was the case with the submarine ), at which time the initial crew for a ship (usually a skeleton crew composed of yard workers and naval personnel; in the modern era of increasingly complex ships the crew will include technical representatives of the ship builder and major system subcontractors) will assume command of the vessel in question. The ship is then sailed in littoral waters to test the design, equipment, and other ship specific systems to ensure that they work properly and can handle the equipment that they will be using in the future. Tests during this phase can include launching missiles from missile magazines, firing the ship's gun (if so equipped), conducting basic flight tests with rotary and fixed-wing aircraft that will be assigned to the ship, and various tests of the electronic and propulsion equipment. Often during this phase of testing problems arise relating to the state of the equipment on the ship, which can require returning to the builder's shipyard to address those concerns.
In addition to problems with a ship's arms, armament, and equipment, the sea trial phase a ship undergoes prior to commissioning can identify issues with the ship's design that may need to be addressed before it can be accepted into service. During her sea trials in 1999, French naval officials determined that the flight deck of the aircraft carrier Charles de Gaulle was too short to safely operate the E-2C Hawkeye, resulting in her return to the builder's shipyard for enlargement.
After a ship has successfully cleared its sea trial period, it will officially be accepted into service with its nation's navy. At this point, the ship in question will undergo a process of degaussing and/or deperming, to reduce the ship's magnetic signature.
Commissioning
Once a ship's sea trials are successfully completed, plans for the commissioning ceremony will take shape. Depending on the naval traditions of the nation in question, the commissioning ceremony may be an elaborately planned event with guests, the ship's future crew, and other persons of interest in attendance, or the nation may forgo a ceremony and administratively place the ship in commission.
At a minimum, on the day on which the ship is to be commissioned the crew will report for duty aboard the ship and the commanding officer will read through the orders given for the ship and its personnel. If the ship's ceremony is a public affair, the Captain may make a speech to the audience, along with other VIPs as the ceremony dictates. Religious ceremonies, such as blessing the ship or the singing of traditional hymns or songs may also occur.
Once a ship has been commissioned its final step toward becoming an active unit of the navy it serves is to report to its home port and officially load or accept any remaining equipment (such as munitions).
Decommissioning
To decommission a ship is to terminate its career in service in the armed forces of a nation. Unlike wartime ship losses, in which a vessel lost to enemy action is said to be struck, decommissioning confers that the ship has reached the end of its usable life and is being retired from a country's navy. Depending on the naval traditions of the country, a ceremony commemorating the decommissioning of the ship may take place, or the vessel may be removed administratively with minimal fanfare. The term "paid off" is alternatively used in British and Commonwealth contexts, originating in the age-of-sail practice of ending an officer's commission and paying crew wages once the ship completed its voyage.
Ship decommissioning usually occurs some years after the ship was commissioned and is intended to serve as a means by which a vessel that has become too old or obsolete can be retired with honor from the country's armed forces. Decommissioning of the vessel may also occur due to treaty agreements (such as the Washington Naval Treaty) or for safety reasons (such as a ship's nuclear reactor and associated parts reaching the end of their service life), depending on the type of ship being decommissioned. In a limited number of cases a ship may be decommissioned if the vessel in question is judged to be damaged beyond economical repair. In rare cases, a navy or its associated country may recommission or leave in commission a ship that is old or obsolete rather than decommissioning it, owing to the historical significance of or public sentiment for the ship in question; this is the case with ships such as USS Constitution and HMS Victory. Vessels preserved in this manner typically do not relinquish their names to other, more modern ships that may be in the design, planning, or construction phase of the parent nation's navy.
Prior to its formal decommissioning, the ship in question will begin the process of decommissioning by going through a preliminary step called inactivation or deactivation. During this phase, a ship will report to a naval facility owned by the country to permit the ship's crew to offload, remove, and dismantle the ship's weapons, ammunition, electronics, and other material that is judged to be of further use to the nation. The removed material from a ship usually ends up either rotating to another ship in the class with similar weapons and/or capabilities, or in storage pending a decision on equipment's fate. During this time a ship's crew may be thinned out via transfers and reassignments as the ongoing removal of equipment renders certain personnel (such as missile technicians or gun crews) unable to perform their duties on the ship in question. Certain aspects of a ship's deactivation – such as the removal or deactivation of a ship's nuclear weapons capabilities – may be governed by international treaties, which can result in the presence of foreign officials authorized to inspect the weapon or weapon system to ensure compliance with treaties. Other aspects of a ship's decommissioning, such as the reprocessing of nuclear fuel from a ship utilizing a nuclear reactor or the removal of hazardous materials from a ship, are handled by the government according to the nation's domestic policies. When a ship finishes its inactivation, it is then formally decommissioned, after which the ship is usually towed to a storage facility.
In addition to the economic advantages of retiring a ship that has grown maintenance intensive or obsolete, the decommissioning frees up the name used by the ship, allowing vessels currently in the planning or building stages to inherit the name of that warship. Often, but not always, ships that are decommissioned spend the next few years in a reserve fleet before their ultimate fate is decided.
Practices by nation
United States Navy
Commissioning in the early United States Navy under sail was attended by no ceremony. An officer designated to command a new ship received orders similar to those issued to Captain Thomas Truxtun in 1798:
In Truxtun's time, the prospective commanding officer had responsibility for overseeing construction details, outfitting the ship, and recruiting his crew. When a captain determined that his new ship was ready to take to sea, he mustered the crew on deck, read his orders, broke the national ensign and distinctive commissioning pennant, and caused the watch to be set and the first entry to be made in the log. Thus, the ship was placed in commission.
Commissionings were not public affairs, and unlike christening-and-launching ceremonies, were not recorded by newspapers. The first specific reference to commissioning located in naval records is a letter of November 6, 1863, from Secretary of the Navy Gideon Welles to all navy yards and stations. The Secretary directed: "Hereafter the commandants of navy yards and stations will inform the Department, by special report of the date when each vessel preparing for sea service at their respective commands, is placed in commission."
Subsequently, various editions of Navy regulations mentioned the act of putting a ship in commission, but details of a commissioning ceremony were not prescribed. Through custom and usage, a fairly standard practice emerged, the essentials of which are outlined in current Navy regulations. Craft assigned to Naval Districts and shore bases for local use, such as harbor tugs and floating drydocks, are not usually placed in commission but are instead given an "in service" status. They do fly the national ensign, but not a commissioning pennant.
In modern times, officers and crew members of a new warship are assembled on the quarterdeck or other suitable area. Formal transfer of the ship to the prospective commanding officer is done by the Chief of Naval Operations or his representative. The national anthem is played, the transferring officer reads the commissioning directive, the ensign is hoisted, and the commissioning pennant broken. The prospective commanding officer reads his orders, assumes command, and the first watch is set. Following, the sponsor is traditionally invited to give the first order to the ship's company: "Man our ship and bring her to life!", whereupon the ship's assigned crew would run on board and man the rails of the ship.
In recent years, commissionings have become more public occasions. Most commonly assisted by a Commissioning Support Team (CST), the Prospective Commanding Officer and ship's crew, shipbuilder executives, and senior Navy representatives gather for a formal ceremony placing the ship in active service (in commission). Guests, including the ship's sponsor, are frequently invited to attend, and a prominent individual delivers a commissioning address. On May 3, 1975, more than 20,000 people witnessed the commissioning of at Norfolk, Virginia. The carrier's sponsor, daughter of Fleet Admiral Chester Nimitz, was introduced, and U.S. President Gerald R. Ford was the principal speaker.
Regardless of the type of ship, the brief commissioning ceremony completes the cycle from christening and launching to bring the ship into full status as a warship of her nation.
See also
Shakedown cruise
Taken on Strength
Decommissioning of Russian nuclear-powered vessels
Lists of ship commissionings and decommissionings
References
External links
Navy Traditions and Customs from Naval Historical Center
Photos from the 1986 commissioning of USS Samuel B. Roberts (FFG 58)
Naval ceremonies
Rituals attending construction | Ship commissioning | Engineering | 2,401 |
21,070,925 | https://en.wikipedia.org/wiki/Biostrophin | Biostrophin is a drug which may serve as a vehicle for gene therapy, in the treatment of Duchenne and Becker muscular dystrophy.
As mutations in the gene which codes for the protein dystrophin is the underlying defect responsible for both
disorders, biostrophin will deliver a genetically-engineered, functional copy of the gene at the molecular level to affected muscle cells. Dosage, as well as a viable means for systemic release of the drug in patients, is currently being investigated with the use of both canine and primate animal models.
Biostrophin is being manufactured by Asklepios BioPharmaceuticals, Inc., with funding provided by the Muscular Dystrophy Association.
See also
Other drugs for Duchenne muscular dystrophy
Ataluren
Rimeporide (experimental)
References
External links
Parent Project MD
Muscular dystrophy
Genetic engineering | Biostrophin | Chemistry,Engineering,Biology | 186 |
352,541 | https://en.wikipedia.org/wiki/Charged%20particle | In physics, a charged particle is a particle with an electric charge. For example, some elementary particles, such as the electron or the quarks, are charged. Some composite particles, such as protons, are charged particles. An ion, such as a molecule or atom with a surplus or deficit of electrons relative to protons, is also a charged particle.
A plasma is a collection of charged particles, atomic nuclei and separated electrons, but can also be a gas containing a significant proportion of charged particles.
Charged particles are labeled as either positive (+) or negative (-). The designations are arbitrary. Nothing is inherent to a positively charged particle that makes it "positive", and the same goes for negatively charged particles.
Examples
Positively charged particles
protons
positrons (antielectrons)
positively charged pions
alpha particles
cations
Negatively charged particles
electrons
antiprotons
muons
tauons
negatively charged pions
anions
Particles with zero charge
neutrons
photons
neutrinos
neutral pions
Z boson
Higgs boson
atoms
See also
Charge carrier – refers to moving charged particles that create an electric current
References
External links
Charged particle motion in E/B Field
Charge carriers
Particle physics | Charged particle | Physics,Materials_science | 236 |
2,690,712 | https://en.wikipedia.org/wiki/Ground%20bounce | In electronic engineering, ground bounce is a phenomenon associated with transistor switching where the gate voltage can appear to be less than the local ground potential, causing the unstable operation of a logic gate.
Description
Ground bounce is usually seen on high density VLSI where insufficient precautions have been taken to supply a logic gate with a sufficiently low impedance connection to ground (or sufficiently high bypass capacitance). In this phenomenon, when the base of an NPN transistor is turned on, enough current flows through the emitter-collector circuit that the silicon in the immediate vicinity of the emitter-ground connection is pulled partially high, sometimes by several volts, thus raising the local ground, as perceived at the gate, to a value significantly above true ground. Relative to this local ground, the base voltage can go negative, thus shutting off the transistor. As the excess local charge dissipates, the transistor turns back on, possibly causing a repeat of the phenomenon, sometimes up to a half-dozen bounces.
Ground bounce is one of the leading causes of "hung" or metastable gates in modern digital circuit design. This happens because the ground bounce puts the input of a flip flop effectively at voltage level that is neither a one nor a zero at clock time, or causes untoward effects in the clock itself. A similar voltage sag phenomenon may be seen on the collector side, called supply voltage sag (or VCC sag), where VCC is pulled unnaturally low. As a whole, ground bounce is a major issue in nanometer range technologies in VLSI.
Ground bounce can also occur when the circuit board has poorly designed ground paths. Improper ground or VCC can lead to local variations in the ground level between various components. This is most commonly seen in circuit boards that have ground and VCC paths on the surfaces of the board.
Reduction
Ground bounce may be reduced by placing a 10–30-ohm resistor in series with each of the switching outputs to limit the current flow during gate switching.
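A back-of-the-envelope way to see the effect of such a series resistor is to bound the peak current an output can push into its load and the resulting edge slow-down. In the following sketch the supply voltage, resistor values, and load capacitance are assumed example figures, not values for any particular device.

```python
# Back-of-the-envelope look at how a series damping resistor limits the peak
# current an output can push into its load, and slows the edge (RC constant).
# All numbers below are illustrative assumptions, not datasheet values.

SUPPLY_VOLTAGE = 3.3        # volts, assumed logic swing
LOAD_CAPACITANCE = 30e-12   # farads, assumed trace plus input capacitance

for series_resistance in (10.0, 22.0, 30.0):             # ohms, typical range
    peak_current = SUPPLY_VOLTAGE / series_resistance     # I = V / R
    time_constant = series_resistance * LOAD_CAPACITANCE  # tau = R * C
    print(f"R = {series_resistance:4.1f} ohm: "
          f"peak current ~{peak_current * 1e3:.0f} mA, "
          f"RC ~{time_constant * 1e9:.2f} ns")
```

Larger resistor values limit the transient current more strongly but lengthen the RC edge, so the value is a compromise between ground-bounce suppression and switching speed.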
See also
Metastability in electronics
Unbounded nondeterminism
Buridan's ass
References
Jeff Barrow, Reducing Ground Bounce, (2007), Analog Devices
Vikas Kumar, Ground Bounce Primer, (2005), TechOnLine (now EETimes).
Ground Bounce in 8-Bit High-Speed Logic, Pericom Application Note.
AN-640 Understanding and Minimizing Ground Bounce, (2003) Fairchild Semiconductor, Application Note 640.
Minimizing Ground Bounce & VCC Sag, White Paper, (2001) Altera Corporation.
Ground Bounce part-1 and part-2 by Douglas Brooks, Articles, UltraCAD Design.
Electronic engineering
Electrical phenomena
Transistors | Ground bounce | Physics,Technology,Engineering | 565 |
30,992,943 | https://en.wikipedia.org/wiki/PRINS%20%28gene%29 | PRINS (psoriasis associated RNA induced by stress) is a long non-coding RNA. Its expression is induced by stress, and it may have a protective role in cells exposed to stress. It is over-expressed in the skin of patients with psoriasis. It regulates G1P3, a gene encoding a protein with anti-apoptotic effects in keratinocytes. Overexpression of PRINS may contribute to psoriasis via the down-regulation of G1P3.
See also
Long noncoding RNA
References
Further reading
Non-coding RNA | PRINS (gene) | Chemistry | 121 |
1,574,904 | https://en.wikipedia.org/wiki/Prince%20Rupert%27s%20drop | Prince Rupert's drops (also known as Dutch tears or Batavian tears) are toughened glass beads created by dripping molten glass into cold water, which causes it to solidify into a tadpole-shaped droplet with a long, thin tail. These droplets are characterized internally by very high residual stresses, which give rise to counter-intuitive properties, such as the ability to withstand a blow from a hammer or a bullet on the bulbous end without breaking, while exhibiting explosive disintegration if the tail end is even slightly damaged. In nature, similar structures are produced under certain conditions in volcanic lava and are known as Pele's tears.
The drops are named after Prince Rupert of the Rhine, who brought them to England in 1660, although they were reportedly being produced in the Netherlands earlier in the 17th century and had probably been known to glassmakers for much longer. They were studied as scientific curiosities by the Royal Society, and the unraveling of the principles of their unusual properties probably led to the development of the process for the production of toughened glass, patented in 1874. Research carried out in the 20th and 21st centuries shed further light on the reasons for the drops' contradictory properties.
Description
Prince Rupert's drops are produced by letting drops of molten glass fall into cold water. The glass rapidly cools and solidifies in the water from the outside inward. This thermal quenching may be described by means of a simplified model of a rapidly cooled sphere. Prince Rupert's drops have remained a scientific curiosity for nearly 400 years due to two unusual mechanical properties: when the tail is snipped, the drop disintegrates explosively into powder, whereas the bulbous head can withstand large compressive forces.
The explosive disintegration arises due to multiple crack bifurcation events when the tail is cut – a single crack is accelerated in the tensile residual stress field in the center of the tail and bifurcates after it reaches a critical velocity. Given these high speeds, the disintegration process due to crack bifurcation can only be inferred by looking into the tail and employing a high-speed camera. This is perhaps why this curious property of the drops remained unexplained for centuries.
The second unusual property of the drops, namely the strength of the heads, is a direct consequence of the large compressive residual stresses that exist in the vicinity of the head's outer surface. This stress distribution is measured by using glass's natural property of stress-induced birefringence and by employing techniques of 3D photoelasticity. The high fracture toughness due to residual compressive stresses makes Prince Rupert's drops one of the earliest examples of toughened glass.
History
It has been suggested that methods for making the drops have been known to glassmakers since at least the times of the Roman Empire.
Sometimes attributed to Dutch inventor Cornelis Drebbel, the drops were often referred to as lacrymae Borussicae (Prussian tears) or lacrymae Batavicae (Dutch tears) in contemporary accounts.
Verifiable accounts of the drops from Mecklenburg in North Germany appear as early as 1625. The secret of how to make them remained in the Mecklenburg area for some time, although the drops were disseminated across Europe from there, for sale as toys or curiosities.
The Dutch scientist Constantijn Huygens asked Margaret Cavendish, Duchess of Newcastle to investigate the properties of the drops; her opinion after carrying out experiments was that a small amount of volatile liquid was trapped inside.
Although Prince Rupert did not discover the drops, he was responsible for bringing them to Britain in 1660. He gave them to King Charles II, who in turn delivered them in 1661 to the Royal Society (which had been created the previous year) for scientific study. Several early publications from the Royal Society give accounts of the drops and describe experiments performed. Among these publications was Micrographia of 1665 by Robert Hooke, who later would discover Hooke's Law. His publication laid out correctly most of what can be said about Prince Rupert's drops—without a fuller understanding than existed at the time of elasticity (to which Hooke himself later contributed), and of the failure of brittle materials from the propagation of cracks. A fuller understanding of crack propagation had to wait until the work of A. A. Griffith in 1920.
In 1994, Srinivasan Chandrasekar, an engineering professor at Purdue University, and Munawar Chaudhri, head of the materials group at the University of Cambridge, used high-speed framing photography to observe the drop-shattering process and concluded that while the surface of the drops experiences highly compressive stresses, the inside experiences high tension forces, creating a state of unequal equilibrium which can easily be disturbed by breaking the tail. However, this left the question of how the stresses are distributed throughout a Prince Rupert's drop.
In a further study published in 2017, the team collaborated with Hillar Aben, a professor at Tallinn University of Technology in Estonia, using a transmission polariscope to measure the optical retardation of light from a red LED as it travelled through the glass drop, and used the data to construct the stress distribution throughout the drop. This showed that the heads of the drops have a much higher surface compressive stress than previously thought, but that this compressive surface layer is also thin, only about 10% of the diameter of the head of a drop. This gives the surface a high fracture strength, which means it is necessary to create a crack that enters the interior tension zone to break the droplet. As cracks on the surface tend to grow parallel to the surface, they cannot enter the tension zone, but a disturbance in the tail allows cracks to enter it.
A scholarly account of the early history of Prince Rupert's drops is given in the Notes and Records of the Royal Society of London, where much of the early scientific study of the drops was performed.
Scientific uses
The study of drops probably inspired the process of producing toughened glass by quenching. It was patented in England by Parisian Francois Barthelemy Alfred Royer de la Bastie in 1874, just one year after V. De Luynes had published accounts of his experiments with them.
Since at least the 19th century, it has been known that formations similar to Prince Rupert's drops are produced under certain conditions in volcanic lava. More recently researchers at the University of Bristol and the University of Iceland have studied the glass particles produced by explosive fragmentation of Prince Rupert's drops in the laboratory to better understand magma fragmentation and ash formation driven by stored thermal stresses in active volcanoes.
Literary references
Because of their use as a party piece, Prince Rupert's drops became widely known in the late 17th century—far more than today. It can be seen that educated people (or those in "society") were expected to be familiar with them, from their use in the literature of the day. Samuel Butler used them as a metaphor in his poem Hudibras in 1663, and Pepys refers to them in his diary.
The drops were immortalized in a verse of the anonymous Ballad of Gresham College (1663).
Diarist George Templeton Strong wrote (volume 4, p. 122) of a hazardous sudden breaking up of pedestrian-bearing ice in New York City's East River during the winter of 1867 that "The ice flashed into fragments all at once like a Prince Rupert's drop."
Alfred Jarry's 1902 novel Supermale makes reference to the drops in an analogy for the molten glass drops falling from a failed device meant to pass eleven thousand volts of electricity through the supermale's body.
Sigmund Freud, discussing the dissolution of military groups in Group Psychology and the Analysis of the Ego (1921), notes the panic that results from the loss of the leader: "The group vanishes in dust, like a Prince Rupert's drop when its tail is broken off."
E. R. Eddison's 1935 novel Mistress of Mistresses references Rupert's drops in the last chapter as Fiorinda sets off a whole set of them.
In the 1940 detective novel There Came Both Mist and Snow by Michael Innes (J. I. M. Stewart), a character incorrectly refers to them as "Verona drops"; the error is corrected towards the end of the novel by the detective Sir John Appleby.
In his 1943 novella Conjure Wife, Fritz Leiber uses Prince Rupert drops as a metaphor for the volatility of several characters' personalities. These small-town college faculty people seem to be placid and impervious, but "explode" at a mere "flick of the filament".
Peter Carey devotes a chapter to the drops in his 1988 novel Oscar and Lucinda.
The title-giving suite to progressive rock band King Crimson's 1970 third studio album Lizard includes both parts referring to a fictionalised version of Prince Rupert as well as an extended section called "The Battle of Glass Tears".
See also
Bologna bottle
References
Further reading
Sir Robert Moray (1661). "An Account of the Glass Drops", Royal Society (transcribed, archive reference).
External links
PrinceRupertsDrop.com High-speed slow-motion video demonstrations.
Video showing the making and the breaking of Prince Rupert's Drops from the Museum of Glass
Popular Science article with a video detailing Prince Rupert's Drops
Former Mythbusters Adam Savage and Jamie Hyneman demonstrate Rupert's Drops, including diagram of internal stresses
Glass types
Science demonstrations
Novelty items
Fluid mechanics | Prince Rupert's drop | Engineering | 1,963 |
28,353,373 | https://en.wikipedia.org/wiki/Thermal%20destratification | Thermal destratification is the process of mixing the internal air in a building to eliminate stratified layers and achieve temperature equalization throughout the building envelope.
Thermal stratification in buildings
Destratification is the reverse of the natural process of thermal stratification, which is the layering of differing (typically increasing) air temperatures from floor to ceiling. Stratification is caused by hot air rising up to the ceiling or roof space because it is lighter than the surrounding cooler air. Conversely, cool air falls to the floor as it is heavier than the surrounding warmer air.
In a stratified building, temperature differentials of up to 1.5°C per vertical foot are common, and the higher a building's ceiling, the more extreme this temperature differential can be. In extreme cases, temperature differentials of 10°C have been found over a height of 1 meter. Other variables that influence the level of thermal stratification include heat generated by people and processes present in the building, insulation of the space from outside weather conditions, solar gain, specification of the HVAC system, location of supply and return ducts, and vertical air movement inside the space, usually supplied by destratification fans. Computational fluid dynamics can be used to predict the level of stratification in a space.
Effects of thermal stratification
In a study conducted by the Building Scientific Research Information Association, the energy wasted due to stratification increased consistently with the temperature differential from floor to ceiling (ΔT). The study indicates that stratified buildings tend to overheat or overcool based on the temperature at the thermostat, which tends to understate the overall heat energy present in the room. The study also showed that energy waste due to stratification was present at ceiling heights ranging from 20 ft to 40 ft, and that higher ceilings caused higher energy waste, even at the same ΔT. Since ΔT tends to be higher under taller ceilings, the effect of stratification is compounded, causing substantial energy waste in high-ceiling buildings.
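As a simple illustration of how the per-foot gradient compounds with ceiling height, the following sketch multiplies an assumed linear gradient by the ceiling height; the gradient values and the thermostat mounting height are assumptions for the example, not figures from the study.

```python
# Illustrative only: floor-to-ceiling temperature differential (delta-T)
# for a given per-foot stratification gradient and ceiling height.
# The gradients and thermostat height below are assumed example values.

def delta_t(gradient_c_per_ft: float, height_ft: float) -> float:
    """Temperature rise over a given height, assuming a linear gradient."""
    return gradient_c_per_ft * height_ft

for ceiling in (20, 30, 40):                # ceiling heights in feet
    for gradient in (0.5, 1.0, 1.5):        # deg C per vertical foot
        dt = delta_t(gradient, ceiling)
        # A thermostat mounted ~5 ft above the floor reads well below the
        # ceiling temperature, so the heating system keeps over-delivering.
        above_thermostat = delta_t(gradient, ceiling - 5)
        print(f"{ceiling} ft ceiling, {gradient} C/ft: "
              f"floor-to-ceiling dT = {dt:.1f} C, "
              f"ceiling is {above_thermostat:.1f} C warmer than the thermostat")
```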
Definition of destratification
Since stratification and the costs associated with it are linear, the definition of destratification will differ based on opinion and use case. Full destratification, or a 0° ΔT from floor to ceiling, is unlikely to occur in any building. Since the costs of stratification decrease linearly as ΔT approaches 5.4°F, and no study has yet looked at the effects of stratification below 5.4°F, it is not uncommon to consider any space with a ΔT below 5°F to be destratified. In the United States, ASHRAE Standard 55 prescribes 3°C as the limit for the vertical air temperature difference between head and ankle levels, but has no standard recommending an ideal ΔT between floor and ceiling.
Destratification technologies
Reducing thermal stratification can be accomplished by controlling the variables that are associated with increased stratification. Since many of the variables, including ceiling height, people and processes, solar gain, and outside weather conditions cannot be controlled, the most common technologies used are related to the building's HVAC (heating, ventilation, and air conditioning) system. One of the cheapest, most effective, and easiest to install technologies are destratification fans, including both axial destratification fans and HVLS (high-volume low-speed) fans.
Axial destratification fans
Axial destratification fans are self-contained units that are installed in an array at the ceiling with the goal of blowing conditioned air in the ceiling down to the floor, where people live and work. Because axial fans are designed to blow air straight down at the floor, they can be used in ceiling and roof structures over 100 ft. tall. Because axial destratification fans can achieve destratification with low CFMs, it is imperative that the air leaving the nozzle achieve an air speed at the floor of between 0.2 and 0.5 m/s. The result of this level of air movement is the integration of conditioned air from the ceiling with air at the floor level. Failing to impact the floor will result in destratification of medial layers of air but not achieve destratification at the floor. Since the area around the thermostat will not be destratified in this instance, it is hypothesized that there will be little or no cost savings, as the thermostat will continue to overheat or overcool the room.
An experiment in a room with a 21 ft. ceiling yielded a savings of 23.5% with the use of axial destratification fans.
High-volume low-speed (HVLS) fans
Because of their size, HVLS fans are normally installed in new construction, rather than retrofits, as the roof structure may have to be redesigned to accommodate the increased weight and size. It is not uncommon to require the relocation of lights, due to strobing as large fan blades pass under them, and sprinkler systems, which typically require unobstructed access to the floor to meet fire code. When used in the summer to encourage evaporative cooling, HVLS fans are run forward, blowing air at the floor. When used for destratification in the winter, the fans are run in reverse, blowing air towards the ceiling, which then circulates around the room. The height at which HVLS fans can be effective is limited compared to axial destratification fans.
Benefits of destratification
This method has the most benefits through its application in the heating, ventilation, and air conditioning (HVAC) industry and in heating and cooling for buildings; it has been found that "stratification is the single biggest waste of energy in buildings today."
For reducing energy consumption
By incorporating thermal destratification technology into buildings, energy requirements are reduced: heating systems no longer over-deliver to constantly replace the heat that rises away from the floor area, because the already heated air in the unoccupied ceiling space is redistributed back down to floor level until temperature equalisation is achieved. With regard to cooling, destratification systems ensure the cooled air supplied is circulated fully and distributed evenly throughout internal environments, eliminating hot and cold spots and satisfying thermostats for longer periods of time. As a result, destratification technology has great potential for carbon emission reductions due to the reduced energy requirement, and is in turn capable of cutting costs for businesses, sometimes by up to 50%. This is supported by the Carbon Trust, which recommends destratification in buildings as one of its top three methods to reduce carbon dioxide emissions.
For comfort
Destratification naturally increases air movement at the floor, reducing "hot spots" and "cold spots" in a room. It can be used in typically cold areas, like grocery store freezer cases, to warm patrons shopping nearby. In addition, air movement from destratification fans can be used to help meet ASHRAE Standard 62.1 by increasing the amount of air movement at the floor.
References
External links
TECHNOLOGY EVALUATION OF THERMAL DESTRATIFIERS AND OTHER VENTILATION TECHNOLOGIES // Joel C. Hughes, Naval Facilities Engineering Service Center
TEMPERATURE PROFILES AND WINTER DESTRATIFICATION ENERGY SAVINGS
IMPRESS METAL PACKAGING DESTRATIFICATION CASE STUDY March 2017
Heating, ventilation, and air conditioning
Building biology
Building engineering | Thermal destratification | Engineering | 1,536 |
74,599,690 | https://en.wikipedia.org/wiki/Citadel%20of%20Safed | The Citadel of Safed is a now-defunct fortress situated on the peak of the mountain on which the modern city of Safed is built. Fortifications existed at the site during the late Second Temple period as well as under the Roman Empire; however, most of the remains left in place are from the Crusader, Mamluk and Ottoman periods. The citadel was an important administrative center of the Crusaders. It was severely damaged in the strong earthquake that struck Safed in 1837. During the last few decades, extensive archaeological excavations have been carried out at the site, revealing remains and ancient findings from all periods of the citadel's existence. Today, the citadel is visited by tourists for its historical value as well as for the view: from its high location one can see the surroundings of Safed, from the Meron mountain massif in the west to the Sea of Galilee in the east, the Lower Galilee in the south, and the Naftali and Hermon mountains to the north. The citadel garden was established in 1950, designed by landscape architect Shlomo Oren Weinberg. A memorial monument was erected there to the Safed residents who lost their lives in the 1948 Palestine War.
Ancient history
In excavations conducted in the area of Giv'at HaMitzuda, evidence of the existence of a Canaanite settlement from the Bronze Age was discovered. Burial caves characteristic of the period were exposed, indicating that burial practices at the site spanned several centuries. Additionally, remnants were found from the Iron Age, the period that, according to the biblical narrative, corresponds to the Israelites' settlement in the land after they entered the region. While the city of Safed is within the borders of the inheritance of the tribe of Naphtali, it is not mentioned in the Bible, at least not by that name.
First Jewish Roman War
In anticipation of the Great Revolt, Galilee commander Yosef ben Matityahu decided to build a number of strong fortresses in strategically important locations in the Galilee. In his work 'The Jewish Wars,' Yosef enumerates that he fortified 'Sela Akbara, Safed, Yavne'el, and Meron.' Safed is likely the fortress mentioned. Yosef chose to build the fortress on the mountain adjacent to Safed, a peak that rises 834 meters above sea level and overlooks its steep-sloped surroundings. In this way, he aimed to protect the Jewish settlement from the Roman soldiers who were expected to attempt to conquer the Galilee first.
Crusader Period
Prior to the Crusader period, the tower was known as Burj Yatim, and described by thirteenth-century Muslim historian Ibn Shaddad as standing above a "flourishing village." Although Ibn Shaddad ascribes the tower's creation to the Knights Templar, it was most likely built during the early Muslim period, before the creation of the Templars. William of Tyre, a Frankish chronicler, noted this tower, or burgus, in 1157, which he referred to as 'Sephet' or 'Castrum Sephet.' In 1102, a fortress was first constructed. In 1140, the fortress was expanded under the orders of Fulk of Anjou, the King of Jerusalem. In 1168 King Amalric I purchased the castle, reinforced its defenses, and instructed the transfer of the stronghold to the care of the Knights Templar, who held it for two decades.
In 1187, Salah ad-Din ascended to the Galilee, annihilating the crusader army in the Battle of Hattin. Nevertheless, the Templars continued to hold the fortress for an additional year and a half until December 1188. After an extended siege and the retreat of the crusaders, the fortress finally succumbed to Muslim forces. The fortress remained under Ayyubid control for approximately fifty years. Al-Mu'azzam Isa, emir of Damascus, ordered the castle to be destroyed during the 1218-1219 siege of Damietta to prevent it from falling into crusader hands.
In 1240, the crusaders returned to Safed, according to diplomatic agreements between Theobald I of Navarre and al-Salih Ismail, then emir of Damascus. They rebuilt and transformed the fortress into one of the largest crusader citadels in the Middle East. Marseille Bishop Benoît d'Alignan, who wrote De constructione castri Saphet, a detailed report on the fortress and its siege in 1264, described a citadel sprawling over an area of about forty dunams, with a length of 252 meters and a width of 112 meters, surrounded by a double wall for comprehensive defense. The circumference of the inner wall reached 580 meters, with a height of 28 meters, while the outer wall had a circumference of approximately 850 meters and a height of 28 meters. Between the walls, a trench was excavated at a depth of 15-18 meters and a width of 13 meters (the outer trench currently crosses Jerusalem Street). Seven solid towers, 26 meters high, were built in the outer wall to safeguard the citadel. The fortress itself included numerous structures, such as walls with fixed arrow slits, a suspended gate tower, vaulted halls, numerous rooms, and wells.
The construction of the fortified citadel by the Templars lasted two and a half years and cost over a million gold coins.
The Templars effectively utilized the fortress. Historian Abu al-Fida reports in his book Compendium of Human History or Chronicles that there were eighty knights, servants, and fifteen commanders stationed in the citadel. Each of them had fifty orders, as well as laborers. With the capture of the fortress, it housed a thousand knights. Estimates suggest that during times of peace, around 1,700 people resided there, and during wartime, approximately 2,200 individuals.
Historian Shams al-Din al-Uthmani wrote in 1372: "The fortress of Safed was among the most robust of the Frankish fortifications and was the one most closely tied to the Muslims. The Templars resided in it, knights like real eagles, ready to launch raids on cities from Damascus to Daria," (likely Deiraya in the southern Hebron hills) "and its surroundings, and from Jerusalem to Karak," (east of the Jordan) "and its region."
From the Crusader period, impressive remnants have been uncovered: a hall with a Gothic vault, similar to those found at Montfort and other places; columns adorned with floral motifs, resembling those found in other Crusader fortresses in the Middle East; a straight wall made of hewn stones, 25 meters long and 2 meters wide, with arrow slits and loopholes; an octagonal wall, 6 meters long and 3.2 meters wide; wide stones with engraved figures of a gargoyle and a sundial; paved passages with drainage channels; a well plastered with lime to prevent water leakage, and more.
Mamluk period
In 1266, the army of the Mamluk Sultan Baibars besieged the fortress of Safed. On July 23, after six weeks of siege, the Mamluks successfully captured the fortress and slaughtered its defenders. The conquerors, who turned Safed into the capital of the Galilee district, feared the remaining Crusaders in Acre and also the Mongols who, at that time, had seized substantial parts of Asia. These concerns prompted the Mamluks to initiate an impressive reconstruction in the fortress.
The additions made by the Mamluks to the fortress of Safed are described in detail in the book of Al-Otmani. According to him, two giant towers were added to the fortress; one is a round victory tower, resembling a chess piece, in the southern part of the fortress. Its dimensions were enormous by all standards: 60 meters in height and 35 meters in diameter. Inside the tower, a massive well was dug, supplying drinking water to thousands of soldiers in the fortress. The remnants of the tower and the well are now located beneath the monument at the summit of the fortress.
The second tower built by Baybars is a solid gate tower, measuring 15 by 20 meters, in the southwest of the fortress. Remnants of this tower also survive to this day.
The Arab geographer al-Dimashqi, who wrote his book "Selected Times and Wonders on Land and Sea" in the year 1300, also described the Mamluk fortress.
From the Mamluk period, numerous additional remnants have been discovered, shedding light on the extent of their investment in the fortress: utility buildings connected by a rectangular corridor; a paved access road to the gate tower, 24 meters in length and 7-8 meters in width; arches and archivolts; large square stones, one of which bears the carved image of a roaring lion, likely a symbol of Sultan Baybars; a circular well, with a depth of 10.5 meters and a diameter of 10 meters; many pottery artifacts, some locally made and some imported from various places; Mamluk and Venetian coins, and more. Facilities suitable for artistic activity have also been discovered, dating to the late Mamluk or early Ottoman period. Additionally, ceramic artifacts from the sixteenth century have been uncovered.
It is assumed that significant Mamluk structures were destroyed in the earthquake that struck the region in 1303. The fortress remained abandoned for a long period, but in 1475, the Mamluk Sultan Qaitbay ordered its renovation and restoration.
For further reading
Denys Pringle, Safad, in Secular buildings in the Crusader Kingdom of Jerusalem: an archaeological Gazetteer, Cambridge University Press, (1997), pp. 91–92
See also
Safed
Crusader castles
Mamluk Architecture
Gothic Architecture
References
Bibliography
Safed
Crusader castles
Mamluk castles
Fortifications
Ancient sites in Israel | Citadel of Safed | Engineering | 1,980 |
2,905,637 | https://en.wikipedia.org/wiki/Uniface%20%28programming%20language%29 | Uniface is a low-code development and deployment platform for enterprise applications that can run in a large range of runtime environments, including mobile, mainframe, web, Service-oriented architecture (SOA), Windows, Java EE, and .NET. Uniface is used to create mission-critical applications.
Uniface applications are platform-independent and database-independent. Uniface provides an integration framework that enables Uniface applications to integrate with all major DBMS products such as Oracle, Microsoft SQL Server, MySQL and IBM Db2. In addition, Uniface also supports file systems such as RMS, Sequential files, operating-system text files and a wide range of other technologies, such as IBM mainframe-based products (CICS, IMS), web services, SMTP, POP email, LDAP directories, .NET, ActiveX, Component Object Model (COM), C(++) programs, and Java. Uniface operates under Microsoft Windows, various flavors of Unix, Linux, OpenVMS and IBM i.
Uniface can be used in complex systems that maintain enterprise data supporting business processes such as point-of-sale and web-based online shopping, financial transactions, salary administration, and inventory control. It is used by thousands of companies in more than 30 countries, with an effective installed base of millions of end-users. Uniface applications range from client/server to web, and from data entry to workflow, and portals that are accessed locally, via intranets and the internet.
Originally developed in the Netherlands by Inside Automation, later Uniface B.V., the product and company were acquired by Detroit-based Compuware Corp in 1994; in 2014 the product was acquired by Marlin Equity Partners and continued under Uniface B.V., with global headquarters in Amsterdam. In February 2021, Uniface was acquired by Rocket Software, headquartered in Waltham, Massachusetts, USA.
Uniface products
Uniface Development Environment is an integrated collection of tools for modeling, implementing, compiling, debugging, and distributing applications.
Uniface applications, including the above, use a common runtime infrastructure, consisting of:
Uniface Runtime Engine—a platform-specific process that interprets and executes compiled application components and libraries.
Uniface Router—a multi-threaded process responsible for inter-process communication (IPC) in Uniface applications. It starts and stops Uniface Server processes, performs load balancing, and passes messages between various Uniface processes.
Uniface Server—a server-based process that enables Uniface clients to access remote resources or to execute remote components. It acts as an application server, a data server, and a file server.
Uniface Repository—an SQL-capable DBMS used to store definitions and properties of development objects, process and organization models, and portal definitions.
Web server—Uniface bundles the Apache Tomcat Server for developing and testing web applications, but any web server can be used in a production environment.
Servlets—Java servlets that broker communication between a web server and the Uniface Server for Uniface web applications and web services.
Database connectors—drivers that handle the connection between Uniface and a variety of databases.
Integration tools—drivers, components, and APIs that handle communication between Uniface and third-party applications and technologies, including Java, CICS, IMS, LDAP, SMTP, POP, operating system commands, COM, and more.
In addition, Uniface Anywhere (formerly Uniface JTi or Java Thin Client Interface) can deliver client/server Uniface applications to any computer connected to the Internet as a thin client solution.
Uniface is a low-code development & deployment platform based on a proprietary procedural scripting (fourth generation) language called Uniface Proc that is used to code application behavior. Uniface automates most input/output operations through its kernel and default code, so much fundamental behavior does not need to be coded.
Uniface applications
Uniface applications are component-based, infrastructure-independent software programs that can create or use data stored in one or more databases or file systems. They can be composite applications that include non-Uniface components created using other development tools, and they can be deployed in distributed client/server and web environments, as mobile applications or web services, and in mainframe environments.
Uniface has various component types intended for use in different layers of multi-tier application architecture.
Components for the presentation tier are responsible for the user interface and include:
Forms—interactive screens for displaying and updating data in a client/server environment.
Server Pages—interactive pages for displaying and updating data in a web environment.
Reports—layouts for presenting data in a printed output.
Components for the business logic tier handle business rules and task-specific behavior and have no user interface:
Services—provide processing and business logic functionality when called by other components, either locally or remotely.
Session Services—centralize complex business rules affecting multiple data entities, such as task-specific behavior, transactions, and referential integrity.
Entity Services—centralize simple business rules for single data entities.
The data access tier contains physical database structures captured in the Uniface application model. Uniface ensures physical data access by encapsulating SQL in its DBMS connectors. Network and middleware access are encapsulated by the middleware drivers and the Uniface Router.
The runtime engine executes the application components. It displays presentation components using the appropriate user interface connector (either GUI or character-based) and sends and receives data via a DBMS connector.
Application development
Uniface applications are developed with the Uniface Development Environment. Originally, it was possible to develop on Apple and DEC platforms; now, Windows is the supported platform for development.
Uniface applications development is model-driven and component-based. The data structure, business rules, and default behavior of the application are captured in the Application Model. Model definitions can be reused and inherited by components, which can override inherited definitions and provide component-specific behavior and characteristics. Templates improve productivity and enforce consistency when defining models.
Application model
The application model defines entities (tables), fields, keys (indexes), and relationships together with referential integrity. Each entity and field in the model has properties and a set of triggers. Business rules are added to the model declaratively by setting properties and procedurally by adding Proc code (Uniface's procedural language) in triggers.
Triggers are containers for code. Some triggers represent user or system events, for example, Occurrence Gets Focus, Read or Leave Field. Others cover matters such as validation or act as placeholders for methods associated with the particular object.
The use of model-level triggers enables Uniface to collect properties and behavior within business objects, separating logical from physical data structures. It makes it possible to define default behavior once, for reuse throughout the application, speeding development and facilitating the 3-tier application architecture.
Each entity (table) is first defined in the Application Model from where it can be exported to the physical database in the form of CREATE TABLE scripts.
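Uniface's own tooling generates this DDL. Purely to illustrate the model-driven idea, the following sketch shows how an entity definition held in a repository-like structure could be turned into a CREATE TABLE script; the entity name, field types, and type mapping are invented for the example and do not reflect actual Uniface repository contents or output.

```python
# Hypothetical sketch: turning an application-model-style entity definition
# into a CREATE TABLE script. The entity, fields and type mapping are invented
# for illustration; this is not actual Uniface repository data or output.

ENTITY = {
    "name": "CUSTOMER",
    "fields": [
        {"name": "CUST_ID",   "type": "N8",  "key": True},
        {"name": "CUST_NAME", "type": "C40", "key": False},
        {"name": "BALANCE",   "type": "N12", "key": False},
    ],
}

TYPE_MAP = {"N8": "NUMERIC(8)", "N12": "NUMERIC(12,2)", "C40": "VARCHAR(40)"}

def create_table_script(entity: dict) -> str:
    """Build a CREATE TABLE statement from a simple entity description."""
    cols = [f"  {f['name']} {TYPE_MAP[f['type']]}" for f in entity["fields"]]
    keys = [f["name"] for f in entity["fields"] if f["key"]]
    if keys:
        cols.append(f"  PRIMARY KEY ({', '.join(keys)})")
    return f"CREATE TABLE {entity['name']} (\n" + ",\n".join(cols) + "\n);"

print(create_table_script(ENTITY))
```

The point of the sketch is only that the physical schema is derived from the model definition rather than written by hand, which is the essence of the model-driven approach described above.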
Components
Objects described in the application model are reused by components. Developers embed objects from the model on the component by drawing them on the layout canvas (for presentation components) or inserting them into a tree view of the component structure. They can also add component-level objects that are not in the application model, such as control fields, menus, and component variables.
The properties and triggers defined in the application model are inherited by being copied into the component. The definitions can be changed at the component level to provide specific functionality. This breaks the link between the application model and the component (although it is possible to restore the link to the model). If the model code or properties are changed at the model level, all components holding that object need only be recompiled to collect the new definitions. This provides benefits in maintenance and ensures that the rules associated with the object are available wherever it is used.
Uniface repository
Uniface maintains a database of its metadata for reuse—application models, component definitions, component layouts, procedural code, and so on. The repository is proprietary and intended for access via the Uniface Development Environment, to ensure repository integrity. However, the repository structure is documented, making it possible (though not recommended) to interface directly with it for, for example, reporting.
By using a centralized repository, application development can be shared over teams of software developers. Uniface can integrate with any version control system (VCS) that supports the Microsoft Common Source Code Control Interface Specification. The VCS functionality available within Uniface depends on the VCS used because software vendors have interpreted and implemented the MS CSCC API differently.
Application deployment
Uniface applications can be deployed on platforms from mainframe through mobile, without changing the code. The components (and other objects such as startup shells, menus, toolbars (panels), glyphs, global, and included Proc entries) are compiled into runtime objects that can be packaged into zip files and deployed onto any platform. The runtime objects are executed using a virtual machine and a platform-specific interpreter. (Java later followed Uniface in this respect). Components can be compiled on one machine and executed on another, provided the Uniface Virtual Machine is present.
The Uniface Router and Uniface Server make Uniface scalable and make it possible to run processes asynchronously.
Database connectivity
Uniface accesses the many databases and file systems it supports using database connectors (or drivers). DBMS connectors map and convert Uniface data types to the most suitable format of the particular storage medium. At runtime, it is possible to pass parameters to invoke (or disable) database-specific extensions. Provided the database connector is licensed, it is possible to convert between different data sources. Uniface also provides an API, the Database Connector Interface, which can be used to create proprietary connectors for any SQL-based database.
Licensing
Licensing is managed through the Compuware Distributed License Manager (DLM), a server-based system of distributing licenses to clients on request, precluding the need for them to be held locally.
History of Uniface
Originally called UNIS, the product was created in The Netherlands in 1984 by Inside Automation, a company that was headed by Bodo Douqué, with Frits Kress as Technical Director. By 1986, both the product and the company had changed their name to Uniface. (An early logo for the product included a red capital A which reflected the red capital A in the Inside Automation logo.)
Uniface was developed on the principles of the American National Standards Institute, (ANSI), 3-schema architecture. First proposed in 1975, this was a standard approach to the building of database management systems consisting of 3 schema (or metamodels):
Conceptual schema—definition of all the data items and relationships between them. There is only one conceptual schema per database. Uniface implements the conceptual schema as the Application Model (in various Uniface versions known as the Business Object Model and the Application Object Model).
External schema—different external (user) views of the data. There can be many external schemas for a database. Uniface implements external schemas as components. During Uniface's evolution, the External Schema became forms; hidden forms, which ran in the background without displaying to the user, became services; services split into session services for objects of the business tier and entity services that may be either business or data tier. Forms that were printed instead of being displayed became report components. The server page (USP) was introduced for web development, and later the dynamic server page was introduced to support Web 2.0 functionality.
Internal schema—definition of the physical representation of the stored data. Uniface leaves the internal schema to the many relational database systems to which it could be connected, enabling it to be database-independent.
Uniface was developed on the DEC VAX machine, using the native VAX file-management system RMS. A vestige of this is still seen in today's product by its continued use of a “GOLD” key to change modes (DEC VT terminals had a gold or a yellow key on the keyboard. Today the “GOLD” is simply mapped to the numeric keyboard + key, or a function key).
Early versions of the product were bundled with the Sybase RDBMS under the name FastBuild, although it was not limited to accessing only that database.
Uniface has continuously evolved to handle new technologies and application architectures. This has been critical to its success because applications built with Uniface can be migrated, updated, and modernized without losing the original development investment.
Uniface versions
Uniface Version 3 (1986):
Uniface 3 was the first public release. It featured support for multiple databases (RMS, Oracle, C_ISAM, Ingres, and RDB); virtual machine interpretation; the Structure Editor, and the Uniface text and command editor.
Uniface Version 4 (1988):
Uniface 4 improved the text editor (now form editor), improved printing and display support, introduced support for MS-DOS, and added a CASE tool interface.
Uniface Version 5 (1990):
Uniface 5 enabled client/server deployment with the introduction of remote database access through Polyserver. It introduced a graphical user interface via the Universal Presentation Interface (UPI). Database support was extended to a total of 13 databases and file systems, and it was now available on DOS, VMS, OS/2, Stratus VOS, and UNIX. Japanese character support was also introduced.
Uniface Six (1994):
Uniface Six completed the move to fully graphical development environments. It included the graphical form painter and application model editor; improved deployment through Dynamic Object Libraries; added support for Microsoft Object Linking and Embedding (OLE); included support for Apple Macintosh; added permissions control; integrated version control; added Personal Series reporting tools (although these were later removed when the 3rd party decided not to enhance its product); wider platform support.
Uniface Seven (1997):
Uniface Seven focused on component integration for both Uniface and external components through the introduction of the Uniface Request Broker (URB) architecture. The URB supports bi-directional and synchronous or asynchronous communication between components. As well as remote data access, it added partitioned Application Servers and messaging. Uniface Seven also delivered the first Uniface web development and deployment tools with Web Application Server and Uniface Request Dispatcher.
Other enhancements included new component types (Services, Server Pages, Reports); Signature Editor and Assembly Workbench; subsystems; operations; non-modal forms; component instances; improved editors and navigation; enhanced editor plug-in; new debugger; integrated online help; component templates; Web Application Server; improved validation; Uniface Name Server and graphical partitioning manager.
Uniface Seven also saw the introduction of several other tools:
A tool for the modeling, integration, and management of business processes. This functionality became Optimal Flow under Uniface 8, then Uniface Flow under Uniface 9.
A business integration portal initially called Optimal View and later Uniface View
Uniface JTi (Java Thin Client Interface)—a server-based, thin-client solution for delivering web-enabled applications over the Internet or an intranet, providing high performance over low-bandwidth connections.
Uniface 8 (2001):
Uniface 8 brought about major changes in the area of process integration. The Uniface Router and Uniface Server provided scalable, balanced deployment. The Web Request Dispatcher (WRD) replaced the URD, improving performance. Support for web services, with SOAP and XML, was introduced. Connectivity and interoperability were improved and a method for implementing a 3-tier application architecture was introduced.
Connectors for SOAP, COM, CORBA, and MQSeries were added; window and file management was improved; a new deployment utility was introduced, improving application distribution; component subtypes for 3-tier architecture were added; handles were added for component instances, and automatic garbage collection was added.
Uniface 9 (2006):
The Uniface 9 release focused on GUI and usability improvements, thin deployment, and integration. Support for Windows Mobile was added, and configuration and deployment were simplified using zipped archives. Support for Unicode improved what was an already impressive multilingual capability, and improvements in web development and XML handling brought Uniface further into line with industry standards. Dynamic field movement in form components removed some old barriers to flexibility.
Other features included improved color handling, dynamic menus, an XML API, a diagram editor for the Application Model; cross-referencing functionality to support refactoring and deployment, and enhanced web services functionality.
Uniface 9.4 (2010):
Despite being a point release, Uniface 9.4 introduced enough major new functionality to be considered a major release. The major focus was on rich internet application (RIA) functionality, making it possible to develop Web 2.0 applications with the rich functionality of client/server applications using the same tools and methodologies used to develop classic client/server applications. Language and locale support was substantially improved, as was support for HTML email, and security and encryption.
Uniface 9.5 (2011):
The release of Uniface 9.5 has improved the product's integration with the World Wide Web. The introduction of a JavaScript API, together with other improvements, means that client-side processing can bring benefits in the areas of performance, integration, functionality, and user-friendliness. The session management capability has been extended to offer improved security. And the processing of Web Services now fully supports complex datatypes for both SOAP and RESTful services. There have also been improvements for those customers who have business-critical client/server applications, particularly in the area of the grid widget.
Uniface 9.6 (2012):
Uniface 9.6, provided a significant overhaul of the Uniface client-server GUI capabilities. Functionality included an HTML5 control leveraging the JavaScript APIs originally delivered for the web, an enhanced tab control, and updates to image handling, buttons plus other improvements. The form container control enables 'forms within forms', enabling the development of dynamic user experiences.
In addition to the GUI enhancements, Uniface 9.6 also delivered enhancements to the Uniface Web and Web Services capabilities, including the ability to dynamically change the scope of web transactions, web pagination, and hitlist processing and improved WSDL and XML capabilities.
Uniface 9.7 (2015):
Uniface 9.7 delivered significant enhancements to the development of Web Applications, including extensions to facilitate the development and deployment of mobile applications based on hybrid applications and enhancing the multi-channel development/deployment capability of Uniface. This will be significantly extended with the Uniface 9.7.02 release (May 2016), providing integration to a build service provider to enable hybrid applications to be packaged for distribution via Google Play and the Apple Store.
In addition to the mobile and web enhancements, Uniface 9.7 delivered integration and client-server enhancements (MS Windows 10).
The Uniface Development Environment (UDE) was modernized, with a new look and feel, providing a new look front screen, and a refreshed visual user experience. The approach that Uniface took to modernize their UDE was shared both at developer conferences and on their community website Uniface.info to help advise and promote client-server modernization to their existing customer base.
Uniface 9.7 provides two new database drivers, enabling connectivity to PostgreSQL and SAP Hana.
Uniface 10 (2015):
Uniface 10 delivered a rewritten development environment based on the core concepts of Integrated Development Environments (IDE).
The initial release positioned as a preview or early adopter release showed a significant change from a proprietary development style to a highly productive implementation of industry-standard development, enabling the development of Web applications.
In May 2015, the first edition of Uniface 10 was released to early adopters to test and develop web applications. The full enterprise edition of Uniface 10 was released in September 2016, delivering mobile and client-server development and a migration path to enable the existing customer base to move their applications to Uniface 10.
Rocket Uniface 10.4 (2021):
Uniface 10.4 uses the Sentinel License Manager, which enables users to better manage their licenses. It includes enhancements to the Rocket Uniface Router Monitor, an API for TLS, and repository updates for the IDE. It is a 64-bit development environment, upgraded to Tomcat 9. The OpenSSL and cURL libraries have been updated and the OpenSSL executable is now delivered with Uniface on Windows. A new SLE 2.0 connector for SQLite was also added.
References
Gartner: Uniface company profile
External links
Fourth-generation programming languages
Software development
Integrated development environments | Uniface (programming language) | Technology,Engineering | 4,306 |
18,561,020 | https://en.wikipedia.org/wiki/Russula%20alachuana | Russula alachuana is a species of mushroom.
See also
List of Russula species
alachuana
Fungus species
Taxa named by William Alphonso Murrill | Russula alachuana | Biology | 35 |
5,903,016 | https://en.wikipedia.org/wiki/SMP%20%28computer%20algebra%20system%29 | Symbolic Manipulation Program, usually called SMP, was a computer algebra system designed by Chris A. Cole and Stephen Wolfram at Caltech circa 1979. It was initially developed in the Caltech physics department with contributions from Geoffrey C. Fox, Jeffrey M. Greif, Eric D. Mjolsness, Larry J. Romans, Timothy Shaw, and Anthony E. Terrano.
SMP was first sold commercially in 1981, by the Computer Mathematics Corporation of Los Angeles, which later became part of Inference Corporation. Inference further developed the program and marketed it commercially from 1983 to 1988, but it was not a commercial success, and Inference became pessimistic about the market for symbolic math programs, and so abandoned SMP to concentrate on expert systems.
SMP was influenced by the earlier computer algebra systems Macsyma (of which Wolfram was a user) and Schoonschip (whose code Wolfram studied).
SMP follows a rule-based approach, giving it a "consistent, pattern-directed language". Unlike Macsyma and Reduce, it was written in C.
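As an illustration of what a rule-based, pattern-directed approach means in practice (and not of SMP's actual syntax, which differed), the following toy sketch rewrites a symbolic expression by repeatedly applying pattern rules; the expression representation and the rules are invented for the example.

```python
# Toy illustration of rule-based, pattern-directed rewriting.
# This is NOT SMP syntax; expressions are nested tuples ("op", arg1, arg2)
# and the rewrite rules are invented for the example.

def rewrite(expr, rules):
    """Apply rules bottom-up until no rule changes the expression."""
    if isinstance(expr, tuple):
        expr = (expr[0],) + tuple(rewrite(a, rules) for a in expr[1:])
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new = rule(expr)
            if new is not None and new != expr:
                expr, changed = new, True
    return expr

def is_binary(e, op):
    return isinstance(e, tuple) and len(e) == 3 and e[0] == op

# Rules: x + 0 -> x, and x * 1 -> x (each returns None when it does not match).
rules = [
    lambda e: e[1] if is_binary(e, "+") and e[2] == 0 else None,
    lambda e: e[1] if is_binary(e, "*") and e[2] == 1 else None,
]

print(rewrite(("+", ("*", "x", 1), 0), rules))   # -> 'x'
```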
During the 1980s, it was one of the generally available general-purpose computer algebra systems, along with Reduce, Macsyma, and Scratchpad, and later muMATH and Maple. It was often used for teaching college calculus.
The design of SMP's interactive language and its "map" commands influenced the design of the 1984 version of Scratchpad.
Criticism
SMP has been criticized for various characteristics, notably its use of floating-point numbers instead of exact rational numbers, which can lead to incorrect results, and makes polynomial greatest common divisor calculations problematic. Many other problems in early versions of the system were purportedly fixed in later versions.
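The floating-point issue is easy to reproduce outside SMP: cancellations that are exact with rational coefficients leave tiny residues with floats, which defeats the exact zero tests used by algorithms such as polynomial greatest common divisor. The following sketch is a generic Python illustration of that failure mode, not SMP code.

```python
# Generic illustration (not SMP code) of why floating-point coefficients are
# problematic in symbolic computation: cancellations that are exact with
# rational arithmetic leave tiny residues with floats, so zero tests fail.

from fractions import Fraction

exact = Fraction(1, 10) + Fraction(2, 10) - Fraction(3, 10)
inexact = 0.1 + 0.2 - 0.3

print(exact)     # 0                      -> the coefficient really is zero
print(inexact)   # 5.551115123125783e-17  -> an "almost zero" residue

# A polynomial GCD routine repeatedly cancels leading terms; with float
# coefficients those residues survive, so the algorithm either never sees an
# exact zero remainder or must rely on an arbitrary tolerance.
```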
References
Additional sources
Chris A. Cole, Stephen Wolfram, "SMP: A Symbolic Manipulation Program", Proceedings of the fourth ACM symposium on Symbolic and algebraic computation (SIGSAM), Snowbird, Utah, 1981. full text
Stephen Wolfram with Chris A. Cole, SMP: A Symbolic Manipulation Program, Reference Manual, California Institute of Technology, 1981; Inference Corporation, 1983. full text
Stephen Wolfram, "Symbolic Mathematical Computation", Communications of the ACM, April 1985 (Volume 28, Issue 4). Despite the general-sounding title the focus is on an introduction to SMP. Online version of this article
J.M. Greif, "The SMP Pattern-Matcher" in B.F. Caviness (editor), Proceedings of EUROCAL 1985, volume 2, pgs. 303-314, Springer-Verlag Lecture Notes in Computer Science, no. 204, A discussion, with examples, of the capabilities, tasks, and design philosophy of the pattern-matcher.
SMP's manual "SMP Handbook"
Stephen Wolfram's blog post on the history of SMP's creation
Computer algebra systems | SMP (computer algebra system) | Mathematics | 594 |
5,011,779 | https://en.wikipedia.org/wiki/Channelling%20%28physics%29 | In condensed-matter physics, channelling (or channeling) is the process that constrains the path of a charged particle in a crystalline solid.
Many physical phenomena can occur when a charged particle is incident upon a solid target, e.g., elastic scattering, inelastic energy-loss processes, secondary-electron emission, electromagnetic radiation, nuclear reactions, etc. All of these processes have cross sections which depend on the impact parameters involved in collisions with individual target atoms. When the target material is homogeneous and isotropic, the impact-parameter distribution is independent of the orientation of the momentum of the particle and interaction processes are also orientation-independent. When the target material is monocrystalline, the yields of physical processes are very strongly dependent on the orientation of the momentum of the particle relative to the crystalline axes or planes. Or in other words, the stopping power of the particle is much lower in certain directions than others. This effect is commonly called the "channelling" effect. It is related to other orientation-dependent effects, such as particle diffraction. These relationships will be discussed in detail later.
History
The channelling effect was first discovered in pioneering binary collision approximation computer simulations in 1963, in order to explain exponential tails in experimentally observed ion range distributions that did not conform to standard theories of ion penetration. The simulated prediction was confirmed experimentally the following year by measurements of ion penetration depths in single-crystalline tungsten. The first transmission experiments of ions channelling through crystals were performed by an Oak Ridge National Laboratory group, showing that the ion distribution is determined by the crystal rainbow channelling effect.
Mechanism
From a simple, classical standpoint, one may qualitatively understand the channelling effect as follows: If the direction of a charged particle incident upon the surface of a monocrystal lies close to a major crystal direction (Fig. 1), the particle will, with high probability, undergo only small-angle scattering as it passes through the successive layers of atoms in the crystal and hence remain in the same crystal 'channel'. If it is not in a major crystal direction or plane ("random direction", Fig. 2), it is much more likely to undergo large-angle scattering and hence its final mean penetration depth is likely to be shorter. If the direction of the particle's momentum is close to a crystalline plane, but it is not close to major crystalline axes, this phenomenon is called "planar channelling".
Channelling usually leads to deeper penetration of the ions in the material, an effect that has been observed experimentally and in computer simulations, see Figures 3-5.
Negatively charged particles like antiprotons and electrons are attracted towards the positively charged nuclei of the plane, and after passing the center of the plane, they will be attracted again, so negatively charged particles tend to follow the direction of one crystalline plane.
Because the crystalline plane has a high density of atomic electrons and nuclei, the channelled particles eventually suffer high-angle Rutherford scattering or energy losses in collisions with electrons and leave the channel. This is called the "dechannelling" process.
Positively charged particles like protons and positrons are instead repelled from the nuclei of the plane, and after entering the space between two neighboring planes, they will be repelled from the second plane. So positively charged particles tend to follow the direction between two neighboring crystalline planes, but at the largest possible distance from each of them. Therefore, the positively charged particles have a smaller probability of interacting with the nuclei and electrons of the planes (smaller "dechannelling" effect) and travel longer distances.
The same phenomena occur when the direction of momentum of the charged particles lies close to a major crystalline, high-symmetry axis. This phenomenon is called "axial channelling". Generally, the effect of axial channelling is stronger than that of planar channelling, because a deeper potential is formed under axial conditions.
At low energies the channelling effects in crystals are not present because small-angle scattering at low energies requires large impact parameters, which become bigger than interplanar distances. Particle diffraction dominates in this regime. At high energies, quantum effects and diffraction are less effective and the channelling effect is present.
Applications
There are several particularly interesting applications of the channelling effects.
Channelling effects can be used as tools to investigate the properties of the crystal lattice and of its perturbations (like doping) in the bulk region that is not accessible to X-rays.
The channelling method may be utilized to detect the geometrical location of interstitials. This is an important variation of the Rutherford backscattering ion beam analysis technique, commonly called Rutherford backscattering/channelling (RBS-C).
The channelling may even be used for superfocusing of ion beam, to be employed for sub-atomic microscopy.
At higher energies (tens of GeV), the applications include the channelling radiation for enhanced production of high energy gamma rays, and the use of bent crystals for extraction of particles from the halo of the circulating beam in a particle accelerator.
Classical channelling theory
The classical treatment of the channelling phenomenon assumes that ion–nucleus interactions are not correlated. The first analytical classical treatment is due to Jens Lindhard, who in 1965 proposed an approach that still remains the standard reference. His model is based on the effects of a continuous repulsive potential generated by rows or planes of atomic nuclei arranged regularly in a crystal. The continuous potential is the average, along a row or over an atomic plane, of the individual Coulomb potentials of the charged nuclei as screened by the electronic cloud.
The proposed potential (named Lindhard potential) is:
where r represents the distance from the nucleus, the constant appearing in the screening term is equal to 3, and a is the Thomas-Fermi screening radius:
in which the Bohr radius (0.53 Å, the radius of the smallest orbit of the Bohr atom) appears as the reference length. Typical values of the screening radius lie between 0.1 and 0.2 Å.
Considering the case of axial channelling, if d is the distance between two successive atoms of an atomic row, the mean of the potential along this row is equal to:
with the remaining length scale equal to the distance between atomic lines. The resulting potential is a continuous potential generated by a string of atoms of a given atomic number with a mean distance d between nuclei.
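The formulas referred to above did not survive in this copy of the text. As a hedged reconstruction, one commonly quoted convention for the screened single-atom (Lindhard) potential, the Thomas–Fermi screening radius and the axial continuum ("string") potential is sketched below; the symbols Z1 and Z2 (projectile and target atomic numbers), C, a0, V and U are notation assumed here for illustration and may differ from those of the original article.

```latex
% Hedged reconstruction (assumed notation): screened single-atom Lindhard potential
V(r) = \frac{Z_1 Z_2 e^2}{r}\left[1 - \frac{r}{\sqrt{r^2 + C^2 a^2}}\right], \qquad C^2 \approx 3,
% one common convention for the Thomas-Fermi screening radius, with a_0 the Bohr radius
a = \frac{0.8853\, a_0}{\left(Z_1^{2/3} + Z_2^{2/3}\right)^{1/2}},
% axial continuum ("string") potential obtained by averaging along a row of atomic spacing d
U(\rho) = \frac{Z_1 Z_2 e^2}{d}\,\ln\!\left[\left(\frac{C a}{\rho}\right)^{2} + 1\right].
```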
The energy of a channelled ion of given atomic number can be written as:
where the first two terms involve, respectively, the components of the projectile's momentum parallel and perpendicular to the considered direction of the string of atoms. The potential is the minimum potential of the channel, taking into account the superposition of the potentials generated by the various atomic lines inside the crystal.
It therefore follows that the components of the momentum are:
where the angle appearing in these expressions is that between the direction of motion of the ion and the considered crystallographic axial direction.
Neglecting energy-loss processes, this quantity is conserved during the motion of the channelled ion, and energy conservation can be formulated as follows:
This equation is also known as the expression of the conservation of transverse energy. The small-angle approximation is justified, since good alignment between the ion and the crystallographic axis is being considered.
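The missing expressions can be summarized, under the same caveat that the notation is assumed rather than quoted, by the following minimal sketch of the energy decomposition and of the conservation of transverse energy in the small-angle limit (ψ is the angle between the ion momentum and the string direction):

```latex
% Hedged sketch (assumed notation): energy of a channelled ion and transverse-energy conservation
E = \frac{p_{\parallel}^{2}}{2m} + \frac{p_{\perp}^{2}}{2m} + U(r),
\qquad p_{\parallel} = p\cos\psi, \quad p_{\perp} = p\sin\psi .
% For good alignment sin(psi) ~ psi, and the transverse energy is conserved:
E_{\perp} = E\sin^{2}\psi + U(r) \simeq E\,\psi^{2} + U(r) = \text{const.}
```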
The channelling condition can now be stated as follows: an ion is channelled if its transverse energy is not sufficient to overcome the height of the potential barrier created by the strings of ordered nuclei. It is therefore useful to define the "critical energy" as the transverse energy below which an ion is channelled; if its transverse energy exceeds this value, the ion will be de-channelled.
Typical values are a few tens of eV, since the critical distance is similar to the screening radius, i.e. 0.1–0.2 Å. Therefore, all ions with transverse energy lower than this critical value will be channelled.
In the case of perfect ion–axis alignment, all ions whose impact parameter is smaller than the critical distance will be de-channelled.
where the relevant quantity is the area occupied by each row of atoms having an average spacing d in a material with atomic density N (expressed in atoms/cm³). It is therefore an estimate of the smallest fraction of de-channelled ions that can be obtained from a material perfectly aligned with the ion beam. For a single crystal of silicon oriented along <110>, a value can be calculated that is in good agreement with the experimental values.
Further considerations can be made by considering the thermal vibration motion of the nuclei: for this discussion, see the reference.
The critical angle can be defined as the angle below which an entering ion will be channelled; if the ion enters at a larger angle, its transverse energy allows it to escape the periodic potential.
The critical angle can be estimated using the Lindhard potential and taking the amplitude of thermal vibration as the minimum approach distance.
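The formula itself is absent here; Lindhard's characteristic angle for axial channelling in the high-energy limit is commonly written as below. This is a hedged reconstruction with assumed notation (E is the ion kinetic energy, d the atomic spacing along the row), not a quotation of the original article.

```latex
% Hedged reconstruction: Lindhard characteristic (critical) angle, high-energy limit
\psi_{c} \approx \psi_{1} = \sqrt{\frac{2\, Z_1 Z_2 e^{2}}{E\, d}} .
```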
Typical critical-angle values (at room temperature) are 0.71° for silicon <110>, 0.89° for germanium <100>, and 2.17° for tungsten <100>.
Similar considerations can be made for planar channelling. In this case, the average of the atomic potentials causes the ions to be confined between charged planes, each corresponding to a continuous planar potential.
where the quantities appearing in the expression are the average number of atoms per unit area in the plane, the spacing between crystallographic planes, and y, the distance from the plane. Planar channelling has critical angles that are a factor of 2–4 smaller than the axial analogues and a minimum de-channelled fraction that is greater than in axial channelling, with values around 10–20%, compared with the more than 99% channelled fraction achievable in the axial case. A complete discussion of planar channelling can be found in the references.
General literature
J.W. Mayer and E. Rimini, Ion Beam Handbook for Material Analysis, (1977) Academic Press, New York
L.C. Feldman, J.W. Mayer and S.T.Picraux, Material Analysis by Ion Channelling, (1982) Academic Press, New York
R. Hovden, H. L. Xin, D. A. Muller, Phys. Rev. B 86, 195415 (2012)
G. R. Anstis, D. Q. Cai, and D. J. H. Cockayne, Ultramicroscopy 94, 309 (2003).
D. Van Dyck and J. H. Chen, Solid State Communications 109, 501 (1999).
S. Hillyard and J. Silcox, Ultramicroscopy 58, 6 (1995).
S. J. Pennycook and D. E. Jesson, Physical Review Letters 64, 938 (1990).
M. V. Berry and A. M. Ozorio de Almeida, Journal of Physics A: Mathematical and General 6, 1451 (1973).
M. V. Berry, Journal of Physics C: Solid State Physics 4, 697 (1971).
A. Howie, Philosophical Magazine 14, 223 (1966).
P. B. Hirsch, A. Howie, R. B. Nicholson, D. W. Pashley, and M. Whelan, Electron microscopy of thin crystals (Butterworths London, 1965).
J. U. Andersen, Notes on Channeling, http://phys.au.dk/en/publications/lecture-notes/ (2014)
See also
Emission channeling
Electron channeling pattern
References
External links
CERN NA43 Experiment that investigated interactions of high energy particles with crystals
Note and reports on crystal extraction
The future looks bright for particle channelling on CERN Courier
Experimental particle physics | Channelling (physics) | Physics | 2,359 |
77,135,221 | https://en.wikipedia.org/wiki/Emavusertib | Emavusertib (CA-4948) is a drug which acts as a selective inhibitor of the enzyme Interleukin-1 receptor-associated kinase 4 (IRAK-4) and was developed for the treatment of some forms of cancer.
See also
Zimlovisertib
References
Interleukin-1 receptor-associated kinase 4 inhibitors
Pyridines
Pyrrolidines
4-Morpholinyl compounds
Oxazoles
Carboxamides
Oxazolopyridines | Emavusertib | Chemistry | 104 |
32,293,190 | https://en.wikipedia.org/wiki/Penicillium%20brevicompactum | Penicillium brevicompactum is a mould species in the genus Penicillium.
Mycophenolic acid can be isolated from P. brevicompactum.
References
brevicompactum
Fungi described in 1901
Fungus species | Penicillium brevicompactum | Biology | 54 |
2,739,199 | https://en.wikipedia.org/wiki/Gasworks | A gasworks or gas house is an industrial plant for the production of flammable gas. Many of these have been made redundant in the developed world by the use of natural gas, though they are still used for storage space.
Early gasworks
Coal gas was introduced to Great Britain in the 1790s as an illuminating gas by the Scottish inventor William Murdoch.
Early gasworks were usually located beside a river or canal so that coal could be brought in by barge. Transport was later shifted to railways and many gasworks had internal railway systems with their own locomotives.
Early gasworks were built for factories in the Industrial Revolution from about 1805 as a light source and for industrial processes requiring gas, and for lighting in country houses from about 1845. Country house gas works are extant at Culzean Castle in Scotland and Owlpen in Gloucestershire.
Equipment
A gasworks was divided into several sections for the production, purification and storage of gas.
Retort house
This contained the retorts in which coal was heated to generate the gas. The crude gas was siphoned off and passed on to the condenser. The waste product left in the retort was coke. In many cases the coke was then burned to heat the retorts or sold as smokeless fuel.
Condenser
This consisted of a bank of air-cooled gas pipes over a water-filled sump. Its purpose was to remove tar from the gas by condensing it out as the gas was cooled. Occasionally the condenser pipes were contained in a water tank similar to a boiler but operated in the same manner as the air-cooled variant. The tar produced was then held in a tar well/tank which was also used to store liquor.
Exhauster
An impeller or pump was used to increase the gas pressure before scrubbing. Exhausters were optional components and could be placed anywhere along the purifying process but were most often placed after the condensers and immediately before the gas entered the gas holders.
Scrubber
A sealed tank containing water through which the gas was bubbled. This removed ammonia and ammonium compounds. The water often contained dissolved lime to aid the removal of ammonia. The water left behind was known as ammonical liquor. Other versions used consisted of a tower, packed with coke, down which water was trickled.
Purifier
Also known as an Iron Sponge, this removed hydrogen sulfide from the gas by passing it over wooden trays containing moist ferric oxide. The gas then passed on to the gasholder and the iron sulfide was sold to extract the sulfur. Waste from this process often gave rise to blue billy, a ferrocyanide contaminant in the land which causes problems when trying to redevelop an old gasworks site.
Benzole plant
Often only used at large gasworks sites, a benzole plant consisted of a series of vertical tanks containing petroleum oil through which the gas was bubbled. The purpose of a benzole plant was to extract benzole from the gas. The benzole dissolved into the petroleum oil was run through a steam separating plant to be sold separately.
Gasholder
The gas holder or gasometer was a tank used for storage of the gas and to maintain even pressure in distribution pipes. The gas holder usually consisted of an upturned steel bell contained within a large frame that guided it as it rose and fell depending on the amount of gas it contained.
By-products
The by-products of gas-making, such as coke, coal tar, ammonia and sulfur had many uses. For details, see coal gas.
British gasworks today
Coal gas is no longer made in the UK but many gasworks sites are still used for storage and metering of natural gas and some of the old gasometers are still in use. Fakenham gasworks dating from 1846 is the only complete, non-operational gasworks remaining in England. Other examples exist at Biggar in Scotland and Carrickfergus in Northern Ireland.
Photos of Fakenham Gas Works
Gasworks in popular culture
Gasworks were noted for their foul smell and were generally located in the poorest metropolitan areas. Cultural remnants of gasworks include many streets named Gas Street or Gas Avenue and groups or gangs known as Gas House Gang, such as the 1934 St. Louis Cardinals baseball team. The 1946 film Gas House Kids features children from New York's Gas House District taking on a gang, and spawned two sequels. Ewan MacColl's 1968 song "Dirty Old Town" (about his home town of Salford) famously begins "Found my love by the gaswork croft…" (in cover versions often "I met my love by the gasworks wall…")
Fans of Bristol Rovers F.C. in south west England are known as ‘Gas-Heads’ due to the proximity of gasometers near to their original ground at Eastville in Bristol. Bristol Rovers F.C. is also known as ‘The Gas’.
Railway gasworks
Gas was used for many years to illuminate the interior of railway carriages. The New South Wales Government Railways manufactured its own oil-gas for this purpose, together with reticulated coal-gas to railway stations and associated infrastructure. Such works were established at the Macdonaldtown Carriage Sheds, Newcastle, Bathurst, Junee and Werris Creek. These plants followed on from the works of a private supplier which the railway took over in 1884.
Gas was also transported in special travelling gas reservoir wagons from the gasworks to stationary reservoirs located at a number of country stations where carriage reservoirs were replenished.
With the spreading conversion to electric power for lighting buildings and carriages during the 1920s and 1930s, the railway gasworks were progressively decommissioned.
Gasworks being operated as industrial museums
Gasworks Brisbane, Australia
The Gasworks Newstead site in Brisbane Australia has been a stalwart of the river's edge since its development in 1863. By 1890, the works were supplying gas to Brisbane streets from Toowong to Hamilton and over the next 100 years, it would grow to supply Brisbane city with the latest in gas technology until it was decommissioned in 1996.
In March 1866, the Queensland Defence Force placed an official request for town gas connection, evidence of the vital role the gasworks played in the economic development of colonial Brisbane. In fact, the gasworks were considered to be of such importance, that during World War II, genuine fears of attack from Japanese air raids motivated the installation of anti aircraft guns which vigilantly watched over the plant and its employees throughout the war.
The site itself has been synonymous with economic growth and benefit to Brisbane and Queensland with the success of the gasworks facilitating further development of the Newstead/Teneriffe area to include the James Hardie fibro-cement manufacturing plant, Shell Oil plant, Brisbane Water and Sewerage Depot and even the “Brisbane Gas Company Cookery School” which operated in the 1940s. In 1954, a carbonizing plant was built, giving Brisbane the "most modern gas producing plant in Australia", consuming 100 tonnes of coal every eight hours.
During its golden years in the late 19th and early 20th centuries, the site also played a vital role in providing employment to Aboriginal Australians and to many migrant workers arriving from Europe after the Second World War.
The tradition of the Brisbane Gasworks' economic and employment successes is intended to be preserved by the Teneriffe Gasworks Village Development, which pays homage to the site's history and integrity in its pending urban development.
The gasholder structure at this site is set to become a hub of a new property development on the site – keeping the structural integrity of the pig iron structure. It will be a true reflection of urban renewal embracing its industrial past.
Dunedin Gasworks Museum
Located in South Dunedin, New Zealand, the Dunedin Gasworks Museum consists of a conserved engine house featuring a working boiler house, fitting shop and collection of five stationary steam engines. There are also displays of domestic and industrial gas appliances.
Technopolis (Gazi)
Located in Athens, Greece Technopolis (Gazi) is a gasworks converted to an exhibition space.
The Gas Museum, Leicester
The Gas Museum in Leicester, UK is operated by The National Gas Museum Trust.
Gas Works Park
Gas Works Park is a public park in Seattle, Washington.
Warsaw Gasworks Museum
The Warsaw Gasworks Museum is a museum in Warsaw, Poland.
Museo dell'Acqua e del Gas
The Museo dell'Acqua e del Gas is a museum in Genoa, Italy.
It is located in the industrial area of the IREN company, an Italian multi-utility, where coal gas has been produced till 1972.
The small Museum, managed by Fondazione AMGA, hosts a rich collection of industrial finds, related to water and gas works history.
Hasanpaşa Gasworks
Hasanpaşa Gasworks a 1892-built gasworks in Istanbul, Turkey, which was redeveloped into a museum in 2021.
See also
History of manufactured fuel gases
British Gas plc
References
Chemical plants
Fuel gas
Industrial buildings and structures | Gasworks | Chemistry | 1,812 |
35,915,429 | https://en.wikipedia.org/wiki/TI%20StarterWare | StarterWare was initially developed by TI as a free software package for its ARM Cortex-A8 and Cortex-A9 based microprocessors. Its primary purpose was to offer drivers and libraries with a consistent API tailored to processors within these microprocessor families. The package encompassed utilities and illustrative use cases across various applications. Although TI's active backing has diminished, the software remains available in open-source repositories on GitHub, primarily maintaining support for the widely used BeagleBoard-family boards that use these processors.
This software collection closely corresponds to what many chip manufacturers refer to as a HAL (Hardware Abstraction Layer). In TI's context, it is termed a DAL (Device Abstraction Layer). Its role is to furnish fundamental functionality and an API that an operating system can conveniently adapt to. For those inclined to create bare-metal programs by directly engaging with the StarterWare API, the package also offered documentation and assistance.
Texas Instruments
Embedded systems
System software | TI StarterWare | Technology,Engineering | 203 |
49,250,446 | https://en.wikipedia.org/wiki/Reactive%20carbonyl%20species | Reactive carbonyl species (RCS) are molecules with highly reactive carbonyl groups, often known for their damaging effects on proteins, nucleic acids, and lipids. They are often generated as metabolic products. Important RCS include 3-deoxyglucosone, glyoxal, and methylglyoxal. RCS react with amines and thiol groups, leading to advanced glycation end-products (AGEs). AGEs are indicators of diabetes.
Reactive aldehyde species (RASP), such as malondialdehyde and 4-hydroxynonenal, are a subset of RCS that are implicated in a variety of human diseases.
See also
Reactive oxygen species
Reactive sulfur species
Reactive nitrogen species
References
Molecules
Carbon compounds | Reactive carbonyl species | Physics,Chemistry,Biology | 159 |
63,862,751 | https://en.wikipedia.org/wiki/Erbium%28III%29%20fluoride | Erbium(III) fluoride is the fluoride of erbium, a rare earth metal, with the chemical formula ErF3. It can be used to make infrared light-transmitting materials and up-converting luminescent materials.
Production
Erbium(III) fluoride can be produced by reacting erbium(III) nitrate and ammonium fluoride:
Er(NO3)3 + 3 NH4F → 3 NH4NO3 + ErF3
References
Further reading
Erbium compounds
Fluorides
Lanthanide halides | Erbium(III) fluoride | Chemistry | 119 |
70,855,725 | https://en.wikipedia.org/wiki/Goyang%20%28fermented%20food%29 | Goyang is a fermented, lightly acidic vegetable food of the Himalayan Sherpa people of Sikkim state and the Darjeeling hills of India, and of Nepal. It is prepared during the summer monsoon season, when the leaves of the wild plant Cardamine macrophylla Willd. (local name magane-saag), belonging to the family Brassicaceae, are abundantly available for picking on the surrounding hillsides.
Preparation
The magane-saag leaves are collected, washed, cut, drained, and pressed into bamboo baskets lined with local fig leaves. The baskets are covered with more fig leaves and stored at room temperature for nearly a month, allowing the magane-saag leaves to ferment. The goyang is now ready and transferred to airtight containers where it is stored for two or three months. If the fermented goyang is shaped into tightly-pressed balls and dried in the sun for several days, its shelf life may be extended.
Culinary practice
Goyang is most commonly prepared in Sherpa homes, there being no reports of its sale in the markets. It is generally boiled with yak meat or beef, along with noodles, to make a thukpa of heavy consistency, a regularly eaten Sherpa food.
Notes
Bibliography
Darjeeling
Indian cuisine
Nepalese cuisine
Fermented foods | Goyang (fermented food) | Biology | 273 |
2,070,045 | https://en.wikipedia.org/wiki/Heun%20function | In mathematics, the local Heun function is the solution of Heun's differential equation that is holomorphic and 1 at the singular point z = 0. The local Heun function is called a Heun function, denoted Hf, if it is also regular at z = 1, and is called a Heun polynomial, denoted Hp, if it is regular at all three finite singular points z = 0, 1, a.
Heun's equation
Heun's equation is a second-order linear ordinary differential equation (ODE) of the form
The condition is taken so that the characteristic exponents for the regular singularity at infinity are α and β (see below).
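The equation itself did not survive extraction here. For reference, the standard form of Heun's general equation and the accompanying Fuchsian condition on the exponents are reproduced below; the notation matches the parameters named in the surrounding text, but this is a reconstruction of the standard form rather than a quotation of the original article.

```latex
% Standard form of Heun's general equation (reconstructed for reference):
\frac{d^{2}w}{dz^{2}}
  + \left[\frac{\gamma}{z} + \frac{\delta}{z-1} + \frac{\epsilon}{z-a}\right]\frac{dw}{dz}
  + \frac{\alpha\beta z - q}{z(z-1)(z-a)}\, w = 0 ,
% with the condition ensuring that the exponents at infinity are alpha and beta:
\epsilon = \alpha + \beta - \gamma - \delta + 1 .
```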
The complex number q is called the accessory parameter. Heun's equation has four regular singular points: 0, 1, a and ∞ with exponents (0, 1 − γ), (0, 1 − δ), (0, 1 − ϵ), and (α, β). Every second-order linear ODE on the extended complex plane with at most four regular singular points, such as the Lamé equation or the hypergeometric differential equation, can be transformed into this equation by a change of variable.
Coalescence of various regular singularities of the Heun equation into irregular singularities give rise to several confluent forms of the equation, as shown in the table below.
{| class="wikitable"
|+Forms of the Heun Equation
|-
! Form !! Singularities !! Equation
|-
| General
| 0, 1, a, ∞
|
|-
| Confluent
| 0, 1, ∞ (irregular, rank 1)
|
|-
| Doubly Confluent
| 0 (irregular, rank 1), ∞ (irregular, rank 1)
|
|-
| Biconfluent
| 0, ∞ (irregular, rank 2)
|
|-
| Triconfluent
| ∞ (irregular, rank 3)
|
|}
q-analog
The q-analog of Heun's equation was discovered by Hahn (1971) and has been studied further by later authors.
Symmetries
Heun's equation has a group of symmetries of order 192, isomorphic to the Coxeter group of the Coxeter diagram D4, analogous to the 24 symmetries of the hypergeometric differential equations obtained by Kummer.
The symmetries fixing the local Heun function form a group of order 24 isomorphic to the symmetric group on 4 points, so there are 192/24 = 8 = 2 × 4 essentially different solutions given by acting on the local Heun function by these symmetries, which give solutions for each of the 2 exponents for each of the 4 singular points. The complete list of 192 symmetries was obtained using machine calculation. Several previous attempts by various authors to list these by hand contained many errors and omissions; for example, most of the 48 local solutions listed by Heun contain serious errors.
See also
Heine–Stieltjes polynomials, a generalization of Heun polynomials.
References
A. Erdélyi, F. Oberhettinger, W. Magnus and F. Tricomi, Higher Transcendental Functions, vol. 3 (McGraw Hill, NY, 1953).
Hahn, W. (1971). On linear geometric difference equations with accessory parameters. Funkcial. Ekvac., 14, 73–78.
Ordinary differential equations
Special functions | Heun function | Mathematics | 705 |
1,200,360 | https://en.wikipedia.org/wiki/Panel%C3%A1k | Panelák is a colloquial term in Czech and Slovak for a panel building constructed of pre-fabricated, pre-stressed concrete using a large-panel system, such as those extant in the former Czechoslovakia (now the Czech Republic and Slovakia) and elsewhere in the world. Paneláks are usually located in housing estates (Czech sídliště, Slovak sídlisko).
The word panelák (plural paneláky) is derived from the standard panelový dům or panelový dom, meaning, literally, "panel house / prefabricated-sections house". The term panelák is used mainly for elongated blocks made up of several sections with separate entrances – simple panel tower blocks are called "věžový dům" (tower house) or colloquially "věžák". The buildings remain a towering, highly visible reminder of the Communist era. The term panelák refers specifically to buildings in the former Czechoslovakia; however, similar buildings were a common feature of urban planning in communist countries and even in the West.
History
Interwar Czechoslovakia saw many constructivist architects in the country, such as Vladimír Karfík and František Lydie Gahura, many of whom would remain prominent following the communist takeover of 1948. In the years following 1948, the Czechoslovak architectural scene favored Stalinist architecture over more modern styles. However, a 1954 speech by Nikita Khrushchev encouraging the construction of panel buildings, coupled with post-war housing shortages faced throughout both eastern and western Europe, encouraged the country's architects to construct more simplistic, modernist buildings. Throughout the mid-1950s, the country's designers applied a modernist aesthetic known as the Brussels style, named after the international attention it attracted during the 1958 World's Fair held in Brussels. By the late 1960s, the country's paneláks often reached up to 16 stories in height.
Between 1959 and 1995, paneláks containing 1.17 million flats were built in what is now the Czech Republic. As of 2005, they housed about 3.5 million people, or about one-third of the country's population.
In Prague and other large cities, most paneláks were built in a type of housing estate known as a sídliště or sídlisko . Such housing developments now dominate large parts of Prague, Bratislava and other cities and towns. The first such housing development built in Prague was Petřiny in the 1950s; the largest in Prague is Jižní Město (about 100,000 inhabitants), with 200 buildings and 30,000 flats built since the 1970s. The Slovak Petržalka however, is the largest such housing development in Central Europe, with its population exceeding 110,000.
Following the Velvet Revolution in 1989, there was widespread speculation that the country's paneláks would fall out of favor due to their simplicity and small size. The Czech and Slovak governments sold individual panelák apartments to their tenants at low prices, furthering speculation that the apartments would become undesirable. However, these fears have not materialized.
Characteristics
A typical panelák apartment has a foyer, bathroom, kitchen, a living room (also used for dining), and a bedroom. All paneláks in the Czech Republic were constructed to follow one of sixteen design patterns.
Paneláks have been criticized for their simplistic design, poor-quality building materials, and their tendency to become overcrowded. In 1990, Václav Havel, who was then the president of Czechoslovakia, called paneláks "undignified rabbit pens, slated for liquidation". Panelák housing estates as a whole are said by some to be mere bedroom communities with few conveniences and even less character.
However, paneláks have also been praised by many. Upon their introduction, paneláks offered more reliable heating, hot water, and plumbing than existing buildings, especially those in rural locations. The buildings typically offered large amounts of natural light, compared to their older counterparts.
Today
Paneláks remain commonplace today, and have attracted a wide diversity of social classes. Fears that paneláks would become undesirable and be subject to middle class flight, commonplace following the Velvet Revolution, have not materialized. Panelák apartments have risen in value more than brick apartments, have been praised for housing people from a wide variety of incomes, and have been subject to a number of positive cultural depictions including magazines and TV shows.
Areas with high shares of its population living in paneláks include the city of Karviná (where approximately 97% of people live in them), Petržalka, and the city of Most (approx. 80%). Most's historical city was largely torn down due to the spread of coal mining and the majority of its population was moved into paneláks.
Amenities
Renovations
In March 2005, the director of the Czech Ministry of Regional Development expressed concerns that the country's paneláks were near the end of their lifespan, citing an increasing number of structural incidents. He estimated that his agency would need 400 billion Czech koruna to modernize paneláks in the Czech Republic, and 1.5 trillion to tear them down entirely.
In recent years, many paneláks have been repainted, renovated, and repaired if needed, with funding mainly from the government, partially thanks to funds from the European Union. A sizable renovation market has formed in recent years, and even a home magazine, Panel Plus, exists to give renovators ideas.
Ownership
Following the Velvet Revolution, most panelák apartments were sold to their tenants at low cost. Many panelák flats are now the property of their inhabitants, though they are also rented out through real estate agents and private landlords. The buildings and surrounding areas are often owned and managed by the government, administrative divisions, housing cooperatives, authorities, self-governing (non-profit) organizations, the owners of individual apartments (individual blocks), and/or through public–private partnerships and the like, or a combination thereof.
Other countries
Buildings similar to paneláks were built also in other communist countries, and they are a common feature of cityscapes across Central and Eastern Europe, and to some degree Northern Europe.
In Bulgaria, buildings similar to paneláks are colloquially known as "panelki", and are the predominant type of en masse housing throughout the country. In Hungary, similar buildings are called panelház. In Poland, they are called "bloki" (blocks), or "wielka płyta" (the great panel). In Germany they are known as Plattenbau. Most buildings in Soviet-era Microdistricts are panel buildings.
In the European Union, among former communist countries, a majority of the population lives in flats in Latvia (65.1%), Estonia (63.8%), Lithuania (58.4%), Czech Republic (52.8%) and Slovakia (50.3%) (as of 2014, data from Eurostat). However, not all flat dwellers in Eastern Europe live in Communist era blocks of flats; many live in buildings constructed after the fall of communism, and some in buildings surviving from the era before communism.
In the United States, some housing estates have buildings that are similar to paneláks or are built from the same or similar material.
Popular culture
The film Panelstory by Věra Chytilová shows the lives of several inhabitants of a real, unfinished communist-era apartment block. It was awarded the Grand Prize at San Remo in 1980.
Béla Tarr's film Panelkapcsolat tells a doomed love story set in a similar housing estate in Hungary. Special Mention at the 1982 Locarno Film Festival.
Polish director Krzysztof Kieślowski's celebrated Dekalog series is set in a wielka płyta housing estate in Warsaw, Poland.
The long-running Slovak soap opera Panelák focused on the residents of a single block in Bratislava.
Other popular TV series set largely within the confines of a panelák include the long running sitcom Susedia (Neighbours), focusing on the relationships between the ethnically Slovak and Slovak-Hungarian families living within the building, as well as some episodes of the stop-motion animation series Pat & Mat.
See also
Sídlisko
Khrushchevka & Brezhnevka (former Soviet Union)
Panelház (Hungary)
LPS (Germany)
HLM (France)
Million Programme (Sweden)
Brutalist architecture
Housing estate
Affordable housing
Subsidized housing
Public housing
References
Bibliography
Stankova, Jaroslava, et al. (1992) Prague: Eleven Centuries of Architecture. Prague: PAV. .
Zarecor, Kimberly Elman (2011) Manufacturing a Socialist Modernity: Housing in Czechoslovakia, 1945–1960. Pittsburgh: University of Pittsburgh Press. .
Chánov case study
Notes
External links
Website about paneláks in the Czech Republic
Czech words and phrases
Slovak words and phrases
Economy of Czechoslovakia
Urban planning in the Czech Republic
Architecture in the Czech Republic
Prefabricated buildings
Concrete buildings and structures
Human settlement
Human habitats
Housing
House types | Panelák | Engineering | 1,813 |
21,014,698 | https://en.wikipedia.org/wiki/Sony%20Ericsson%20W380 | Sony Ericsson W380 is a mobile phone that belongs to the Walkman series. The phone's camera offers multiple effects, such as negative, black-and-white, and sepia.
External links
Tri-band clamshell Walkman phone with 3D gaming
Phones - The best of Sony
W380
Mobile phones introduced in 2007 | Sony Ericsson W380 | Technology | 67 |
51,725,745 | https://en.wikipedia.org/wiki/NGC%20245 | NGC 245 is a spiral galaxy located in the constellation Cetus. It was discovered on October 1, 1785 by William Herschel.
References
External links
0245
Spiral galaxies
Cetus
00476
Markarian galaxies
002691 | NGC 245 | Astronomy | 47 |
153,221 | https://en.wikipedia.org/wiki/Heat%20exchanger | A heat exchanger is a system used to transfer heat between a source and a working fluid. Heat exchangers are used in both cooling and heating processes. The fluids may be separated by a solid wall to prevent mixing or they may be in direct contact. They are widely used in space heating, refrigeration, air conditioning, power stations, chemical plants, petrochemical plants, petroleum refineries, natural-gas processing, and sewage treatment. The classic example of a heat exchanger is found in an internal combustion engine in which a circulating fluid known as engine coolant flows through radiator coils and air flows past the coils, which cools the coolant and heats the incoming air. Another example is the heat sink, which is a passive heat exchanger that transfers the heat generated by an electronic or a mechanical device to a fluid medium, often air or a liquid coolant.
Flow arrangement
There are three primary classifications of heat exchangers according to their flow arrangement. In parallel-flow heat exchangers, the two fluids enter the exchanger at the same end, and travel in parallel to one another to the other side. In counter-flow heat exchangers the fluids enter the exchanger from opposite ends. The counter-current design is the most efficient, in that it can transfer the most heat from the heat (transfer) medium per unit mass, because the average temperature difference along any unit length is higher. See countercurrent exchange. In a cross-flow heat exchanger, the fluids travel roughly perpendicular to one another through the exchanger.
For efficiency, heat exchangers are designed to maximize the surface area of the wall between the two fluids, while minimizing resistance to fluid flow through the exchanger. The exchanger's performance can also be affected by the addition of fins or corrugations in one or both directions, which increase surface area and may channel fluid flow or induce turbulence.
The driving temperature across the heat transfer surface varies with position, but an appropriate mean temperature can be defined. In most simple systems this is the "log mean temperature difference" (LMTD). Sometimes direct knowledge of the LMTD is not available and the NTU method is used.
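As an illustration of how the LMTD is used in practice, the short Python sketch below computes the log mean temperature difference from the two terminal temperature differences and the corresponding heat duty Q = U·A·LMTD. The numerical values (temperatures, U, A) and the function name are purely illustrative assumptions, not data for any particular exchanger.

```python
import math

def lmtd(delta_t1: float, delta_t2: float) -> float:
    """Log mean temperature difference from the two terminal temperature
    differences (hot end and cold end), in kelvin or degrees Celsius."""
    if delta_t1 <= 0 or delta_t2 <= 0:
        raise ValueError("Temperature differences must be positive.")
    if math.isclose(delta_t1, delta_t2):
        return delta_t1  # limit of the formula when both differences are equal
    return (delta_t1 - delta_t2) / math.log(delta_t1 / delta_t2)

# Illustrative counter-flow example (assumed values):
# hot stream 150 -> 90 degC, cold stream 30 -> 70 degC
dt_hot_end = 150 - 70    # temperature difference at the hot-fluid inlet end
dt_cold_end = 90 - 30    # temperature difference at the hot-fluid outlet end
U = 500.0                # assumed overall heat transfer coefficient, W/(m^2 K)
A = 12.0                 # assumed heat transfer area, m^2

Q = U * A * lmtd(dt_hot_end, dt_cold_end)   # heat duty in watts
print(f"LMTD = {lmtd(dt_hot_end, dt_cold_end):.1f} K, Q = {Q/1000:.1f} kW")
```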
Types
By maximum operating temperature, heat exchangers can be divided into low-temperature and high-temperature ones. The former work up to 500–650°C depending on the industry and generally don't require special design and material considerations. The latter work up to 1000 or even 1400°C.
Double pipe heat exchangers are the simplest exchangers used in industry. On one hand, these heat exchangers are cheap to both design and maintain, making them a good choice for small industries. On the other hand, their low efficiency, coupled with the large space they occupy at large scales, has led modern industries to use more efficient heat exchangers such as shell and tube or plate types. However, since double pipe heat exchangers are simple, they are used to teach heat exchanger design basics to students, as the fundamental rules for all heat exchangers are the same.
1. Double-pipe heat exchanger
When one fluid flows through the smaller pipe, the other flows through the annular gap between the two pipes. These flows may be parallel or counter-flows in a double pipe heat exchanger.
(a) Parallel flow, where both hot and cold liquids enter the heat exchanger from the same side, flow in the same direction and exit at the same end. This configuration is preferable when the two fluids are intended to reach exactly the same temperature, as it reduces thermal stress and produces a more uniform rate of heat transfer.
(b) Counter-flow, where hot and cold fluids enter opposite sides of the heat exchanger, flow in opposite directions, and exit at opposite ends. This configuration is preferable when the objective is to maximize heat transfer between the fluids, as it creates a larger temperature differential when used under otherwise similar conditions.
The figure above illustrates the parallel-flow and counter-flow directions of the fluids in the exchanger; a numerical comparison of the two arrangements is sketched below.
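To make the difference between the two arrangements concrete, this sketch compares their thermal effectiveness using the standard effectiveness–NTU relations; the chosen NTU and heat-capacity-rate ratio are arbitrary illustrative values, and the function names are assumptions rather than any established library API.

```python
import math

def effectiveness_parallel(ntu: float, cr: float) -> float:
    """Effectiveness of a parallel-flow exchanger (cr = Cmin/Cmax)."""
    return (1 - math.exp(-ntu * (1 + cr))) / (1 + cr)

def effectiveness_counter(ntu: float, cr: float) -> float:
    """Effectiveness of a counter-flow exchanger (cr = Cmin/Cmax)."""
    if math.isclose(cr, 1.0):
        return ntu / (1 + ntu)  # special case for balanced streams
    return (1 - math.exp(-ntu * (1 - cr))) / (1 - cr * math.exp(-ntu * (1 - cr)))

# Illustrative values: NTU = 3, heat-capacity-rate ratio = 0.8
ntu, cr = 3.0, 0.8
print(f"parallel flow : {effectiveness_parallel(ntu, cr):.2f}")
print(f"counter flow  : {effectiveness_counter(ntu, cr):.2f}")
# The counter-flow value is the larger of the two for the same NTU and cr,
# which is why counter-flow is described above as the more efficient arrangement.
```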
2. Shell-and-tube heat exchanger
In a shell-and-tube heat exchanger, two fluids at different temperatures flow through the heat exchanger. One of the fluids flows through the tube side and the other fluid flows outside the tubes, but inside the shell (shell side).
Baffles are used to support the tubes, direct the fluid flow across the tubes in an approximately normal (perpendicular) manner, and maximize the turbulence of the shell-side fluid. There are many kinds of baffles, and the choice of baffle form, spacing, and geometry depends on the allowable shell-side flow rate and pressure drop, the need for tube support, and flow-induced vibrations. There are several variations of shell-and-tube exchangers available; the differences lie in the arrangement of flow configurations and in the details of construction.
In application to cool air with shell-and-tube technology (such as intercooler / charge air cooler for combustion engines), fins can be added on the tubes to increase heat transfer area on air side and create a tubes & fins configuration.
3. Plate Heat Exchanger
A plate heat exchanger contains a number of thin, shaped heat transfer plates bundled together. The gasket arrangement of each pair of plates provides two separate channel systems. Each pair of plates forms a channel through which the fluid can flow. The pairs are attached by welding and bolting methods. The following shows the components in the heat exchanger.
In single channels, the configuration of the gaskets enables flow through the channel, allowing the main and secondary media to flow in counter-current. A gasketed plate heat exchanger has a heat-transfer region formed from corrugated plates. The gaskets act as seals between the plates and are located between the frame and pressure plates. Fluid flows in a counter-current direction throughout the heat exchanger, producing efficient thermal performance. Plates are produced in different depths, sizes and corrugated shapes. Different types are available, including plate-and-frame, plate-and-shell and spiral plate heat exchangers. The distribution area guarantees the flow of fluid to the whole heat transfer surface, which helps to prevent stagnant areas that can cause accumulation of unwanted material on solid surfaces. High flow turbulence between the plates results in a greater transfer of heat and a decrease in pressure.
4. Condensers and Boilers
Heat exchangers using a two-phase heat transfer system are condensers, boilers and evaporators. Condensers are devices that take hot gas or vapor and cool it to the point of condensation, transforming the gas into a liquid. The transformation of a liquid into a gas is called vaporization, and the reverse is called condensation. The surface condenser is the most common type of condenser, and it includes a water supply device. Figure 5 below displays a two-pass surface condenser.
The pressure of the steam at the turbine outlet is low, the steam density is very low, and the flow rate is very high. To prevent a decrease in pressure as the steam moves from the turbine to the condenser, the condenser unit is placed underneath and connected to the turbine. Inside the tubes the cooling water runs in parallel, while the steam moves vertically downward from the wide opening at the top and travels through the tubes.
Boilers were one of the earliest applications of heat exchangers. The term steam generator is regularly used for a boiler unit in which a hot liquid stream, rather than combustion products, is the heat source. Boilers are manufactured in a range of dimensions and configurations; some produce only hot fluid, while others are designed for steam production.
Shell and tube
Shell and tube heat exchangers consist of a series of tubes which contain fluid that must be either heated or cooled. A second fluid runs over the tubes that are being heated or cooled so that it can either provide the heat or absorb the heat required. A set of tubes is called the tube bundle and can be made up of several types of tubes: plain, longitudinally finned, etc. Shell and tube heat exchangers are typically used for high-pressure applications (with pressures greater than 30 bar and temperatures greater than 260 °C). This is because the shell and tube heat exchangers are robust due to their shape. Several thermal design features must be considered when designing the tubes in the shell and tube heat exchangers:
There can be many variations on the shell and tube design. Typically, the ends of each tube are connected to plenums (sometimes called water boxes) through holes in tubesheets. The tubes may be straight or bent in the shape of a U, called U-tubes.
Tube diameter: Using a small tube diameter makes the heat exchanger both economical and compact. However, the heat exchanger is more likely to foul up faster and the small size makes mechanical cleaning of the fouling difficult. To overcome the fouling and cleaning problems, larger tube diameters can be used. Thus, to determine the tube diameter, the available space, cost and fouling nature of the fluids must be considered.
Tube thickness: The thickness of the wall of the tubes is usually determined to ensure:
There is enough room for corrosion
That flow-induced vibration has resistance
Axial strength
Availability of spare parts
Hoop strength (to withstand internal tube pressure)
Buckling strength (to withstand overpressure in the shell)
Tube length: heat exchangers are usually cheaper when they have a smaller shell diameter and a long tube length. Thus, typically there is an aim to make the heat exchanger as long as physically possible whilst not exceeding production capabilities. However, there are many limitations for this, including space available at the installation site and the need to ensure tubes are available in lengths that are twice the required length (so they can be withdrawn and replaced). Also, long, thin tubes are difficult to take out and replace.
Tube pitch: when designing the tubes, it is practical to ensure that the tube pitch (i.e., the centre-centre distance of adjoining tubes) is not less than 1.25 times the tubes' outside diameter. A larger tube pitch leads to a larger overall shell diameter, which leads to a more expensive heat exchanger.
Tube corrugation: this type of tubes, mainly used for the inner tubes, increases the turbulence of the fluids and the effect is very important in the heat transfer giving a better performance.
Tube Layout: refers to how tubes are positioned within the shell. There are four main types of tube layout, which are, triangular (30°), rotated triangular (60°), square (90°) and rotated square (45°). The triangular patterns are employed to give greater heat transfer as they force the fluid to flow in a more turbulent fashion around the piping. Square patterns are employed where high fouling is experienced and cleaning is more regular.
Baffle Design: baffles are used in shell and tube heat exchangers to direct fluid across the tube bundle. They run perpendicularly to the shell and hold the bundle, preventing the tubes from sagging over a long length. They can also prevent the tubes from vibrating. The most common type of baffle is the segmental baffle. The semicircular segmental baffles are oriented at 180 degrees to the adjacent baffles, forcing the fluid to flow upward and downward through the tube bundle. Baffle spacing is of large thermodynamic concern when designing shell and tube heat exchangers. Baffles must be spaced with consideration for the trade-off between pressure drop and heat transfer. For thermo-economic optimization it is suggested that the baffles be spaced no closer than 20% of the shell's inner diameter. Having baffles spaced too closely causes a greater pressure drop because of flow redirection. Conversely, having the baffles spaced too far apart means that there may be cooler spots in the corners between baffles. It is also important to ensure the baffles are spaced close enough that the tubes do not sag. The other main type of baffle is the disc and doughnut baffle, which consists of two concentric baffles. An outer, wider baffle looks like a doughnut, whilst the inner baffle is shaped like a disk. This type of baffle forces the fluid to pass around each side of the disk then through the doughnut baffle, generating a different type of fluid flow.
Tubes & fins Design: in application to cool air with shell-and-tube technology (such as intercooler / charge air cooler for combustion engines), the difference in heat transfer between air and cold fluid can be such that there is a need to increase heat transfer area on air side. For this function fins can be added on the tubes to increase heat transfer area on air side and create a tubes & fins configuration.
Fixed tube liquid-cooled heat exchangers especially suitable for marine and harsh applications can be assembled with brass shells, copper tubes, brass baffles, and forged brass integral end hubs. (See: Copper in heat exchangers).
Plate
Another type of heat exchanger is the plate heat exchanger. These exchangers are composed of many thin, slightly separated plates that have very large surface areas and small fluid flow passages for heat transfer. Advances in gasket and brazing technology have made the plate-type heat exchanger increasingly practical. In HVAC applications, large heat exchangers of this type are called plate-and-frame; when used in open loops, these heat exchangers are normally of the gasket type to allow periodic disassembly, cleaning, and inspection. There are many types of permanently bonded plate heat exchangers, such as dip-brazed, vacuum-brazed, and welded plate varieties, and they are often specified for closed-loop applications such as refrigeration. Plate heat exchangers also differ in the types of plates that are used, and in the configurations of those plates. Some plates may be stamped with "chevron", dimpled, or other patterns, where others may have machined fins and/or grooves.
When compared to shell and tube exchangers, the stacked-plate arrangement typically has lower volume and cost. Another difference between the two is that plate exchangers typically serve low to medium pressure fluids, compared to medium and high pressures of shell and tube. A third and important difference is that plate exchangers employ more countercurrent flow rather than cross current flow, which allows lower approach temperature differences, high temperature changes, and increased efficiencies.
Plate and shell
A third type of heat exchanger is a plate and shell heat exchanger, which combines plate heat exchanger and shell and tube heat exchanger technologies. The heart of the heat exchanger contains a fully welded circular plate pack made by pressing and cutting round plates and welding them together. Nozzles carry flow in and out of the platepack (the 'Plate side' flowpath). The fully welded platepack is assembled into an outer shell that creates a second flowpath (the 'Shell side'). Plate and shell technology offers high heat transfer, high pressure, high operating temperature, compact size, low fouling and close approach temperature. In particular, it dispenses entirely with gaskets, which provides security against leakage at high pressures and temperatures.
Adiabatic wheel
A fourth type of heat exchanger uses an intermediate fluid or solid store to hold heat, which is then moved to the other side of the heat exchanger to be released. Two examples of this are adiabatic wheels, which consist of a large wheel with fine threads rotating through the hot and cold fluids, and fluid heat exchangers.
Plate fin
This type of heat exchanger uses "sandwiched" passages containing fins to increase the effectiveness of the unit. The designs include crossflow and counterflow coupled with various fin configurations such as straight fins, offset fins and wavy fins.
Plate and fin heat exchangers are usually made of aluminum alloys, which provide high heat transfer efficiency. The material enables the system to operate at a lower temperature difference and reduce the weight of the equipment. Plate and fin heat exchangers are mostly used for low temperature services such as natural gas, helium and oxygen liquefaction plants, air separation plants and transport industries such as motor and aircraft engines.
Advantages of plate and fin heat exchangers:
High heat transfer efficiency especially in gas treatment
Larger heat transfer area
Approximately 5 times lighter in weight than that of shell and tube heat exchanger.
Able to withstand high pressure
Disadvantages of plate and fin heat exchangers:
Might cause clogging as the pathways are very narrow
Difficult to clean the pathways
Aluminium alloys are susceptible to Mercury Liquid Embrittlement Failure
Finned tube
The usage of fins in a tube-based heat exchanger is common when one of the working fluids is a low-pressure gas, and is typical for heat exchangers that operate using ambient air, such as automotive radiators and HVAC air condensers. Fins dramatically increase the surface area with which heat can be exchanged, which improves the efficiency of conducting heat to a fluid with very low thermal conductivity, such as air. The fins are typically made from aluminium or copper since they must conduct heat from the tube along the length of the fins, which are usually very thin.
The main construction types of finned tube exchangers are:
A stack of evenly-spaced metal plates act as the fins and the tubes are pressed through pre-cut holes in the fins, good thermal contact usually being achieved by deformation of the fins around the tube. This is typical construction for HVAC air coils and large refrigeration condensers.
Fins are spiral-wound onto individual tubes as a continuous strip, the tubes can then be assembled in banks, bent in a serpentine pattern, or wound into large spirals.
Zig-zag metal strips are sandwiched between flat rectangular tubes, often being soldered or brazed together for good thermal and mechanical strength. This is common in low-pressure heat exchangers such as water-cooling radiators. Regular flat tubes will expand and deform if exposed to high pressures but flat microchannel tubes allow this construction to be used for high pressures.
Stacked-fin or spiral-wound construction can be used for the tubes inside shell-and-tube heat exchangers when high efficiency thermal transfer to a gas is required.
In electronics cooling, heat sinks, particularly those using heat pipes, can have a stacked-fin construction.
Pillow plate
A pillow plate heat exchanger is commonly used in the dairy industry for cooling milk in large direct-expansion stainless steel bulk tanks. Nearly the entire surface area of a tank can be integrated with this heat exchanger, without gaps that would occur between pipes welded to the exterior of the tank. Pillow plates can also be constructed as flat plates that are stacked inside a tank. The relatively flat surface of the plates allows easy cleaning, especially in sterile applications.
The pillow plate can be constructed using either a thin sheet of metal welded to the thicker surface of a tank or vessel, or two thin sheets welded together. The surface of the plate is welded with a regular pattern of dots or a serpentine pattern of weld lines. After welding the enclosed space is pressurised with sufficient force to cause the thin metal to bulge out around the welds, providing a space for heat exchanger liquids to flow, and creating a characteristic appearance of a swelled pillow formed out of metal.
Waste heat recovery units
A waste heat recovery unit (WHRU) is a heat exchanger that recovers heat from a hot gas stream while transferring it to a working medium, typically water or oils. The hot gas stream can be the exhaust gas from a gas turbine or a diesel engine or a waste gas from industry or refinery.
Large systems with high volume and temperature gas streams, typical in industry, can benefit from steam Rankine cycle (SRC) in a waste heat recovery unit, but these cycles are too expensive for small systems. The recovery of heat from low temperature systems requires different working fluids than steam.
An organic Rankine cycle (ORC) waste heat recovery unit can be more efficient at low temperature range using refrigerants that boil at lower temperatures than water. Typical organic refrigerants are ammonia, pentafluoropropane (R-245fa and R-245ca), and toluene.
The refrigerant is boiled by the heat source in the evaporator to produce super-heated vapor. This fluid is expanded in the turbine to convert thermal energy to kinetic energy, that is converted to electricity in the electrical generator. This energy transfer process decreases the temperature of the refrigerant that, in turn, condenses. The cycle is closed and completed using a pump to send the fluid back to the evaporator.
Dynamic scraped surface
Another type of heat exchanger is called "(dynamic) scraped surface heat exchanger". This is mainly used for heating or cooling with high-viscosity products, crystallization processes, evaporation and high-fouling applications. Long running times are achieved due to the continuous scraping of the surface, thus avoiding fouling and achieving a sustainable heat transfer rate during the process.
Phase-change
In addition to heating up or cooling down fluids in just a single phase, heat exchangers can be used either to heat a liquid to evaporate (or boil) it or used as condensers to cool a vapor and condense it to a liquid. In chemical plants and refineries, reboilers used to heat incoming feed for distillation towers are often heat exchangers.
Distillation set-ups typically use condensers to condense distillate vapors back into liquid.
Power plants that use steam-driven turbines commonly use heat exchangers to boil water into steam. Heat exchangers or similar units for producing steam from water are often called boilers or steam generators.
In the nuclear power plants called pressurized water reactors, special large heat exchangers pass heat from the primary (reactor plant) system to the secondary (steam plant) system, producing steam from water in the process. These are called steam generators. All fossil-fueled and nuclear power plants using steam-driven turbines have surface condensers to convert the exhaust steam from the turbines into condensate (water) for re-use.
To conserve energy and cooling capacity in chemical and other plants, regenerative heat exchangers can transfer heat from a stream that must be cooled to another stream that must be heated, such as distillate cooling and reboiler feed pre-heating.
This term can also refer to heat exchangers that contain a material within their structure that has a change of phase. This is usually a solid to liquid phase due to the small volume difference between these states. This change of phase effectively acts as a buffer because it occurs at a constant temperature but still allows for the heat exchanger to accept additional heat. One example where this has been investigated is for use in high power aircraft electronics.
Heat exchangers functioning in multiphase flow regimes may be subject to the Ledinegg instability.
Direct contact
Direct contact heat exchangers involve heat transfer between hot and cold streams of two phases in the absence of a separating wall. Thus such heat exchangers can be classified as:
Gas – liquid
Immiscible liquid – liquid
Solid-liquid or solid – gas
Most direct contact heat exchangers fall under the Gas – Liquid category, where heat is transferred between a gas and liquid in the form of drops, films or sprays.
Such types of heat exchangers are used predominantly in air conditioning, humidification, industrial hot water heating, water cooling and condensing plants.
Microchannel
Microchannel heat exchangers are multi-pass parallel flow heat exchangers consisting of three main elements: manifolds (inlet and outlet), multi-port tubes with hydraulic diameters smaller than 1 mm, and fins. All the elements are usually brazed together using a controlled-atmosphere brazing process. Microchannel heat exchangers are characterized by high heat transfer rates, low refrigerant charges, compact size, and lower airside pressure drops compared to finned tube heat exchangers. Microchannel heat exchangers are widely used in the automotive industry as car radiators, and as condensers, evaporators, and cooling/heating coils in the HVAC industry.
Micro heat exchangers, micro-scale heat exchangers, or microstructured heat exchangers are heat exchangers in which at least one fluid flows in lateral confinements with typical dimensions below 1 mm. The most typical such confinements are microchannels, which are channels with a hydraulic diameter below 1 mm. Microchannel heat exchangers can be made from metal or ceramics. Microchannel heat exchangers can be used for many applications including:
high-performance aircraft gas turbine engines
heat pumps
Microprocessor and microchip cooling
air conditioning
HVAC and refrigeration air coils
One of the widest uses of heat exchangers is for refrigeration and air conditioning. This class of heat exchangers is commonly called air coils, or just coils due to their often-serpentine internal tubing, or condensers in the case of refrigeration, and are typically of the finned tube type. Liquid-to-air, or air-to-liquid HVAC coils are typically of modified crossflow arrangement. In vehicles, heat coils are often called heater cores.
On the liquid side of these heat exchangers, the common fluids are water, a water-glycol solution, steam, or a refrigerant. For heating coils, hot water and steam are the most common, and this heated fluid is supplied by boilers, for example. For cooling coils, chilled water and refrigerant are most common. Chilled water is supplied from a chiller that is potentially located very far away, but refrigerant must come from a nearby condensing unit. When a refrigerant is used, the cooling coil is the evaporator, and the heating coil is the condenser in the vapor-compression refrigeration cycle. HVAC coils that use this direct-expansion of refrigerants are commonly called DX coils. Some DX coils are "microchannel" type.
On the air side of HVAC coils a significant difference exists between those used for heating, and those for cooling. Due to psychrometrics, air that is cooled often has moisture condensing out of it, except with extremely dry air flows. Heating some air increases that airflow's capacity to hold water. So heating coils need not consider moisture condensation on their air-side, but cooling coils must be adequately designed and selected to handle their particular latent (moisture) as well as the sensible (cooling) loads. The water that is removed is called condensate.
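For a cooling coil, the sensible and latent parts of the load can be estimated from the air mass flow, the dry-bulb temperature drop, and the humidity-ratio change. The sketch below is a minimal illustration using commonly quoted property values (specific heat of moist air about 1.006 kJ/kg.K, latent heat of vaporization about 2501 kJ/kg); the flow and state numbers are assumptions, not data from this article.
# Minimal sketch: sensible and latent cooling loads on the air side of a coil.
# cp and h_fg are nominal property values; flow and state numbers are examples.
CP_AIR_KJ_PER_KG_K = 1.006   # specific heat of air, kJ/(kg*K)
H_FG_KJ_PER_KG = 2501.0      # latent heat of vaporization of water, kJ/kg
def sensible_load_kw(m_dot_air: float, t_in_c: float, t_out_c: float) -> float:
    """Sensible load from the dry-bulb temperature drop (kW)."""
    return m_dot_air * CP_AIR_KJ_PER_KG_K * (t_in_c - t_out_c)
def latent_load_kw(m_dot_air: float, w_in: float, w_out: float) -> float:
    """Latent load from the humidity-ratio change (kg water per kg dry air)."""
    return m_dot_air * H_FG_KJ_PER_KG * (w_in - w_out)
# Example: 2 kg/s of air cooled from 27 C / 0.0112 kg/kg to 13 C / 0.0085 kg/kg.
print(round(sensible_load_kw(2.0, 27.0, 13.0), 1))    # ~28.2 kW sensible
print(round(latent_load_kw(2.0, 0.0112, 0.0085), 1))  # ~13.5 kW latent (condensate removed)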
For many climates, water or steam HVAC coils can be exposed to freezing conditions. Because water expands upon freezing, these somewhat expensive and difficult to replace thin-walled heat exchangers can easily be damaged or destroyed by just one freeze. As such, freeze protection of coils is a major concern of HVAC designers, installers, and operators.
The introduction of indentations placed within the heat exchange fins controlled condensation, allowing water molecules to remain in the cooled air.
The heat exchangers in direct-combustion furnaces, typical in many residences, are not 'coils'. They are, instead, gas-to-air heat exchangers that are typically made of stamped steel sheet metal. The combustion products pass on one side of these heat exchangers, and air to heat on the other. A cracked heat exchanger is therefore a dangerous situation that requires immediate attention because combustion products may enter living space.
Helical-coil
Although double-pipe heat exchangers are the simplest to design, the better choice in the following cases would be the helical-coil heat exchanger (HCHE):
The main advantage of the HCHE, like that for the Spiral heat exchanger (SHE), is its highly efficient use of space, especially when it's limited and not enough straight pipe can be laid.
Under conditions of low flowrates (or laminar flow), such that typical shell-and-tube exchangers have low heat-transfer coefficients and become uneconomical.
When there is low pressure in one of the fluids, usually from accumulated pressure drops in other process equipment.
When one of the fluids has components in multiple phases (solids, liquids, and gases), which tends to create mechanical problems during operations, such as plugging of small-diameter tubes. Cleaning of helical coils for these multiple-phase fluids can prove to be more difficult than its shell and tube counterpart; however the helical coil unit would require cleaning less often.
These have been used in the nuclear industry as a method for exchanging heat in a sodium system for large liquid metal fast breeder reactors since the early 1970s, using an HCHE device invented by Charles E. Boardman and John H. Germer. There are several simple methods for designing HCHE for all types of manufacturing industries, such as using the Ramachandra K. Patil (et al.) method from India and the Scott S. Haraburda method from the United States.
However, these are based upon assumptions of estimating inside heat transfer coefficient, predicting flow around the outside of the coil, and upon constant heat flux.
Spiral
A modification to the perpendicular flow of the typical HCHE involves the replacement of the shell with another coiled tube, allowing the two fluids to flow parallel to one another, and which requires the use of different design calculations. These are the spiral heat exchangers (SHE), which may refer to a helical (coiled) tube configuration; more generally, the term refers to a pair of flat surfaces that are coiled to form the two channels in a counter-flow arrangement. Each of the two channels has one long curved path. A pair of fluid ports are connected tangentially to the outer arms of the spiral; axial ports are common, but optional.
The main advantage of the SHE is its highly efficient use of space. This attribute is often leveraged and partially reallocated to gain other improvements in performance, according to well known tradeoffs in heat exchanger design. (A notable tradeoff is capital cost vs operating cost.) A compact SHE may be used to have a smaller footprint and thus lower all-around capital costs, or an oversized SHE may be used to have less pressure drop, less pumping energy, higher thermal efficiency, and lower energy costs.
Construction
The distance between the sheets in the spiral channels is maintained by using spacer studs that were welded prior to rolling. Once the main spiral pack has been rolled, alternate top and bottom edges are welded and each end closed by a gasketed flat or conical cover bolted to the body. This ensures no mixing of the two fluids occurs. Any leakage is from the periphery cover to the atmosphere, or to a passage that contains the same fluid.
Self cleaning
Spiral heat exchangers are often used in the heating of fluids that contain solids and thus tend to foul the inside of the heat exchanger. The low pressure drop lets the SHE handle fouling more easily. The SHE uses a “self cleaning” mechanism, whereby fouled surfaces cause a localized increase in fluid velocity, thus increasing the drag (or fluid friction) on the fouled surface, thus helping to dislodge the blockage and keep the heat exchanger clean. "The internal walls that make up the heat transfer surface are often rather thick, which makes the SHE very robust, and able to last a long time in demanding environments."
They are also easily cleaned, opening out like an oven where any buildup of foulant can be removed by pressure washing.
Self-cleaning water filters are used to keep the system clean and running without the need to shut down or replace cartridges and bags.
Flow arrangements
There are three main types of flows in a spiral heat exchanger:
Counter-current Flow: Fluids flow in opposite directions. These are used for liquid-liquid, condensing and gas cooling applications. Units are usually mounted vertically when condensing vapour and mounted horizontally when handling high concentrations of solids.
Spiral Flow/Cross Flow: One fluid is in spiral flow and the other in a cross flow. Spiral flow passages are welded at each side for this type of spiral heat exchanger. This type of flow is suitable for handling low density gas, which passes through the cross flow, avoiding pressure loss. It can be used for liquid-liquid applications if one liquid has a considerably greater flow rate than the other.
Distributed Vapour/Spiral flow: This design is that of a condenser, and is usually mounted vertically. It is designed to cater for the sub-cooling of both condensate and non-condensables. The coolant moves in a spiral and leaves via the top. Hot gases that enter leave as condensate via the bottom outlet.
Applications
The spiral heat exchanger is good for applications such as pasteurization, digester heating, heat recovery, pre-heating (see: recuperator), and effluent cooling. For sludge treatment, SHEs are generally smaller than the other types of heat exchangers used to transfer the heat.
Selection
Due to the many variables involved, selecting optimal heat exchangers is challenging. Hand calculations are possible, but many iterations are typically needed. As such, heat exchangers are most often selected via computer programs, either by system designers, who are typically engineers, or by equipment vendors.
To select an appropriate heat exchanger, the system designers (or equipment vendors) would firstly consider the design limitations for each heat exchanger type.
Though cost is often the primary criterion, several other selection criteria are important:
High/low pressure limits
Thermal performance
Temperature ranges
Product mix (liquid/liquid, particulates or high-solids liquid)
Pressure drops across the exchanger
Fluid flow capacity
Cleanability, maintenance and repair
Materials required for construction
Ability and ease of future expansion
Material selection, such as copper, aluminium, carbon steel, stainless steel, nickel alloys, ceramic, polymer, and titanium.
Small-diameter coil technologies are becoming more popular in modern air conditioning and refrigeration systems because they have better rates of heat transfer than conventionally sized condenser and evaporator coils with round copper tubes and aluminum or copper fins, which have been the standard in the HVAC industry. Small-diameter coils can withstand the higher pressures required by the new generation of environmentally friendlier refrigerants. Two small-diameter coil technologies are currently available for air conditioning and refrigeration products: copper microgroove and brazed aluminum microchannel.
Choosing the right heat exchanger (HX) requires some knowledge of the different heat exchanger types, as well as the environment where the unit must operate. Typically in the manufacturing industry, several differing types of heat exchangers are used for just one process or system to derive the final product. For example, a kettle HX for pre-heating, a double pipe HX for the 'carrier' fluid and a plate and frame HX for final cooling. With sufficient knowledge of heat exchanger types and operating requirements, an appropriate selection can be made to optimise the process.
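Whichever type is chosen, a first-pass thermal sizing usually starts from the same relation between duty, overall heat transfer coefficient, surface area and log mean temperature difference. As a hedged worked example with assumed round numbers (not values taken from this article):
$Q = U A \, \Delta T_{lm} \quad \Rightarrow \quad A = \frac{Q}{U \, \Delta T_{lm}} = \frac{250\,000\ \mathrm{W}}{500\ \mathrm{W\,m^{-2}\,K^{-1}} \times 40\ \mathrm{K}} = 12.5\ \mathrm{m^2}.$
A real selection would then iterate this estimate against pressure drop, fouling allowances and the other criteria listed above.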
Monitoring and maintenance
Online monitoring of commercial heat exchangers is done by tracking the overall heat transfer coefficient. The overall heat transfer coefficient tends to decline over time due to fouling.
By periodically calculating the overall heat transfer coefficient from exchanger flow rates and temperatures, the owner of the heat exchanger can estimate when cleaning the heat exchanger is economically attractive.
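As a minimal sketch of that periodic calculation (the flow rates, temperatures and area below are placeholders, and a counter-flow log mean temperature difference is assumed):
# Minimal sketch: estimate the overall heat transfer coefficient U from
# measured flow rates and terminal temperatures (counter-flow LMTD assumed).
# All numeric inputs in the example are placeholders, not plant data.
import math
def lmtd_counterflow(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    if abs(dt1 - dt2) < 1e-9:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)
def overall_u(m_dot_hot, cp_hot, t_hot_in, t_hot_out,
              t_cold_in, t_cold_out, area_m2):
    q_w = m_dot_hot * cp_hot * (t_hot_in - t_hot_out)   # duty from the hot side
    return q_w / (area_m2 * lmtd_counterflow(t_hot_in, t_hot_out,
                                             t_cold_in, t_cold_out))
# Example: 3 kg/s of hot water (cp ~ 4180 J/kg.K) cooled 80 -> 50 C,
# cold side heated 20 -> 45 C, across 25 m2 of surface.
print(round(overall_u(3.0, 4180.0, 80.0, 50.0, 20.0, 45.0, 25.0), 1))  # ~464 W/m2.K
A falling trend in this calculated U over successive readings indicates fouling and signals when cleaning becomes worthwhile.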
Integrity inspection of plate and tubular heat exchanger can be tested in situ by the conductivity or helium gas methods. These methods confirm the integrity of the plates or tubes to prevent any cross contamination and the condition of the gaskets.
Mechanical integrity monitoring of heat exchanger tubes may be conducted through Nondestructive methods such as eddy current testing.
Fouling
Fouling occurs when impurities deposit on the heat exchange surface.
Deposition of these impurities can decrease heat transfer effectiveness significantly over time and is caused by:
Low wall shear stress
Low fluid velocities
High fluid velocities
Reaction product solid precipitation
Precipitation of dissolved impurities due to elevated wall temperatures
The rate of heat exchanger fouling is determined by the rate of particle deposition less re-entrainment/suppression. This model was originally proposed in 1959 by Kern and Seaton.
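In the Kern and Seaton picture, the net fouling rate is the deposition rate minus the removal (re-entrainment) rate, and because removal grows with the deposit already present, the fouling resistance tends to a limiting value. A hedged sketch of the usual form (generic symbols, not taken from this article):
$\frac{dR_f}{dt} = \phi_d - \phi_r, \qquad \phi_r \propto R_f \quad \Rightarrow \quad R_f(t) = R_f^{*}\left(1 - e^{-t/\tau}\right),$
so the fouling resistance approaches an asymptotic value $R_f^{*}$ with a characteristic time constant $\tau$ rather than growing without bound.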
Crude Oil Exchanger Fouling. In commercial crude oil refining, crude oil is heated before entering the distillation column. A series of shell and tube heat exchangers typically exchange heat between crude oil and other oil streams to preheat the crude before final heating in a furnace. Fouling occurs on the crude side of these exchangers due to asphaltene insolubility. The nature of asphaltene solubility in crude oil was successfully modeled by Wiehe and Kennedy. The precipitation of insoluble asphaltenes in crude preheat trains has been successfully modeled as a first order reaction by Ebert and Panchal, who expanded on the work of Kern and Seaton.
Cooling Water Fouling.
Cooling water systems are susceptible to fouling. Cooling water typically has a high total dissolved solids content and suspended colloidal solids. Localized precipitation of dissolved solids occurs at the heat exchange surface due to wall temperatures higher than bulk fluid temperature. Low fluid velocities (less than 3 ft/s) allow suspended solids to settle on the heat exchange surface. Cooling water is typically on the tube side of a shell and tube exchanger because it is easy to clean. To prevent fouling, designers typically ensure that the cooling water velocity is kept above a minimum value and that the bulk fluid temperature is maintained below a maximum limit. Other approaches to fouling control combine the "blind" application of biocides and anti-scale chemicals with periodic lab testing.
Maintenance
Plate and frame heat exchangers can be disassembled and cleaned periodically. Tubular heat exchangers can be cleaned by such methods as acid cleaning, sandblasting, high-pressure water jet, bullet cleaning, or drill rods.
In large-scale cooling water systems for heat exchangers, water treatment such as purification, addition of chemicals, and testing, is used to minimize fouling of the heat exchange equipment. Other water treatment is also used in steam systems for power plants, etc. to minimize fouling and corrosion of the heat exchange and other equipment.
A variety of companies have started using water-borne oscillation technology to prevent biofouling. Without the use of chemicals, this type of technology has helped in maintaining a low pressure drop in heat exchangers.
Design and manufacturing regulations
The design and manufacturing of heat exchangers has numerous regulations, which vary according to the region in which they will be used.
Design and manufacturing codes include: ASME Boiler and Pressure Vessel Code (US); PD 5500 (UK); BS 1566 (UK); EN 13445 (EU); CODAP (French); Pressure Equipment Safety Regulations 2016 (PER) (UK); Pressure Equipment Directive (EU); NORSOK (Norwegian); TEMA; API 12; and API 560.
In nature
Humans
The human nasal passages serve as a heat exchanger, with cool air being inhaled and warm air being exhaled. Its effectiveness can be demonstrated by putting the hand in front of the face and exhaling, first through the nose and then through the mouth. Air exhaled through the nose is substantially cooler. This effect can be enhanced with clothing, by, for example, wearing a scarf over the face while breathing in cold weather.
In species that have external testes (such as humans), the artery to the testis is surrounded by a mesh of veins called the pampiniform plexus. This cools the blood heading to the testes, while reheating the returning blood.
Birds, fish, marine mammals
"Countercurrent" heat exchangers occur naturally in the circulatory systems of fish, whales and other marine mammals. Arteries to the skin carrying warm blood are intertwined with veins from the skin carrying cold blood, causing the warm arterial blood to exchange heat with the cold venous blood. This reduces the overall heat loss in cold water. Heat exchangers are also present in the tongues of baleen whales as large volumes of water flow through their mouths. Wading birds use a similar system to limit heat losses from their body through their legs into the water.
Carotid rete
Carotid rete is a counter-current heat exchanging organ in some ungulates. The blood ascending the carotid arteries on its way to the brain flows via a network of vessels where heat is discharged to the veins carrying cooler blood descending from the nasal passages. The carotid rete allows Thomson's gazelle to maintain its brain almost 3 °C (5.4 °F) cooler than the rest of the body, and therefore aids in tolerating bursts in metabolic heat production such as those associated with outrunning cheetahs (during which the body temperature exceeds the maximum temperature at which the brain could function). Humans, along with other primates, lack a carotid rete.
In industry
Heat exchangers are widely used in industry both for cooling and heating large scale industrial processes. The type and size of heat exchanger used can be tailored to suit a process depending on the type of fluid, its phase, temperature, density, viscosity, pressures, chemical composition and various other thermodynamic properties.
In many industrial processes there is wasted energy or a heat stream that is being exhausted; heat exchangers can be used to recover this heat and put it to use by heating a different stream in the process. This practice saves a lot of money in industry, as the heat supplied to other streams from the heat exchangers would otherwise come from an external source that is more expensive and more harmful to the environment.
Heat exchangers are used in many industries, including:
Waste water treatment
Refrigeration
Wine and beer making
Petroleum refining
Nuclear power
In waste water treatment, heat exchangers play a vital role in maintaining optimal temperatures within anaerobic digesters to promote the growth of microbes that remove pollutants. Common types of heat exchangers used in this application are the double pipe heat exchanger as well as the plate and frame heat exchanger.
In aircraft
In commercial aircraft, heat exchangers are used to take heat from the engine's oil system to heat cold fuel. This improves fuel efficiency and reduces the possibility of water entrapped in the fuel freezing in components.
Current market and forecast
Estimated at US$17.5 billion in 2021, the global demand of heat exchangers is expected to experience robust growth of about 5% annually over the next years. The market value is expected to reach US$27 billion by 2030. With an expanding desire for environmentally friendly options and increased development of offices, retail sectors, and public buildings, market expansion is due to grow.
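As a rough consistency check (an illustrative calculation, not part of the cited forecast), compounding the 2021 figure at 5% per year for the nine years to 2030 gives
$\$17.5\ \text{billion} \times 1.05^{9} \approx \$27.1\ \text{billion},$
which is in line with the quoted US$27 billion figure.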
A model of a simple heat exchanger
A simple heat exchanger might be thought of as two straight pipes with fluid flow, which are thermally connected. Let the pipes be of equal length L, carrying fluids with heat capacity $c_i$ (energy per unit mass per unit change in temperature) and let the mass flow rate of the fluids through the pipes, both in the same direction, be $j_i$ (mass per unit time), where the subscript i applies to pipe 1 or pipe 2.
Temperature profiles for the pipes are $T_1(x)$ and $T_2(x)$, where x is the distance along the pipe. Assume a steady state, so that the temperature profiles are not functions of time. Assume also that the only transfer of heat from a small volume of fluid in one pipe is to the fluid element in the other pipe at the same position, i.e., there is no transfer of heat along a pipe due to temperature differences in that pipe. By Newton's law of cooling the rate of change in energy of a small volume of fluid is proportional to the difference in temperatures between it and the corresponding element in the other pipe:
$\frac{\partial u_1}{\partial t} = \gamma (T_2 - T_1)$
$\frac{\partial u_2}{\partial t} = \gamma (T_1 - T_2)$
(this is for parallel flow in the same direction and opposite temperature gradients; for counter-flow heat exchange (countercurrent exchange) the sign of the flow term in the second pipe's transport equation below is reversed), where $u_i$ is the thermal energy per unit length and γ is the thermal connection constant per unit length between the two pipes. This change in internal energy results in a change in the temperature of the fluid element. The time rate of change for the fluid element being carried along by the flow is:
$\frac{\partial u_1}{\partial t} = J_1 \frac{\partial T_1}{\partial x}$
$\frac{\partial u_2}{\partial t} = J_2 \frac{\partial T_2}{\partial x}$
where $J_i = c_i j_i$ is the "thermal mass flow rate". The differential equations governing the heat exchanger may now be written as:
$J_1 \frac{\partial T_1}{\partial x} = \gamma (T_2 - T_1)$
$J_2 \frac{\partial T_2}{\partial x} = \gamma (T_1 - T_2)$
Since the system is in a steady state, there are no partial derivatives of temperature with respect to time, and since there is no heat transfer along the pipe, there are no second derivatives in x as is found in the heat equation. These two coupled first-order differential equations may be solved to yield:
$T_1 = A - \frac{B k_1}{k} e^{-kx}$
$T_2 = A + \frac{B k_2}{k} e^{-kx}$
where $k_1 = \gamma / J_1$, $k_2 = \gamma / J_2$, $k = k_1 + k_2$,
(this is for parallel-flow; for counter-flow the sign in front of $k_2$ is negative, so that if $J_1 = J_2$, the same "thermal mass flow rate" in both opposite directions, the gradient of temperature is constant and the temperatures are linear in position x with a constant difference $(T_2 - T_1)$ along the exchanger, explaining why the counter-current design (countercurrent exchange) is the most efficient)
and A and B are two as yet undetermined constants of integration. Let $T_{10}$ and $T_{20}$ be the temperatures at x = 0 and let $T_{1L}$ and $T_{2L}$ be the temperatures at the end of the pipe at x = L. Define the average temperatures in each pipe as:
$\overline{T_1} = \frac{1}{L} \int_0^L T_1(x)\,dx$
$\overline{T_2} = \frac{1}{L} \int_0^L T_2(x)\,dx$
Using the solutions above, these temperatures are:
$T_{10} = A - \frac{B k_1}{k}$, $T_{20} = A + \frac{B k_2}{k}$
$T_{1L} = A - \frac{B k_1}{k} e^{-kL}$, $T_{2L} = A + \frac{B k_2}{k} e^{-kL}$
$\overline{T_1} = A - \frac{B k_1}{k^2 L} \left(1 - e^{-kL}\right)$, $\overline{T_2} = A + \frac{B k_2}{k^2 L} \left(1 - e^{-kL}\right)$
Choosing any two of the temperatures above eliminates the constants of integration, letting us find the other four temperatures. We find the total energy transferred by integrating the expressions for the time rate of change of internal energy per unit length:
$\frac{dU_1}{dt} = \int_0^L \frac{\partial u_1}{\partial t}\,dx = \gamma L \left(\overline{T_2} - \overline{T_1}\right)$
$\frac{dU_2}{dt} = \int_0^L \frac{\partial u_2}{\partial t}\,dx = \gamma L \left(\overline{T_1} - \overline{T_2}\right)$
By the conservation of energy, the sum of the two energies is zero. The quantity $\overline{T_2} - \overline{T_1}$, the mean temperature difference between the two pipes, is known as the log mean temperature difference, and is a measure of the effectiveness of the heat exchanger in transferring heat energy.
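As a brief numerical cross-check of the parallel-flow solution above (all parameter values are illustrative assumptions), the sketch below marches the two coupled equations along the pipe with a simple Euler step and compares the exchanged heat with the closed-form expression $\gamma B (1 - e^{-kL})/k$, which equals $\gamma L$ times the mean temperature difference:
# Minimal sketch: integrate the parallel-flow model with a simple Euler march
# and compare with the closed-form result derived above.
# All parameter values are illustrative assumptions.
import math
L = 2.0                    # pipe length, m
gamma = 50.0               # thermal connection per unit length, W/(m*K)
J1, J2 = 200.0, 300.0      # "thermal mass flow rates" c_i * j_i, W/K
T1_in, T2_in = 20.0, 90.0  # inlet temperatures at x = 0, deg C
n = 100_000
dx = L / n
T1, T2 = T1_in, T2_in
q_numeric = 0.0
for _ in range(n):
    dq = gamma * (T2 - T1) * dx   # heat passed to fluid 1 over a slice dx
    T1 += dq / J1                 # J1 dT1/dx = gamma (T2 - T1)
    T2 -= dq / J2                 # J2 dT2/dx = gamma (T1 - T2)
    q_numeric += dq
k = gamma / J1 + gamma / J2
B = T2_in - T1_in
q_analytic = gamma * B * (1.0 - math.exp(-k * L)) / k
print(round(q_numeric, 1), round(q_analytic, 1))  # both about 4750 W with these inputs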
See also
Architectural engineering
Chemical engineering
Cooling tower
Copper in heat exchangers
Heat pipe
Heat pump
Heat recovery ventilation
Jacketed vessel
Log mean temperature difference (LMTD)
Marine heat exchangers
Mechanical engineering
Micro heat exchanger
Moving bed heat exchanger
Packed bed and in particular Packed columns
Pumpable ice technology
Reboiler
Recuperator, or cross plate heat exchanger
Regenerator
Run around coil
Steam generator (nuclear power)
Surface condenser
Toroidal expansion joint
Thermosiphon
Thermal wheel, or rotary heat exchanger (including enthalpy wheel and desiccant wheel)
Tube tool
Waste heat
References
Coulson, J. and Richardson, J. (1999). Chemical Engineering, Volume 1: Fluid Flow, Heat Transfer and Mass Transfer. Reed Educational & Professional Publishing Ltd.
Dogan Eryener (2005). 'Thermoeconomic optimization of baffle spacing for shell and tube heat exchangers', Energy Conversion and Management, Volume 47, Issue 11–12, pp. 1478–1489.
G. F. Hewitt, G. L. Shires, T. R. Bott (1994). Process Heat Transfer. CRC Press, Inc., United States of America.
External links
Shell and Tube Heat Exchanger Design Software for Educational Applications (PDF)
EU Pressure Equipment Guideline
A Thermal Management Concept For More Electric Aircraft Power System Application (PDF)
Heat transfer
Gas technologies | Heat exchanger | Physics,Chemistry,Engineering | 9,768 |
18,065,584 | https://en.wikipedia.org/wiki/Satavaptan | Satavaptan (INN; developmental code name SR121463, former tentative brand name Aquilda) is a vasopressin-2 receptor antagonist which was investigated by Sanofi-Aventis and was under development for the treatment of hyponatremia. It was also being studied for the treatment of ascites. Development was discontinued in 2009.
References
Diuretics
Tert-butyl compounds
Vasopressin receptor antagonists
Benzamides
4-Morpholinyl compounds
Ethanolamines
Cyclohexanols
Spiro compounds
Sulfonamides
Ethoxy compounds
Lactams
Methoxy compounds | Satavaptan | Chemistry | 134 |
3,243,411 | https://en.wikipedia.org/wiki/StAX | Streaming API for XML (StAX) is an application programming interface (API) to read and write XML documents, originating from the Java programming language community.
Traditionally, XML APIs are either:
DOM based - the entire document is read into memory as a tree structure for random access by the calling application
event based - the application registers to receive events as entities are encountered within the source document.
Both have advantages: DOM, for example, allows for random access to the document, while an event-driven algorithm like SAX has a small memory footprint and is typically much faster.
These two access metaphors can be thought of as polar opposites. A tree based API allows unlimited, random access and manipulation, while an event based API is a 'one shot' pass through the source document.
StAX was designed as a median between these two opposites. In the StAX metaphor, the programmatic entry point is a cursor that represents a point within the document. The application moves the cursor forward - 'pulling' the information from the parser as it needs. This is different from an event based API - such as SAX - which 'pushes' data to the application - requiring the application to maintain state between events as necessary to keep track of location within the document.
Origins
StAX has its roots in a number of incompatible pull APIs for XML, most notably XMLPULL, the authors of which (Stefan Haustein and Aleksander Slominski) collaborated with, amongst others, BEA Systems, Oracle, Sun and James Clark.
Examples
From the JSR-173 Specification, Final, V1.0 (used under fair use).
Quote:
The following Java API shows the main methods for reading XML in the cursor approach.
public interface XMLStreamReader {
public int next() throws XMLStreamException;
public boolean hasNext() throws XMLStreamException;
public String getText();
public String getLocalName();
public String getNamespaceURI();
// ...other methods not shown
}
The writing side of the API has methods that correspond to the reading side for “StartElement” and “EndElement” event types.
public interface XMLStreamWriter {
public void writeStartElement(String localName) throws XMLStreamException;
public void writeEndElement() throws XMLStreamException;
public void writeCharacters(String text) throws XMLStreamException;
// ...other methods not shown
}
5.3.1 XMLStreamReader
This example illustrates how to instantiate an input factory, create a reader and iterate over the elements of an XML document.
XMLInputFactory xmlInputFactory = XMLInputFactory.newInstance();
XMLStreamReader xmlStreamReader = xmlInputFactory.createXMLStreamReader(...);
while (xmlStreamReader.hasNext()) {
xmlStreamReader.next();
}
See also
Competing and complementary ways to process XML in Java (the order is loosely based on initial date of introduction):
Document Object Model (DOM), the first standardized, language/platform-independent tree-based XML processing model; alternate Java tree models include JDOM, Dom4j, and XOM
Simple API for XML (SAX), the standard XML push API
Java XML Binding API (JAXB), works on top of another parser (usually streaming parser), binds contained data to/from Java objects.
Streaming XML
XQuery API for Java
External links
Introduction to StAX XML.com, Harold, Elliotte Rusty
Java Streaming API for XML (Stax) - Tutorial
XMLPull Patterns Article on XML Pull (and StAX) design patterns by Aleksander Slominski.
StAX Parser - Cursor & Iterator APIs Article on Cursor & Iterator APIs by HowToDoInJava.
Java platform
Application programming interfaces
XML parsers
Articles with example Java code | StAX | Technology | 831 |
14,644,098 | https://en.wikipedia.org/wiki/J.%20Peter%20May | Jon Peter May (born September 16, 1939, in New York) is an American mathematician working in the fields of algebraic topology, category theory, homotopy theory, and the foundational aspects of spectra. He is known, in particular, for the May spectral sequence and for coining the term operad.
Education and career
May received a Bachelor of Arts degree from Swarthmore College in 1960 and a Doctor of Philosophy degree from Princeton University in 1964. His thesis, written under the direction of John Moore, was titled The cohomology of restricted Lie algebras and of Hopf algebras: Application to the Steenrod algebra.
From 1964 to 1967, May taught at Yale University. He has been a faculty member at the University of Chicago since 1967, and a professor since 1970.
The word "operad" was created by May as a portmanteau of "operations" and "monad".
Awards
In 2012 he became a fellow of the American Mathematical Society. He has advised over 60 doctoral students, including Mark Behrens, Andrew Blumberg, Frederick Cohen, Ib Madsen, Emily Riehl, Michael Shulman, and Zhouli Xu.
References
Notes
May himself has stated that he was partially inspired by his mother's opera singing when coining the term.
External links
May's homepage at the University of Chicago
Jon Peter May at the Mathematics Genealogy Project
20th-century American mathematicians
21st-century American mathematicians
American topologists
University of Chicago faculty
Yale University faculty
Princeton University alumni
Swarthmore College alumni
Fellows of the American Mathematical Society
1939 births
Living people
Mathematicians from New York (state) | J. Peter May | Mathematics | 329 |
8,007 | https://en.wikipedia.org/wiki/Diameter | In geometry, a diameter of a circle is any straight line segment that passes through the centre of the circle and whose endpoints lie on the circle. It can also be defined as the longest chord of the circle. Both definitions are also valid for the diameter of a sphere.
In more modern usage, the length of a diameter is also called the diameter. In this sense one speaks of the diameter rather than a diameter (which refers to the line segment itself), because all diameters of a circle or sphere have the same length, this being twice the radius.
The word "diameter" is derived from (), "diameter of a circle", from (), "across, through" and (), "measure". It is often abbreviated or
Constructions
With straightedge and compass, a diameter of a given circle can be constructed as the perpendicular bisector of an arbitrary chord. Drawing two diameters in this way can be used to locate the center of a circle, as their crossing point. To construct a diameter parallel to a given line, choose the chord to be perpendicular to the line.
The circle having a given line segment as its diameter can be constructed by straightedge and compass, by finding the midpoint of the segment and then drawing the circle centered at the midpoint through one of the ends of the line segment.
Symbol
The symbol or variable for diameter, ⌀, is sometimes used in technical drawings or specifications as a prefix or suffix for a number (e.g. "⌀ 55 mm"), indicating that it represents diameter. Photographic filter thread sizes are often denoted in this way.
The symbol has a code point in Unicode at U+2300 (DIAMETER SIGN), in the Miscellaneous Technical set. It should not be confused with several other characters (such as the letter Ø or the empty set symbol ∅) that resemble it but have unrelated meanings. It can also be entered with a compose key sequence.
Generalizations
The definitions given above are only valid for circles and spheres. However, they are special cases of a more general definition that is valid for any kind of -dimensional object, or a set of scattered points. The diameter of a set is the least upper bound of the set of all distances between pairs of points in the subset.
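For a finite set of points this least upper bound is simply the largest pairwise distance, which can be computed by brute force. A minimal sketch follows (the sample points are arbitrary):
# Minimal sketch: diameter of a finite point set as the largest pairwise
# Euclidean distance (brute force, O(n^2)); the points below are arbitrary.
import math
from itertools import combinations
def diameter(points):
    return max(math.dist(p, q) for p, q in combinations(points, 2))
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 2.0), (-1.0, 1.0)]
print(round(diameter(pts), 4))  # 2.2361, attained by the pair (1, 0) and (-1, 1)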
A different and incompatible definition is sometimes used for the diameter of a conic section. In this context, a diameter is any chord which passes through the conic's centre. A diameter of an ellipse is any line passing through the centre of the ellipse. Half of any such diameter may be called a semidiameter, although this term is most often a synonym for the radius of a circle or sphere. The longest diameter is called the major axis. Conjugate diameters are a pair of diameters where one is parallel to a tangent to the ellipse at the endpoint of the other diameter.
Several kinds of object can be measured by equivalent diameter, the diameter of a circular or spherical approximation to the object. This includes hydraulic diameter, the equivalent diameter of a channel or pipe through which liquid flows, and the Sauter mean diameter of a collection of particles.
The diameter of a circle is exactly twice its radius. However, this is true only for a circle, and only in the Euclidean metric. Jung's theorem provides more general inequalities relating the diameter to the radius.
See also
Caliper, micrometer, tools for measuring diameters
Eratosthenes, who calculated the diameter of the Earth around 240 BC.
References
Elementary geometry
Length
Circles | Diameter | Physics,Mathematics | 695 |
22,564,667 | https://en.wikipedia.org/wiki/Signal%20averaging | Signal averaging is a signal processing technique applied in the time domain, intended to increase the strength of a signal relative to noise that is obscuring it. By averaging a set of replicate measurements, the signal-to-noise ratio (SNR) will be increased, ideally in proportion to the square root of the number of measurements.
Deriving the SNR for averaged signals
Assume that:
Signal and noise are uncorrelated, and the noise in different measurements is uncorrelated: $E[s\,n_i] = 0$ and $E[n_i n_j] = 0$ for $i \neq j$.
Signal power is constant in the replicate measurements.
Noise is random, with a mean of zero and constant variance in the replicate measurements: $E[n_i] = 0$ and $\operatorname{Var}(n_i) = \sigma^2$.
We (canonically) define the signal-to-noise ratio as $\mathrm{SNR} = P_{\text{signal}} / P_{\text{noise}}$.
Noise power for sampled signals
Assuming we sample the noise, we get a per-sample variance of
$\operatorname{Var}(n_i) = \sigma^2$.
Averaging $N$ realizations of such a random variable leads to the following variance:
$\operatorname{Var}\!\left(\frac{1}{N} \sum_{i=1}^{N} n_i\right) = \frac{1}{N^2} \sum_{i=1}^{N} \operatorname{Var}(n_i)$.
Since the noise variance is constant ($\operatorname{Var}(n_i) = \sigma^2$):
$\operatorname{Var}\!\left(\frac{1}{N} \sum_{i=1}^{N} n_i\right) = \frac{\sigma^2}{N}$,
demonstrating that averaging $N$ realizations of the same, uncorrelated noise reduces noise power by a factor of $N$, and reduces the noise level by a factor of $\sqrt{N}$.
Signal power for sampled signals
Considering $N$ replicate vectors $V_k$ of signal samples, each of length $M$:
$V_k = [V_k[1], V_k[2], \dots, V_k[M]]$,
the power of such a vector simply is
$P_{V_k} = \frac{1}{M} \sum_{m=1}^{M} V_k[m]^2$.
Again, averaging the vectors $V_k$ yields the following averaged vector
$\overline{V} = \frac{1}{N} \sum_{k=1}^{N} V_k$.
In the case where every replicate contains the same signal component, we see that the signal power of $\overline{V}$ reaches a maximum of
$P_{\overline{V}} = P_{\text{signal}}$,
the full power of the underlying signal, undiminished by the averaging.
In this case, the ratio of signal to noise also reaches a maximum,
$\mathrm{SNR}_{\overline{V}} = \frac{P_{\text{signal}}}{\sigma^2 / N} = N \cdot \mathrm{SNR}$ (equivalently, an amplitude improvement of $\sqrt{N}$).
This is the oversampling case, where the observed signal is correlated from replicate to replicate (because oversampling implies that the signal observations are strongly correlated).
Time-locked signals
Averaging is applied to enhance a time-locked signal component in noisy measurements; time-locking implies that the signal is observation-periodic, so we end up in the maximum case above.
Averaging odd and even trials
A specific way of obtaining replicates is to average all the odd and even trials in separate buffers. This has the advantage of allowing for comparison of even and odd results from interleaved trials. An average of odd and even averages generates the completed averaged result, while the difference between the odd and even averages, divided by two, constitutes an estimate of the noise.
Algorithmic implementation
The following is a MATLAB simulation of the averaging process:
N=1000; % signal length
even=zeros(N,1); % even buffer
odd=even; % odd buffer
actual_noise=even;% keep track of noise level
x=sin(linspace(0,4*pi,N))'; % tracked signal
for ii=1:256 % number of replicates
n = randn(N,1); % random noise
actual_noise = actual_noise+n;
if (mod(ii,2))
even = even+n+x;
else
odd=odd+n+x;
end
end
even_avg = even/(ii/2); % even buffer average
odd_avg = odd/(ii/2); % odd buffer average
act_avg = actual_noise/ii; % actual noise level
db(rms(act_avg))
db(rms((even_avg-odd_avg)/2))
plot((odd_avg+even_avg));
hold on;
plot((even_avg-odd_avg)/2)
The averaging process above, and in general, results in an estimate of the signal. When compared with the raw trace, the averaged noise component is reduced with every averaged trial. When averaging real signals, the underlying component may not always be as clear, resulting in repeated averages in a search for consistent components in two or three replicates. It is unlikely that two or more consistent results will be produced by chance alone.
Correlated noise
Signal averaging typically relies heavily on the assumption that the noise component of a signal is random, having zero mean, and being unrelated to the signal. However, there are instances in which the noise is not uncorrelated. A common example of correlated noise is quantization noise (e.g. the noise created when converting from an analog to a digital signal).
References
Digital signal processing
Averaging | Signal averaging | Technology,Engineering | 835 |
60,064,346 | https://en.wikipedia.org/wiki/HoloLens%202 | Microsoft HoloLens 2 is a mixed reality head-mounted display developed and manufactured by Microsoft. It is the successor to the original Microsoft HoloLens. The first variant of the device, The HoloLens 2 enterprise edition, debuted its release on February 24, 2019. This was followed by a developer edition that was announced on May 2, 2019. The HoloLens 2 was subsequently released to limited numbers on November 7, 2019.
The HoloLens 2 is now discontinued, but will continue to receive software updates until December 31, 2027.
Description
The HoloLens 2 was announced by lead HoloLens developer Alex Kipman on February 24, 2019 at Mobile World Congress (MWC) in Barcelona, Spain. On May 7, 2019 the HoloLens 2 was shown again at the Microsoft Build developer conference. There, it showcased an application created with the Unreal Game Engine.
The HoloLens 2 is a pair of combination waveguide- and laser-based, stereoscopic, full-color mixed reality smartglasses developed and manufactured by Microsoft. The US military's Integrated Visual Augmentation System is a further development of the HoloLens 2.
The HoloLens 2 is an early AR device. The displays on the HoloLens 2 are simple waveguide displays with a fixed focus of approximately two meters. Because of the fixed focus, the displays exhibit the Vergence-Accommodation Conflict, which is an unpleasant visual sensation for the viewer.
On August 20, 2019, at the Hot Chips 31 symposium Microsoft presented their Holographic Processing Unit (HPU) 2.0 custom design for the HoloLens 2 with the following features:
7x SIMD Fixed Point (SFP) for 2D processing
6x Floating Vector Processor (FVP) for 3D processing
>1 TOP of programmable compute
125Mb SRAM
79mm2 die size and 2 billion transistors
TSMC 16FF+ process
PCIe 2.0 x1 at 100 MB/s bandwidth to Snapdragon 850
On August 29, 2019, at the World Artificial Intelligence Conference in Shanghai, Microsoft's Executive Vice President, Harry Shum, revealed that HoloLens 2 would go on sale in September 2019. The product started shipping on November 7, 2019.
Improvements over the previous model
Microsoft highlighted three main improvements made to the device: immersiveness, ergonomics and business friendliness.
HoloLens 2 has a diagonal field of view of 52 degrees, improving over the 34 degree field of view (FOV) of the first edition of HoloLens, although Karl Guttag states that it offers less than 20 pixels per degree of resolution (despite Microsoft's claim that it would keep a resolution of 47 pixels per degree).
Holographic Processing Unit (HPU) 2.0 improvements compared to the HPU 1.0:
1.7x compute
2x effective DRAM bandwidth
Improved hologram stability
New hardware accelerated workloads such as eye tracking, fully articulated hand tracking, semantic labeling, spatial audio and JBL filter
HoloLens 2 Emulator
The HoloLens 2 Emulator was made available to developers on April 17, 2019. This emulator allows developers to create applications for the HoloLens 2 before the device ships.
References
External links
Microsoft Documentation for HoloLens Emulator
Augmented reality
Computing input devices
Gesture recognition
Head-mounted displays
History of human–computer interaction
Microphones
Microsoft peripherals
Mixed reality
Wearable devices
Windows 10 | HoloLens 2 | Technology | 718 |
20,409,825 | https://en.wikipedia.org/wiki/Amiga%20Walker | The Amiga Walker, sometimes incorrectly known as the Mind Walker, is a prototype of an Amiga computer developed and shown by Amiga Technologies, a subsidiary of Escom, in late 1995/early 1996. Walker was planned as a replacement for the A1200 with a faster CPU, better expansion capabilities, and a built-in CD-ROM. The Walker was never released; Escom and Amiga Technologies went bankrupt, and only two (three) prototypes were made.
The case is unique and radically different from computers before it. The intention was also to make the motherboard available without the case so users could put it into a standard PC case. There were a number of other potential case designs of different sizes, the Walker motherboard could fit all of them; this allowed for expandability tailored to the user's requirements.
When the Walker was announced, it was the subject of much discussion (and ridicule) within the Amiga user community, centering on the unconventional case design.
Technical information
Specifications
CPU:
Motorola 68030/33 MHz (in the prototype version)
Motorola 68030/40 MHz (compared to 68020/14 MHz in A1200)
Chipset: AGA
Memory:
1 MB Kickstart ROM (compared to 512 kB in the original Amiga 1200)
2 MB Chip RAM
4 MB Fast RAM (only in the production version)
Drives:
internal CD-ROM
1.44 MB internal floppy drive
Realtime clock onboard
Additional:
Amiga keyboard
See also
Power A5000
Amiga models and variants
References
External links
Short video of an Amiga Walker prototype on YouTube
Amiga
Vaporware | Amiga Walker | Technology | 318 |
100,558 | https://en.wikipedia.org/wiki/A%2A%20search%20algorithm | A* (pronounced "A-star") is a graph traversal and pathfinding algorithm that is used in many fields of computer science due to its completeness, optimality, and optimal efficiency. Given a weighted graph, a source node and a goal node, the algorithm finds the shortest path (with respect to the given weights) from source to goal.
One major practical drawback is its $O(b^d)$ space complexity, where $d$ is the depth of the solution (the length of the shortest path) and $b$ is the branching factor (the average number of successors per state), as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms that can pre-process the graph to attain better performance, as well as by memory-bounded approaches; however, A* is still the best solution in many cases.
Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first published the algorithm in 1968. It can be seen as an extension of Dijkstra's algorithm. A* achieves better performance by using heuristics to guide its search.
Compared to Dijkstra's algorithm, the A* algorithm only finds the shortest path from a specified source to a specified goal, and not the shortest-path tree from a specified source to all possible goals. This is a necessary trade-off for using a specific-goal-directed heuristic. For Dijkstra's algorithm, since the entire shortest-path tree is generated, every node is a goal, and there can be no specific-goal-directed heuristic.
History
A* was created as part of the Shakey project, which had the aim of building a mobile robot that could plan its own actions. Nils Nilsson originally proposed using the Graph Traverser algorithm for Shakey's path planning. Graph Traverser is guided by a heuristic function $h(n)$, the estimated distance from node $n$ to the goal node: it entirely ignores $g(n)$, the distance from the start node to $n$. Bertram Raphael suggested using the sum, $g(n) + h(n)$. Peter Hart invented the concepts we now call admissibility and consistency of heuristic functions. A* was originally designed for finding least-cost paths when the cost of a path is the sum of its costs, but it has been shown that A* can be used to find optimal paths for any problem satisfying the conditions of a cost algebra.
The original 1968 A* paper contained a theorem stating that no A*-like algorithm could expand fewer nodes than A* if the heuristic function is consistent and A*'s tie-breaking rule is suitably chosen. A "correction" was published a few years later claiming that consistency was not required, but this was shown to be false in 1985 in Dechter and Pearl's definitive study of A*'s optimality (now called optimal efficiency), which gave an example of A* with a heuristic that was admissible but not consistent expanding arbitrarily more nodes than an alternative A*-like algorithm.
Description
A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). It does this by maintaining a tree of paths originating at the start node and extending those paths one edge at a time until the goal node is reached.
At each iteration of its main loop, A* needs to determine which of its paths to extend. It does so based on the cost of the path and an estimate of the cost required to extend the path all the way to the goal. Specifically, A* selects the path that minimizes
$f(n) = g(n) + h(n)$
where $n$ is the next node on the path, $g(n)$ is the cost of the path from the start node to $n$, and $h(n)$ is a heuristic function that estimates the cost of the cheapest path from $n$ to the goal. The heuristic function is problem-specific. If the heuristic function is admissible (meaning that it never overestimates the actual cost to get to the goal) A* is guaranteed to return a least-cost path from start to goal.
Typical implementations of A* use a priority queue to perform the repeated selection of minimum (estimated) cost nodes to expand. This priority queue is known as the open set, fringe or frontier. At each step of the algorithm, the node with the lowest $f(x)$ value is removed from the queue, the $f$ and $g$ values of its neighbors are updated accordingly, and these neighbors are added to the queue. The algorithm continues until a removed node (thus the node with the lowest $f$ value out of all fringe nodes) is a goal node. The $f$ value of that goal is then also the cost of the shortest path, since $h$ at the goal is zero in an admissible heuristic.
The algorithm described so far only gives the length of the shortest path. To find the actual sequence of steps, the algorithm can be easily revised so that each node on the path keeps track of its predecessor. After this algorithm is run, the ending node will point to its predecessor, and so on, until some node's predecessor is the start node.
As an example, when searching for the shortest route on a map, $h(x)$ might represent the straight-line distance to the goal, since that is physically the smallest possible distance between any two points. For a grid map from a video game, using the taxicab distance or the Chebyshev distance becomes better depending on the set of movements available (4-way or 8-way).
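As a small illustration of these alternatives (a sketch with assumed coordinate tuples, not tied to any particular map), the three heuristics differ only in how they combine the coordinate offsets:
# Minimal sketch: common grid heuristics for A*. Each takes coordinate tuples.
import math
def euclidean(a, b):   # straight-line distance (admissible for any movement)
    return math.hypot(a[0] - b[0], a[1] - b[1])
def manhattan(a, b):   # taxicab distance (suited to 4-way movement)
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
def chebyshev(a, b):   # suited to 8-way movement with unit diagonal cost
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))
print(euclidean((0, 0), (3, 4)), manhattan((0, 0), (3, 4)), chebyshev((0, 0), (3, 4)))
# 5.0 7 4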
If the heuristic satisfies the additional condition $h(x) \le d(x, y) + h(y)$ for every edge $(x, y)$ of the graph (where $d$ denotes the length of that edge), then $h$ is called monotone, or consistent. With a consistent heuristic, A* is guaranteed to find an optimal path without processing any node more than once and A* is equivalent to running Dijkstra's algorithm with the reduced cost $d'(x, y) = d(x, y) + h(y) - h(x)$.
Pseudocode
The following pseudocode describes the algorithm:
function reconstruct_path(cameFrom, current)
total_path := {current}
while current in cameFrom.Keys:
current := cameFrom[current]
total_path.prepend(current)
return total_path
// A* finds a path from start to goal.
// h is the heuristic function. h(n) estimates the cost to reach goal from node n.
function A_Star(start, goal, h)
// The set of discovered nodes that may need to be (re-)expanded.
// Initially, only the start node is known.
// This is usually implemented as a min-heap or priority queue rather than a hash-set.
openSet := {start}
// For node n, cameFrom[n] is the node immediately preceding it on the cheapest path from the start
// to n currently known.
cameFrom := an empty map
// For node n, gScore[n] is the currently known cost of the cheapest path from start to n.
gScore := map with default value of Infinity
gScore[start] := 0
// For node n, fScore[n] := gScore[n] + h(n). fScore[n] represents our current best guess as to
// how cheap a path could be from start to finish if it goes through n.
fScore := map with default value of Infinity
fScore[start] := h(start)
while openSet is not empty
// This operation can occur in O(Log(N)) time if openSet is a min-heap or a priority queue
current := the node in openSet having the lowest fScore[] value
if current = goal
return reconstruct_path(cameFrom, current)
openSet.Remove(current)
for each neighbor of current
// d(current,neighbor) is the weight of the edge from current to neighbor
// tentative_gScore is the distance from start to the neighbor through current
tentative_gScore := gScore[current] + d(current, neighbor)
if tentative_gScore < gScore[neighbor]
// This path to neighbor is better than any previous one. Record it!
cameFrom[neighbor] := current
gScore[neighbor] := tentative_gScore
fScore[neighbor] := tentative_gScore + h(neighbor)
if neighbor not in openSet
openSet.add(neighbor)
// Open set is empty but goal was never reached
    return failure
Remark: In this pseudocode, if a node is reached by one path, removed from openSet, and subsequently reached by a cheaper path, it will be added to openSet again. This is essential to guarantee that the path returned is optimal if the heuristic function is admissible but not consistent. If the heuristic is consistent, when a node is removed from openSet the path to it is guaranteed to be optimal so the test ‘tentative_gScore < gScore[neighbor]’ will always fail if the node is reached again.
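The following is a compact executable rendering of the pseudocode above, offered as a minimal sketch rather than a reference implementation; it keys a binary heap on the f value and lazily skips stale entries instead of decreasing priorities in place:
# Minimal sketch of A* matching the pseudocode: graph is a dict mapping a node
# to a list of (neighbor, edge_weight) pairs; h is the heuristic function.
import heapq
def a_star(graph, start, goal, h):
    open_heap = [(h(start), start)]          # entries are (fScore, node)
    came_from = {}
    g_score = {start: 0}
    while open_heap:
        f, current = heapq.heappop(open_heap)
        if f > g_score.get(current, float("inf")) + h(current):
            continue                         # stale entry, skip it
        if current == goal:
            path = [current]
            while current in came_from:      # reconstruct_path
                current = came_from[current]
                path.append(current)
            return list(reversed(path))
        for neighbor, weight in graph.get(current, []):
            tentative = g_score[current] + weight
            if tentative < g_score.get(neighbor, float("inf")):
                came_from[neighbor] = current
                g_score[neighbor] = tentative
                heapq.heappush(open_heap, (tentative + h(neighbor), neighbor))
    return None                              # open set empty, goal unreachable
# Tiny example with a zero heuristic (which reduces A* to Dijkstra's algorithm).
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 5)], "c": [("d", 1)], "d": []}
print(a_star(g, "a", "d", lambda n: 0))  # ['a', 'b', 'c', 'd']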
Example
An example of the A* algorithm in action, where nodes are cities connected with roads and h(x) is the straight-line distance to the target point. (Key: green: start; blue: goal; orange: visited.)
The A* algorithm has real-world applications. In this example, edges are railroads and h(x) is the great-circle distance (the shortest possible distance on a sphere) to the target. The algorithm is searching for a path between Washington, D.C., and Los Angeles.
Implementation details
There are a number of simple optimizations or implementation details that can significantly affect the performance of an A* implementation. The first detail to note is that the way the priority queue handles ties can have a significant effect on performance in some situations. If ties are broken so the queue behaves in a LIFO manner, A* will behave like depth-first search among equal cost paths (avoiding exploring more than one equally optimal solution).
When a path is required at the end of the search, it is common to keep with each node a reference to that node's parent. At the end of the search, these references can be used to recover the optimal path. If these references are being kept then it can be important that the same node doesn't appear in the priority queue more than once (each entry corresponding to a different path to the node, and each with a different cost). A standard approach here is to check if a node about to be added already appears in the priority queue. If it does, then the priority and parent pointers are changed to correspond to the lower-cost path. A standard binary heap based priority queue does not directly support the operation of searching for one of its elements, but it can be augmented with a hash table that maps elements to their position in the heap, allowing this decrease-priority operation to be performed in logarithmic time. Alternatively, a Fibonacci heap can perform the same decrease-priority operations in constant amortized time.
Special cases
Dijkstra's algorithm, as another example of a uniform-cost search algorithm, can be viewed as a special case of A* where $h(x) = 0$ for all x. General depth-first search can be implemented using A* by considering that there is a global counter C initialized with a very large value. Every time we process a node we assign C to all of its newly discovered neighbors. After every single assignment, we decrease the counter C by one. Thus the earlier a node is discovered, the higher its value. Both Dijkstra's algorithm and depth-first search can be implemented more efficiently without including an $h$ value at each node.
Properties
Termination and completeness
On finite graphs with non-negative edge weights A* is guaranteed to terminate and is complete, i.e. it will always find a solution (a path from start to goal) if one exists. On infinite graphs with a finite branching factor and edge costs that are bounded away from zero ($d(x, y) > \varepsilon > 0$ for some fixed $\varepsilon$), A* is guaranteed to terminate only if there exists a solution.
Admissibility
A search algorithm is said to be admissible if it is guaranteed to return an optimal solution. If the heuristic function used by A* is admissible, then A* is admissible. An intuitive "proof" of this is as follows:
Call a node closed if it has been visited and is not in the open set. We close a node when we remove it from the open set. A basic property of the A* algorithm, which we'll sketch a proof of below, is that when a node $n$ is closed, $f(n)$ is an optimistic estimate (lower bound) of the true distance from the start to the goal. So when the goal node is closed, its $f$ value is no more than the true distance. On the other hand, it is no less than the true distance, since it is the length of a path to the goal plus a non-negative heuristic term.
Now we'll see that whenever a node is closed, its $f$ value is an optimistic estimate. It is enough to see that whenever the open set is not empty, it has at least one node $n$ on an optimal path to the goal for which $g(n)$ is the true distance from the start to $n$, since in that case $g(n) + h(n)$ underestimates the distance to the goal, and therefore so does the smaller value chosen for the closed vertex. Let $P$ be an optimal path from the start to the goal. Let $n'$ be the last closed node on $P$ for which $g(n')$ is the true distance from the start (the start is one such vertex). The next node in $P$ has the correct $g$ value, since it was updated when $n'$ was closed, and it is open since it is not closed.
Optimality and consistency
Algorithm A is optimally efficient with respect to a set of alternative algorithms Alts on a set of problems P if, for every problem p in P and every algorithm A′ in Alts, the set of nodes expanded by A in solving p is a subset (possibly equal) of the set of nodes expanded by A′ in solving p. The definitive study of the optimal efficiency of A* is due to Rina Dechter and Judea Pearl.
They considered a variety of definitions of Alts and P in combination with A*'s heuristic being merely admissible or being both consistent and admissible. The most interesting positive result they proved is that A*, with a consistent heuristic, is optimally efficient with respect to all admissible A*-like search algorithms on all "non-pathological" search problems. Roughly speaking, their notion of the non-pathological problem is what we now mean by "up to tie-breaking". This result does not hold if A*'s heuristic is admissible but not consistent. In that case, Dechter and Pearl showed there exist admissible A*-like algorithms that can expand arbitrarily fewer nodes than A* on some non-pathological problems.
Optimal efficiency is about the set of nodes expanded, not the number of node expansions (the number of iterations of A*'s main loop). When the heuristic being used is admissible but not consistent, it is possible for a node to be expanded by A* many times, an exponential number of times in the worst case.
In such circumstances, Dijkstra's algorithm could outperform A* by a large margin. However, more recent research found that this pathological case only occurs in certain contrived situations where the edge weight of the search graph is exponential in the size of the graph and that certain inconsistent (but admissible) heuristics can lead to a reduced number of node expansions in A* searches.
Bounded relaxation
While the admissibility criterion guarantees an optimal solution path, it also means that A* must examine all equally meritorious paths to find the optimal path. To compute approximate shortest paths, it is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion. Oftentimes we want to bound this relaxation, so that we can guarantee that the solution path is no worse than (1 + ε) times the optimal solution path. This new guarantee is referred to as ε-admissible.
There are a number of ε-admissible algorithms:
Weighted A*/Static Weighting. If ha(n) is an admissible heuristic function, in the weighted version of the A* search one uses hw(n) = ε·ha(n), with ε > 1, as the heuristic function, and performs the A* search as usual (which eventually happens faster than using ha since fewer nodes are expanded). The path hence found by the search algorithm can have a cost of at most ε times that of the least-cost path in the graph (see the sketch after this list).
Dynamic Weighting uses the cost function f(n) = g(n) + (1 + ε·w(n))·h(n), where w(n) = 1 − d(n)/N if d(n) ≤ N and w(n) = 0 otherwise, and where d(n) is the depth of the search and N is the anticipated length of the solution path.
Sampled Dynamic Weighting uses sampling of nodes to better estimate and debias the heuristic error.
A*ε uses two heuristic functions. The first is used to form the FOCAL list of candidate nodes, and the second, hF, is used to select the most promising node from the FOCAL list.
Aε selects nodes with the function A·f(n) + B·hF(n), where A and B are constants. If no nodes can be selected, the algorithm will backtrack with the function C·f(n) + D·hF(n), where C and D are constants.
AlphA* attempts to promote depth-first exploitation by preferring recently expanded nodes. AlphA* uses the cost function fα(n) = (1 + wα(n))·f(n), where wα(n) = λ if g(π(n)) ≤ g(ñ) and wα(n) = Λ otherwise; λ and Λ are constants with λ ≤ Λ, π(n) is the parent of n, and ñ is the most recently expanded node.
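As a concrete illustration of the first entry above (static weighting), the following Python sketch inflates an admissible Manhattan-distance heuristic by a constant ε on a small grid; the grid, unit edge costs, and parameter values are invented for the example, not taken from the article.

```python
import heapq

def weighted_a_star(grid, start, goal, eps=1.5):
    """4-connected grid search with unit step costs. grid[r][c] == 1 means blocked.
    Uses f(n) = g(n) + eps * h(n), where h is the Manhattan distance and eps >= 1.
    The returned path costs at most eps times the optimal path cost."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g = {start: 0}
    open_heap = [(eps * h(start), start)]
    closed = set()
    expansions = 0
    while open_heap:
        _, cell = heapq.heappop(open_heap)
        if cell in closed:
            continue
        closed.add(cell)
        expansions += 1
        if cell == goal:
            return g[cell], expansions
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[cell] + 1
                if tentative < g.get((nr, nc), float('inf')):
                    g[(nr, nc)] = tentative
                    heapq.heappush(open_heap, (tentative + eps * h((nr, nc)), (nr, nc)))
    return float('inf'), expansions

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
# eps = 1 is plain A*; larger eps typically expands fewer nodes but may
# return a path up to eps times longer than the optimum.
print(weighted_a_star(grid, (0, 0), (2, 3), eps=1.0))
print(weighted_a_star(grid, (0, 0), (2, 3), eps=2.0))
```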
Complexity
The time complexity of A* depends on the heuristic. In the worst case of an unbounded search space, the number of nodes expanded is exponential in the depth of the solution (the shortest path) d: O(b^d), where b is the branching factor (the average number of successors per state). This assumes that a goal state exists at all, and is reachable from the start state; if it is not, and the state space is infinite, the algorithm will not terminate.
The heuristic function has a major effect on the practical performance of A* search, since a good heuristic allows A* to prune away many of the nodes that an uninformed search would expand. Its quality can be expressed in terms of the effective branching factor b*, which can be determined empirically for a problem instance by measuring the number of nodes generated by expansion, N, and the depth of the solution d, then solving N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d.
Good heuristics are those with a low effective branching factor (the optimal being b* = 1).
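A small numerical sketch in Python of estimating the effective branching factor from a measured node count and solution depth by solving the equation above with bisection; the sample numbers are arbitrary.

```python
def effective_branching_factor(n_generated, depth, tol=1e-9):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b by bisection."""
    target = n_generated + 1

    def total(b):
        # 1 + b + b^2 + ... + b^d
        return sum(b ** i for i in range(depth + 1))

    lo, hi = 1.0, float(n_generated)   # for d >= 1, b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# 52 nodes generated for a solution found at depth 5 gives b* of about 1.92.
print(round(effective_branching_factor(52, 5), 2))
```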
The time complexity is polynomial when the search space is a tree, there is a single goal state, and the heuristic function h meets the following condition:
|h(x) − h*(x)| = O(log h*(x))
where h* is the optimal heuristic, the exact cost to get from x to the goal. In other words, the error of h will not grow faster than the logarithm of the "perfect heuristic" h* that returns the true distance from x to the goal.
The space complexity of A* is roughly the same as that of all other graph search algorithms, as it keeps all generated nodes in memory. In practice, this turns out to be the biggest drawback of the A* search, leading to the development of memory-bounded heuristic searches, such as Iterative deepening A*, memory-bounded A*, and SMA*.
Applications
A* is often used for the common pathfinding problem in applications such as video games, but was originally designed as a general graph traversal algorithm.
It finds applications in diverse problems, including the problem of parsing using stochastic grammars in NLP.
Other cases include an Informational search with online learning.
Relations to other algorithms
What sets A* apart from a greedy best-first search algorithm is that it takes the cost/distance already traveled, g(n), into account.
Some common variants of Dijkstra's algorithm can be viewed as a special case of A* where the heuristic h(x) = 0 for all nodes; in turn, both Dijkstra and A* are special cases of dynamic programming.
A* itself is a special case of a generalization of branch and bound.
A* is similar to beam search except that beam search maintains a limit on the numbers of paths that it has to explore.
Variants
Anytime A*
Block A*
D*
Field D*
Fringe
Fringe Saving A* (FSA*)
Generalized Adaptive A* (GAA*)
Incremental heuristic search
Reduced A*
Iterative deepening A* (IDA*)
Jump point search
Lifelong Planning A* (LPA*)
New Bidirectional A* (NBA*)
Simplified Memory bounded A* (SMA*)
Theta*
A* can also be adapted to a bidirectional search algorithm, but special care needs to be taken for the stopping criterion.
See also
Any-angle path planning, search for paths that are not limited to moving along graph edges but rather can take on any angle
Breadth-first search
Depth-first search
Notes
References
Further reading
External links
Variation on A* called Hierarchical Path-Finding A* (HPA*)
Graph algorithms
Routing algorithms
Search algorithms
Combinatorial optimization
Game artificial intelligence
Articles with example pseudocode
Greedy algorithms
Graph distance | A* search algorithm | Mathematics | 4,421 |
58,955 | https://en.wikipedia.org/wiki/Timeline%20of%20artificial%20satellites%20and%20space%20probes | This timeline of artificial satellites and space probes includes uncrewed spacecraft including technology demonstrators, observatories, lunar probes, and interplanetary probes. First satellites from each country are included. Not included are most Earth science satellites, commercial satellites or crewed missions.
Timeline
1950s
1960s
1970s
1980s
1990s
2000s
2010s
2020s
References
External links
Current and Upcoming Launches
Missions-NASA
Unmanned spaceflight discussion forum
Chronology of Lunar and Planetary Exploration (homepage)
Artificial satellites and space probes | Timeline of artificial satellites and space probes | Astronomy | 100 |
53,913,187 | https://en.wikipedia.org/wiki/Land%20change%20modeling | Land change models (LCMs) describe, project, and explain changes in and the dynamics of land use and land-cover. LCMs are a means of understanding ways that humans change the Earth's surface in the past, present, and future.
Land change models are valuable in development policy, helping guide more appropriate decisions for resource management and the natural environment at a variety of scales ranging from a small piece of land to the entire spatial extent. Moreover, developments within land-cover, environmental and socio-economic data (as well as within technological infrastructures) have increased opportunities for land change modeling to help support and influence decisions that affect human-environment systems, as national and international attention increasingly focuses on issues of global climate change and sustainability.
Importance
Changes in land systems have consequences for climate and environmental change on every scale. Therefore, decisions and policies in relation to land systems are very important for reacting to these changes and working towards a more sustainable society and planet.
Land change models are significant in their ability to help guide the land systems to positive societal and environmental outcomes at a time when attention to changes across land systems is increasing.
A plethora of science and practitioner communities have been able to advance the amount and quality of data in land change modeling in the past few decades. That has influenced the development of methods and technologies for modeling land change. The multitude of land change models that have been developed are significant in their ability to address land system change and useful in various science and practitioner communities.
For the science community, land change models are important in their ability to test theories and concepts of land change and its connections to human-environment relationships, as well as explore how these dynamics will change future land systems without real-world observation.
Land change modeling is useful to explore spatial land systems, uses, and covers. Land change modeling can account for complexity within dynamics of land use and land cover by linking with climatic, ecological, biogeochemical, biogeophysical and socioeconomic models. Additionally, LCMs are able to produce spatially explicit outcomes according to the type and complexity within the land system dynamics within the spatial extent. Many biophysical and socioeconomic variables influence and produce a variety of outcomes in land change modeling.
Model uncertainty
A notable property of all land change models is that they have some irreducible level of uncertainty in the model structure, parameter values, and/or input data. For instance, one uncertainty within land change models results from temporal non-stationarity in land change processes, so the further into the future the model is applied, the more uncertain it is. Other uncertainties within land change models are data and parameter uncertainties within physical principles (i.e., surface typology), which lead to uncertainties in understanding and predicting physical processes.
Furthermore, land change model design is a product of both decision-making and physical processes. Human-induced impact on the socio-economic and ecological environment is important to take into account, as it constantly changes land cover and sometimes adds to model uncertainty. To avoid model uncertainty and interpret model outputs more accurately, a model diagnosis is used to understand more about the connections between land change models and the actual land system of the spatial extent. The overall importance of model diagnosis with model uncertainty issues is its ability to assess how interacting processes and the landscape are represented, as well as the uncertainty within the landscape and its processes.
Approaches
Machine learning and statistical models
A machine-learning approach uses land-cover data from the past to try to assess how land will change in the future, and works best with large datasets. There are multiple types of machine-learning and statistical models - a study in western Mexico from 2011 found that results from two outwardly similar models were considerably different, as one used a neural network and the other used a simple weights-of-evidence model.
Cellular models
A cellular land change model uses maps of suitability for various types of land use, and compares areas that are immediately adjacent to one another to project changes into the future. Variations in the scale of cells in a cellular model can have significant impacts on model outputs.
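The following toy sketch in Python/NumPy is not any published cellular land change model, but illustrates the idea: each cell's conversion score combines a suitability value with the fraction of already-converted neighbours, and cells above a threshold convert. All weights and thresholds here are invented for the example.

```python
import numpy as np

def cellular_step(land, suitability, weight=0.5, threshold=0.5):
    """One synchronous update of a toy cellular land change model.
    land: 2-D array of 0/1 (1 = converted, e.g. built-up).
    suitability: 2-D array in [0, 1] from a suitability map.
    A cell converts when weight * suitability + (1 - weight) * neighbour
    fraction exceeds the threshold."""
    padded = np.pad(land, 1, mode="constant")
    # Fraction of the 8 surrounding cells that are already converted.
    neigh = sum(np.roll(np.roll(padded, dr, 0), dc, 1)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0))[1:-1, 1:-1] / 8.0
    score = weight * suitability + (1 - weight) * neigh
    return np.where((land == 1) | (score > threshold), 1, 0)

rng = np.random.default_rng(0)
land = np.zeros((20, 20), dtype=int)
land[9:11, 9:11] = 1                      # a small initial settlement
suitability = rng.random((20, 20))
for _ in range(5):                        # project five time steps forward
    land = cellular_step(land, suitability)
print(land.sum(), "cells converted after 5 steps")
```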
Sector-based and spatially disaggregated economic models
Economic models are built on principles of supply and demand. They use mathematical parameters in order to predict what land types will be desired and which will be discarded. These are frequently built for urban areas, such as a 2003 study of the highly dense Pearl River Delta in southern China.
Agent-based models
Agent-based models try to simulate the behavior of many individuals making independent choices, and then see how those choices affect the landscape as a whole. Agent-based modeling can be complex - for instance, a 2005 study combined an agent-based model with computer-based genetic programming to explore land change in the Yucatan peninsula of Mexico.
Hybrid approaches
Many models do not limit themselves to one of the approaches above - they may combine several in order to develop a fully comprehensive and accurate model.
Evaluation
Purpose
Land change models are evaluated to appraise and quantify the performance of a model's predictive power in terms of spatial allocation and quantity of change. Evaluating a model allows the modeler to assess its performance and to revise the "model's output, data measurement, and the mapping and modeling of data" for future applications. The purpose of model evaluation is not to develop a singular metric or method to maximize a "correct" outcome, but to develop tools to evaluate and learn from model outputs and so produce better models for their specific applications.
Methods
There are two types of validation in land change modeling: process validation and pattern validation. Process validation compares the match between "the process in the model and the process operating in the real world". Process validation is most commonly used in agent-based modeling, whereby the modeler uses observed behaviors and decisions to inform the process determining land change in the model. Pattern validation compares model outputs (i.e., predicted change) and observed outputs (i.e., reference change). Three-map analysis is a commonly used method for pattern validation, in which three maps (a reference map at time 1, a reference map at time 2, and a simulated map of time 2) are compared. This generates a cross-comparison of the three maps where the pixels are classified into one of these five categories:
Hits: reference change is correctly simulated as change
Misses: reference change is simulated incorrectly as persistence
False alarms: persistence in the reference data is simulated incorrectly as change
Correct rejections: reference persistence correctly simulated as persistence
Wrong hits: reference change correctly simulated as change, but to the wrong gaining category
Because three map comparisons include both errors and correctly simulated pixels, it results in a visual expression of both allocation and quantity errors.
Single-summary metrics are also used to evaluate LCMs. There are many single summary metrics that modelers have used to evaluate their models and are often utilized to compare models to each other. One such metric is the Figure of Merit (FoM) which uses the hit, miss, and false alarm values generated from a three-map comparison to generate a percentage value that expresses the intersection between reference and simulated change. Single summary metrics can obfuscate important information, but the FoM can be useful especially when the hit, miss and false alarm values are reported as well.
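A sketch in Python/NumPy of the three-map cross-comparison and the Figure of Merit as described above; the category definitions follow the list, while the tiny example rasters are invented.

```python
import numpy as np

def three_map_comparison(ref_t1, ref_t2, sim_t2):
    """Cross-tabulate reference time-1, reference time-2 and simulated
    time-2 maps (integer category rasters of equal shape)."""
    ref_change = ref_t1 != ref_t2
    sim_change = ref_t1 != sim_t2
    return {
        # reference change simulated as change, to the correct category
        "hits": int(np.sum(ref_change & sim_change & (ref_t2 == sim_t2))),
        # reference change simulated incorrectly as persistence
        "misses": int(np.sum(ref_change & ~sim_change)),
        # reference persistence simulated incorrectly as change
        "false_alarms": int(np.sum(~ref_change & sim_change)),
        # reference persistence correctly simulated as persistence
        "correct_rejections": int(np.sum(~ref_change & ~sim_change)),
        # reference change simulated as change, but to the wrong category
        "wrong_hits": int(np.sum(ref_change & sim_change & (ref_t2 != sim_t2))),
    }

def figure_of_merit(counts):
    """FoM = hits / (hits + misses + false alarms), here as a ratio in [0, 1]."""
    h, m, f = counts["hits"], counts["misses"], counts["false_alarms"]
    return h / float(h + m + f)

# Toy 3x3 rasters with two land categories (0 and 1).
ref_t1 = np.array([[0, 0, 0], [0, 0, 1], [1, 1, 1]])
ref_t2 = np.array([[0, 1, 0], [1, 1, 1], [1, 1, 1]])
sim_t2 = np.array([[0, 1, 1], [0, 1, 1], [1, 1, 1]])
c = three_map_comparison(ref_t1, ref_t2, sim_t2)
print(c, "FoM =", round(figure_of_merit(c), 2))
```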
Improvements
The separation of calibration from validation has been identified as a modeling challenge that should be addressed. Problems commonly arise from modelers' use of information from after the first time period. This can cause a map to appear to have a level of accuracy that is much higher than the model's actual predictive power. Additional improvements that have been discussed within the field include characterizing the difference between allocation errors and quantity errors, which can be done through three-map comparisons, as well as including both observed and predicted change in the analysis of land change models. Single summary metrics have been overly relied on in the past, and have varying levels of usefulness when evaluating LCMs. Even the best single summary metrics often leave out important information, and reporting metrics like FoM along with the maps and values that are used to generate them can communicate necessary information that would otherwise be obfuscated.
Implementation opportunities
Scientists use LCMs to build and test theories in land change modeling for a variety of human and environmental dynamics. Land change modeling has a variety of implementation opportunities in many science and practice disciplines, such as in decision-making, policy, and in real-world application in public and private domains. Land change modeling is a key component of land change science, which uses LCMs to assess long-term outcomes for land cover and climate. The science disciplines use LCMs to formalize and test land change theory, and to explore and experiment with different scenarios of land change modeling. The practical disciplines use LCMs to analyze current land change trends and explore future outcomes from policies or actions in order to set appropriate guidelines, limits and principles for policy and action. Research and practitioner communities may study land change to address topics related to land-climate interactions, water quantity and quality, food and fiber production, and urbanization, infrastructure, and the built environment.
Improvement and advancement
Improved land observational strategies
One improvement for land change modeling can be made through better data and integration with available data and models. Improved observational data can influence modeling quality. Finer spatial and temporal resolution data that can integrate with socioeconomic and biogeophysical data can help land change modeling couple the socioeconomic and biogeological modeling types. Land change modelers should value data at finer scales. Fine data can give a better conceptual understanding of underlying constructs of the model and capture additional dimensions of land use. It is important to maintain the temporal and spatial continuity of data from airborne-based and survey-based observation through constellations of smaller satellite coverage, image processing algorithms, and other new data to link satellite-based land use information and land management information. It is also important to have better information on land change actors and their beliefs, preferences, and behaviors to improve the predictive ability of models and evaluate the consequences of alternative policies.
Aligning model choices with model goals
One important improvement for land change modeling can be made through better aligning model choices with model goals. It is important to choose the appropriate modeling approach based on the scientific and application contexts of the specific study of interest. For example, when someone needs to design a model with policy and policy actors in mind, they may choose an agent-based model. Here, structural economic or agent-based approaches are useful, but specific patterns and trends in land change, as with many ecological systems, may not be as useful. When one needs to grasp the early stages of problem identification, and thus needs to understand the scientific patterns and trends of land change, machine learning and cellular approaches are useful.
Integrating positive and normative approaches
Land change modeling should also better integrate positive and normative approaches to explanation and prediction based on evidence-based accounts of land systems. It should also integrate optimization approaches to explore the outcomes that are the most beneficial and the processes that might produce those outcomes.
Integrating across scales
It is important to integrate data across scales. A model's design is based on the dominant processes and data from a specific scale of application and spatial extent. Cross-scale dynamics and feedbacks between temporal and spatial scales influence the patterns and processes of the model. Processes like telecoupling, indirect land use change, and adaptation to climate change at multiple scales require better representation by cross-scale dynamics. Implementing these processes will require a better understanding of feedback mechanisms across scales.
Opportunities in research infrastructure and cyberinfrastructure support
As there is continuous reinvention of modeling environments, frameworks, and platforms, land change modeling can improve from better research infrastructure support. For example, model and software infrastructure development can help avoid duplication of initiatives by land change modeling community members, co-learn about land change modeling, and integrate models to evaluate impacts of land change. Better data infrastructure can provide more data resources to support compilation, curation, and comparison of heterogeneous data sources. Better community modeling and governance can advance decision-making and modeling capabilities within a community with specific and achievable goals. Community modeling and governance would provide a step towards reaching community agreement on specific goals to move modeling and data capabilities forward.
A number of modern challenges in land change modeling can potentially be addressed through contemporary advances in cyberinfrastructure such as crowdsourcing, "mining" of distributed data, and improving high-performance computing. Because it is important for modelers to find more data to better construct, calibrate, and validate structural models, the ability to analyze large amounts of data on individual behaviors is helpful. For example, modelers can find point-of-sale data on individual purchases by consumers and internet activities that reveal social networks. However, some issues of privacy and propriety for crowdsourcing improvements have not yet been resolved.
The land change modeling community can also benefit from Global Positioning System and Internet-enabled mobile device data distribution. Combining various structural-based data-collecting methods can improve the availability of microdata and the diversity of people that see the findings and outcomes of land change modeling projects. For example, citizen-contributed data supported the implementation of Ushahidi in Haiti after the 2010 earthquake, helping with at least 4,000 disaster events. Universities, non-profit agencies, and volunteers are needed to collect information on events like this to make positive outcomes and improvements in land change modeling and land change modeling applications. Tools such as mobile devices are available to make it easier for participants to collect micro-data on agents. Google Maps uses cloud-based mapping technologies with datasets that are co-produced by the public and scientists. Examples in agriculture, such as coffee farmers using Avaaj Otalo, showed the use of mobile phones for collecting information and as an interactive voice forum.
Cyberinfrastructure developments may also increase the ability of land change modeling to meet computational demands of various modeling approaches given increasing data volumes and certain expected model interactions. For example, improving the development of processors, data storage, network bandwidth, and coupling land change and environmental process models at high resolution.
Model evaluation
An additional way to improve land change modeling is through improvement of model evaluation approaches. Improvements in sensitivity analysis are needed to gain a better understanding of the variation in model output in response to model elements like input data, model parameters, initial conditions, boundary conditions, and model structure. Improvement in pattern validation can help land change modelers make comparisons between model outputs parameterized for some historic case, like maps, and observations for that case. Improvement in handling uncertainty sources is needed to improve forecasting of future states that are non-stationary in processes, input variables, and boundary conditions. One can explicitly recognize stationarity assumptions and explore data for evidence of non-stationarity to better acknowledge and understand model uncertainty. Improvement in structural validation can help improve acknowledgement and understanding of the processes in the model and the processes operating in the real world through a combination of qualitative and quantitative measures.
See also
GeoMod
Land change science
Land use and land use planning
Land-use forecasting
Land Use Evolution and Impact Assessment Model (LEAM)
TerrSet
References
Land management
Forecasting
Deforestation
Environmental modelling
Land use
Physical geography | Land change modeling | Environmental_science | 3,102 |
63,361,034 | https://en.wikipedia.org/wiki/Power%20in%20Numbers%3A%20The%20Rebel%20Women%20of%20Mathematics | Power in Numbers: The Rebel Women of Mathematics is a book on women in mathematics, by Talithia Williams. It was published in 2018 by Race Point Publishing.
Topics and related works
This book is a collection of biographies of 27 women mathematicians, and brief sketches of the lives of many others. It is similar to previous works including Osen's Women in Mathematics (1974), Perl's Math Equals (1978), Henrion's Women in Mathematics (1997), Murray's Women Becoming Mathematicians (2000), Complexities: Women in Mathematics (2005), Green and LaDuke's Pioneering Women in American Mathematics (2009), and Swaby's Headstrong (2015).
The book is divided into three sections. The first two cover mathematics before and after World War II, when women's mathematical contributions to codebreaking and other aspects of the war effort became crucial;
together they include the biographies of 11 mathematicians. The final section, on modern (post-1965) mathematics has another 16. Mathematics is interpreted in a broad sense, including people who trained as mathematicians and worked in industry, or who made mathematical contributions in other fields. It includes people from more diverse backgrounds than previous such collections, including 18th-century Chinese astronomer Wang Zhenyi, Native American engineer Mary G. Ross, African-American rocket scientist Annie Easley, Iranian mathematician Maryam Mirzakhani, and Mexican-American mathematician Pamela E. Harris.
Mathematicians
The mathematicians discussed in this book include:
Part I: The Pioneers
Marie Crous
Émilie du Châtelet
Maria Gaetana Agnesi
Philippa Fawcett
Isabel Maddison
Grace Chisholm Young
Wang Zhenyi
Sophie Germain
Winifred Edgerton Merrill
Sofya Kovalevskaya
Emmy Noether
Euphemia Haynes
Part II: From Code Breaking to Rocket Science
Grace Hopper
Mary G. Ross
Dorothy Vaughan
Katherine Johnson
Mary Jackson
Shakuntala Devi
Annie Easley
Margaret Hamilton
Part III: Modern Math Mavens
Sylvia Bozeman
Eugenia Cheng
Carla Cotwright-Williams
Pamela E. Harris
Maryam Mirzakhani
Ami Radunskaya
Daina Taimiņa
Tatiana Toro
Chelsea Walton
Sara Zahedi
Audience and reception
The book is aimed at a young audience, with many images and few mathematical details. Nevertheless, each biography is accompanied by a general-audience introduction to the subject's mathematical work, and beyond images of the women profiled, the book includes many mathematical illustrations and historical images that bring to life these contributions. Reviewer Emille Davie Lawrence suggests that the book could also find its way to the coffee tables of professional mathematicians, and spark conversations with guests.
Reviewer Amy Ackerberg-Hastings criticizes the book for overlooking much scholarly work on the subject of women in mathematics, for its lack of detail for some notable women including Émilie du Châtelet and Maria Gaetana Agnesi, and for omitting others such as Mary Somerville. Nevertheless, she recommends it as a "gift book for middle schoolers", as a way of motivating them to work in STEM fields.
Reviewer Allan Stenger notes with approval the book's inclusion of information about how each subject became interested in mathematics, and despite catching some minor errors calls it "a good bet for inspiring bright young women to have an interest in math". Similarly, reviewer Angela Mihai writes that it "will educate and encourage many aspiring mathematicians".
References
Women in mathematics
Biographies and autobiographies of mathematicians
2018 non-fiction books | Power in Numbers: The Rebel Women of Mathematics | Technology | 711 |
31,453,337 | https://en.wikipedia.org/wiki/Algebraic%20geometry%20of%20projective%20spaces | The concept of a Projective space plays a central role in algebraic geometry. This article aims to define the notion in terms of abstract algebraic geometry and to describe some basic uses of projective spaces.
Homogeneous polynomial ideals
Let k be an algebraically closed field, and V be a finite-dimensional vector space over k. The symmetric algebra of the dual vector space V* is called the polynomial ring on V and denoted by k[V]. It is a naturally graded algebra by the degree of polynomials.
The projective Nullstellensatz states that, for any homogeneous ideal I that does not contain all polynomials of a certain degree (referred to as an irrelevant ideal), the common zero locus of all polynomials in I (or Nullstelle) is non-trivial (i.e. the common zero locus contains more than the single element {0}), and, more precisely, the ideal of polynomials that vanish on that locus coincides with the radical of the ideal I.
This last assertion is best summarized by the formula: for any relevant ideal I, I(Z(I)) = √I, where Z(I) denotes the common zero locus of I and I(Z(I)) the ideal of polynomials vanishing on it.
In particular, maximal homogeneous relevant ideals of k[V] are one-to-one with lines through the origin of V.
Construction of projectivized schemes
Let V be a finite-dimensional vector space over a field k. The scheme over k defined by Proj(k[V]) is called the projectivization of V. The projective n-space over k is the projectivization of the vector space k^(n+1).
The definition of the sheaf is done on the base of principal open sets D(P), where P varies over the set of homogeneous polynomials, by setting the sections over D(P)
to be the ring (k[V][1/P])_0, the zero-degree component of the ring obtained by localization at P. Its elements are therefore the rational functions with homogeneous numerator and some power of P as the denominator, with the same degree as the numerator.
The situation is most clear at a non-vanishing linear form φ. The restriction of the structure sheaf to the open set D(φ) is then canonically identified with the affine scheme Spec(k[ker φ]). Since the D(φ) form an open cover of the projectivization, projective schemes can be thought of as being obtained by gluing isomorphic affine schemes via projectivization.
It can be noted that the ring of global sections of this scheme is a field, which implies that the scheme is not affine. Any two open sets intersect non-trivially: i.e. the scheme is irreducible. When the field k is algebraically closed, the projectivization is in fact an abstract variety, which furthermore is complete. cf. Glossary of scheme theory
Divisors and twisting sheaves
The Proj construction in fact gives more than a mere scheme: a sheaf in graded modules over the structure sheaf is defined in the process. The homogeneous components of this graded sheaf are denoted O(m), the Serre twisting sheaves. All of these sheaves are in fact line bundles. By the correspondence between Cartier divisors and line bundles, the first twisting sheaf O(1) is equivalent to hyperplane divisors.
Since the ring of polynomials is a unique factorization domain, any prime ideal of height 1 is principal, which shows that any Weil divisor is linearly equivalent to some power of a hyperplane divisor. This consideration proves that the Picard group of a projective space is free of rank 1. That is, Pic(P(V)) ≅ Z, and the isomorphism is given by the degree of divisors.
Classification of vector bundles
The invertible sheaves, or line bundles, on the projective n-space over a field k are exactly the twisting sheaves O(m), so the Picard group of the projective n-space is isomorphic to Z. The isomorphism is given by the first Chern class.
The space of local sections on an open set U of the line bundle O(m) is the space of homogeneous degree-m regular functions on the cone in V associated to U. In particular, the space of global sections
vanishes if m < 0, consists of the constants in k for m = 0, and of the homogeneous polynomials of degree m for m > 0. (Hence, for the projective n-space, it has dimension equal to the binomial coefficient C(n+m, m).)
The Birkhoff-Grothendieck theorem states that on the projective line, any vector bundle splits in a unique way as a direct sum of the line bundles.
Important line bundles
The tautological bundle, which appears for instance as the exceptional divisor of the blowing up of a smooth point, is the sheaf O(-1). The canonical bundle
of the projective n-space is O(-(n+1)).
This fact derives from a fundamental geometric statement on projective spaces: the Euler sequence.
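For reference, a standard statement of the Euler sequence (written here in common notation, which the article itself does not fix):

```latex
% Euler sequence on \mathbf{P}^n = \mathbf{P}(V), with \dim V = n+1:
0 \longrightarrow \mathcal{O}_{\mathbf{P}^n}
  \longrightarrow \mathcal{O}_{\mathbf{P}^n}(1)^{\oplus (n+1)}
  \longrightarrow T_{\mathbf{P}^n}
  \longrightarrow 0 .
% Taking top exterior powers (determinants) yields the canonical bundle:
\omega_{\mathbf{P}^n} \;=\; \det T_{\mathbf{P}^n}^{\vee}
  \;=\; \mathcal{O}_{\mathbf{P}^n}(-n-1).
```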
The negativity of the canonical line bundle makes projective spaces prime examples of Fano varieties; equivalently, their anticanonical line bundle is ample (in fact very ample). Their index (cf. Fano varieties) is n + 1, and, by a theorem of Kobayashi-Ochiai, projective spaces are characterized amongst Fano varieties by the property Ind(X) = dim(X) + 1.
Morphisms to projective schemes
As affine spaces can be embedded in projective spaces, all affine varieties can be embedded in projective spaces too.
Any choice of a finite system of nonsimultaneously vanishing global sections of a globally generated line bundle defines a morphism to a projective space. A line bundle whose base can be embedded in a projective space by such a morphism is called very ample.
The group of symmetries of the projective space P(V) is the group of projectivized linear automorphisms PGL(V). The choice of a morphism to a projective space modulo the action of this group is in fact equivalent to the choice of a globally generating n-dimensional linear system of divisors on a line bundle on X. The choice of a projective embedding of X, modulo projective transformations, is likewise equivalent to the choice of a very ample line bundle on X.
A morphism to a projective space defines a globally generated line bundle on the source by pulling back O(1), together with a linear system of divisors given by the pulled-back hyperplane sections.
If the range of the morphism is not contained in a hyperplane divisor, then the pull-back is an injection on global sections and the resulting linear system of divisors
is a linear system of dimension n.
An example: the Veronese embeddings
The Veronese embeddings are the embeddings of the projective n-space into the projective space of dimension C(n+d, d) − 1 defined by the complete linear system of degree-d forms, for d ≥ 1.
See the answer on MathOverflow for an application of the Veronese embedding to the calculation of cohomology groups of smooth projective hypersurfaces (smooth divisors).
Curves in projective spaces
As Fano varieties, the projective spaces are ruled varieties. The intersection theory of curves in the projective plane yields the Bézout theorem.
See also
General algebraic geometry
Scheme (mathematics)
Projective variety
Proj construction
General projective geometry
Projective space
Projective geometry
Homogeneous polynomial
Notes
References
Algebraic geometry
Projective geometry
Algebraic varieties
Geometry of divisors
Space (mathematics) | Algebraic geometry of projective spaces | Mathematics | 1,388 |
60,939,156 | https://en.wikipedia.org/wiki/Felisa%20N%C3%BA%C3%B1ez%20Cubero | Felisa Núñez Cubero (January 21, 1924 - August 10, 2017) was a Spanish physicist. She was the first female professor at the Polytechnic University of Madrid.
Career
She graduated in Chemical Sciences in 1946 in Valladolid, and began working with Professor Velayos, who greatly influenced her scientific vocation, orienting it towards physics and directing her doctoral thesis in the area of magnetism. In 1958 she received her doctorate in physics from the UCM with a thesis on permanent magnets, and three years later she obtained a scholarship from the Ramsay Memorial Fellowship Trust to expand her research activity at the University of Nottingham, working on magnetic domains with Professor Bates. Her work is cited in the books Modern Magnetism by Bates and Magnetism by Rado and Shull, whose four volumes constitute an authentic encyclopedia of magnetism.
In her academic life she carried out teaching activities, starting as assistant and associate professor at the University of Valladolid (1946-1956), later as assistant professor at the Complutense University of Madrid (1956-1982), and finally at the Polytechnic University of Madrid. There she was professor of physics, first at the University School of Telecommunications Engineering (1964-1983) and later at the University School of Forestry Technical Engineering (1983-2000), the last ten years as professor emerita. In 1990 the Madrid universities UCM and UPM awarded her gold medals.
Selected works
"Electricity and Magnetism Laboratory" Editorial Urmo, 1972
"100 problems Electromagnetism" Alianza Editorial, 1997
Awards and honors
First Prize Teaching of Physics. 1999, awarded by the Royal Spanish Society of Physics
Gold Medal of the Complutense University of Madrid. June 1989
Gold Medal of the Polytechnic University of Madrid. October 1989
References
Spanish women physicists
Spanish physicists
Condensed matter physicists
1924 births
2017 deaths
Technical University of Madrid
Academic staff of the Complutense University of Madrid | Felisa Núñez Cubero | Physics,Materials_science | 391 |
2,868,248 | https://en.wikipedia.org/wiki/SNOPT | SNOPT, for Sparse Nonlinear OPTimizer, is a software package for solving large-scale nonlinear optimization problems written by Philip Gill, Walter Murray and Michael Saunders. SNOPT is mainly written in Fortran, but interfaces to C, C++, Python and MATLAB are available.
It employs a sparse sequential quadratic programming (SQP) algorithm with limited-memory quasi-Newton approximations to the Hessian of the Lagrangian. It is especially effective for nonlinear problems with functions and gradients that are expensive to evaluate. The functions should be smooth but need not be convex.
SNOPT is used in several trajectory optimization software packages, including Copernicus, AeroSpace Trajectory Optimization and Software (ASTOS), General Mission Analysis Tool, and Optimal Trajectories by Implicit Simulation (OTIS). It is also available in the Astrogator module of Systems Tool Kit.
SNOPT is supported in the AIMMS, AMPL, APMonitor, General Algebraic Modeling System (GAMS), and TOMLAB modeling systems.
References
External links
Latest Documentation (for SNOPT 7.7) :
SNOPT 7.7 User's Manual (.pdf)
SNOPT 7 Reference Guide (.html)
Numerical software
Mathematical optimization software | SNOPT | Mathematics | 260 |
9,335,064 | https://en.wikipedia.org/wiki/Ledinegg%20instability | In fluid dynamics, the Ledinegg instability occurs in two-phase flow, especially in a boiler tube, when the boiling boundary is within the tube. For a given mass flux J through the tube, the pressure drop per unit length (which typically varies as the square of the mass flux and inversely as the density, i.e., as ) is much less when the flow is wholly of liquid than when the flow is wholly of steam. Thus, as the boiling boundary moves up the tube, the total pressure drop falls, potentially increasing the flow in an unstable manner. Boiler tubes normally overcome this (which is effectively a 'negative resistance' regime) by incorporating a narrow orifice at the entry, to give a stabilising pressure drop on entry.
References
Ruspini, Two-phase flow instabilities: A review, IJHMT, 71, 2013
System Instabilities https://web.archive.org/web/20060721232210/http://caltechbook.library.caltech.edu/51/01/chap15.pdf
http://authors.library.caltech.edu/25021/1/chap15.pdf
Fluid dynamics | Ledinegg instability | Chemistry,Engineering | 255 |
2,903,820 | https://en.wikipedia.org/wiki/Iota%20Cancri | Iota Cancri (ι Cnc, ι Cancri) is a double star in the constellation Cancer approximately 300 light years from Earth.
The two stars of ι Cancri are separated by 30 arcseconds, changing only slowly. Although no orbit has been derived, the two stars show a large common proper motion and are assumed to be gravitationally related.
The brighter star, ι Cancri A, is a yellow G-type giant with an apparent magnitude of +4.02. It is a mild barium star, thought to be formed by mass transfer of enriched material from an asymptotic giant branch star onto a less evolved companion. No such donor has been detected in the ι Cancri system, but it is assumed that there is an unseen white dwarf.
The fainter of the two stars, ι Cancri B, is a white A-type main sequence dwarf with an apparent magnitude of +6.57. It is a shell star, surrounded by material expelled by its rapid rotation.
References
A-type main-sequence stars
G-type giants
Suspected variables
Shell stars
Barium stars
Binary stars
Cancri, Iota
Cancer (constellation)
Durchmusterung objects
Cancri, 48
074738 9
043100 3
3474 5 | Iota Cancri | Astronomy | 266 |
68,144,048 | https://en.wikipedia.org/wiki/NGC%203602 | NGC 3602 is a barred spiral galaxy in the constellation Leo. It was discovered on March 4, 1865 by the astronomer Albert Marth.
See also
List of largest galaxies
List of nearest galaxies
References
External links
Leo (constellation)
3602
Barred spiral galaxies
034351 | NGC 3602 | Astronomy | 55 |
4,547,563 | https://en.wikipedia.org/wiki/Ammonia%20production | Ammonia production takes place worldwide, mostly in large-scale manufacturing plants that produce 240 million metric tonnes of ammonia (2023) annually. Based on the annual production in 2023 the major part (~70%) of the production facilities are based in China (29%), India (9.5%), USA (9.5%), Russia (9.5%), Indonesia (4%), Iran (2,9%), Egypt (2,7%), and middle Saudi Arabia (2,7%). 80% or more of ammonia is used as fertilizer. Ammonia is also used for the production of plastics, fibres, explosives, nitric acid (via the Ostwald process), and intermediates for dyes and pharmaceuticals. The industry contributes 1% to 2% of global . Between 18–20 Mt of the gas is transported globally each year.
History
Dry distillation
Before the start of World War I, most ammonia was obtained by the dry distillation of nitrogenous vegetable and animal products; by the reduction of nitrous acid and nitrites with hydrogen; and also by the decomposition of ammonium salts by alkaline hydroxides or by quicklime, the salt most generally used being the chloride (sal-ammoniac).
Frank–Caro process
Adolph Frank and Nikodem Caro found that nitrogen could be fixed by using the same calcium carbide produced to make acetylene to form calcium cyanamide, which could then be decomposed with water to form ammonia.
The method was developed between 1895 and 1899.
CaO + 3C <=> CaC2 + CO
CaC2 + N2 <=> CaCN2 + C
CaCN2 + 3H2O <=> CaCO3 + 2NH3
Birkeland–Eyde process
While not strictly speaking a method of producing ammonia, nitrogen can be fixed by passing it (with oxygen) through an electric spark.
Nitrides
Heating metals such as magnesium in an atmosphere of pure nitrogen produces the metal nitride, which when combined with water produces the metal hydroxide and ammonia.
Haber-Bosch process
Environmental Impacts
Because ammonia production depends on a reliable supply of energy, fossil fuels are often used, contributing to climate change when they are combusted and release greenhouse gases. Ammonia production also introduces nitrogen into the Earth's nitrogen cycle, causing imbalances that contribute to environmental issues such as algal blooms. Certain production methods have less environmental impact, such as those powered by renewable or nuclear energy.
Sustainable production
Sustainable production is possible by using non-polluting methane pyrolysis or generating hydrogen by water electrolysis with renewable energy sources. Thyssenkrupp Uhde Chlorine Engineers expanded its annual production capacity for alkaline water electrolysis to 1 gigawatt of electrolyzer capacity for this purpose.
In a hydrogen economy some hydrogen production could be diverted to feedstock use. For example, in 2002, Iceland produced 2,000 tons of hydrogen gas by electrolysis, using excess power from its hydroelectric plants, primarily for fertilizer. The Vemork hydroelectric plant in Norway used its surplus electricity output to generate renewable nitric acid from 1911 to 1971, requiring 15 MWh/ton of nitric acid. The same reaction is carried out by lightning, providing a natural source of soluble nitrates. Natural gas remains the lowest cost method.
Wastewater is often high in ammonia. Because discharging ammonia-laden water into the environment damages marine life, nitrification is often necessary to remove the ammonia. This may become a potentially sustainable source of ammonia given its abundance. Alternatively, ammonia from wastewater can be sent into an ammonia electrolyzer (ammonia electrolysis) operating with renewable energy sources to produce hydrogen and clean water. Ammonia electrolysis may require much less thermodynamic energy than water electrolysis (only 0.06 V in alkaline media).
Another option for recovering ammonia from wastewater is to use the mechanics of the ammonia-water thermal absorption cycle. Ammonia can thus be recovered either as a liquid or as ammonium hydroxide. The advantage of the former is that it is much easier to handle and transport, whereas the latter has commercial value at concentrations of 30 percent in solution.
Coal
Making ammonia from coal is mainly practised in China, where it is the main source. Oxygen from the air separation module is fed to the gasifier to convert coal into synthesis gas (, CO, ) and . Most gasifiers are based on fluidized beds that operate above atmospheric pressure and have the ability to utilize different coal feeds.
Production plants
The American Oil Co in the mid-1960s positioned a single-converter ammonia plant engineered by M. W. Kellogg at Texas City, Texas, with a capacity of 544 m.t./day. It used a single-train design that received the "Kirkpatrick Chemical Engineering Achievement Award" in 1967. The plant used a four-case centrifugal compressor to compress the syngas to a pressure of 152 bar. Final compression to an operating pressure of 324 bar occurred in a reciprocating compressor. Centrifugal compressors for the synthesis loop and refrigeration services provided significant cost reductions.
Almost every plant built between 1964 and 1992 had large single-train designs with syngas manufacturing at 25–35 bar and ammonia synthesis at 150–200 bar. Braun Purifier process plants utilized a primary or tubular reformer with a low outlet temperature and high methane leakage to reduce the size and cost of the reformer. Air was added to the secondary reformer to reduce the methane content of the primary reformer exit stream to 1–2%. Excess nitrogen and other impurities were removed downstream of the methanator. Because the syngas was essentially free of impurities, two axial-flow ammonia converters were used. In early 2000 Uhde developed a process that enabled plant capacities of 3300 mtpd and more. The key innovation was a single-flow synthesis loop at medium pressure in series with a conventional high-pressure synthesis loop.
Small-scale onsite plants
In April 2017, the Japanese company Tsubame BHB implemented a method of ammonia synthesis that could allow economic production at scales one to two orders of magnitude smaller than ordinary plants by utilizing an electrochemical catalyst.
Green ammonia
In 2024, the BBC reported that numerous companies were attempting to reduce the roughly 2% of global carbon dioxide emissions caused by the production and use of ammonia by producing the product in labs. The industry has become known as "green ammonia."
Byproducts and shortages due to shutdowns
One of the main industrial byproducts of ammonia production is CO2. In 2018, high oil prices resulted in an extended summer shutdown of European ammonia factories, causing a commercial CO2 shortage and thus limiting production of CO2-dependent products such as beer and soft drinks. The situation repeated in September 2021 due to a 250-400% increase in the wholesale price of natural gas over the course of the year.
See also
Ammonia
Amine gas treating
Haber process
Liquid nitrogen wash
Hydrogen economy
Methane pyrolysis
References
Works cited
External links
Today's Hydrogen Production Industry
Energy Use and Energy Intensity of the U.S. Chemical Industry , Report LBNL-44314, Lawrence Berkeley National Laboratory (Scroll down to page 39 of 40 PDF pages for a list of the ammonia plants in the United States)
Ammonia: The Next Step includes a detailed process flow diagram.
Ammonia production process plant flow sheet in brief with three controls.
Ammonia
Chemical processes | Ammonia production | Chemistry | 1,547 |
6,793,569 | https://en.wikipedia.org/wiki/Align-m | Align-m is a multiple sequence alignment program written by Ivo Van Walle.
Align-m has the ability to accomplish the following tasks:
multiple sequence alignment,
include extra information to guide the sequence alignment,
multiple structural alignment,
homology modeling by (iteratively) combining sequence and structure alignment data,
'filtering' of BLAST or other pairwise alignments,
combining many alignments into one consensus sequence,
multiple genome alignment (can cope with rearrangements).
See also
Sequence alignment software
Clustal
External links
Official website
Bioinformatics
Computational phylogenetics
72,188,498 | https://en.wikipedia.org/wiki/Pompidou%20Group | The Council of Europe International Cooperation Group on Drugs and Addiction, also known as Pompidou Group (French: Groupe Pompidou; and formerly Cooperation Group to Combat Drug Abuse and Illicit Trafficking in Drugs) is the co-operation platform of the Council of Europe on matters of drug policy currently composed of 42 countries. It was established as an ad'hoc inter-governmental platform in 1971 until its incorporation into the Council of Europe in 1980. Its headquarters are in Strasbourg, France.
History
During the 1960s, the "French Connection", a large-scale drug smuggling scheme allowing the import of heroin into the United States via Turkey and France, raised international concerns. On 6 August 1971, former French President Georges Pompidou sent a letter to his counterparts of Germany, Belgium, Italy, Luxembourg, the Netherlands and the United Kingdom expressing his concerns and proposing a joint effort "to better understand and tackle the growing drug problems in Europe." It has been suggested the initiative was pressed by a letter addressed to Pompidou by U.S. President Rixhard Nixon in 1969.
The Group was officially launched at the first ministerial meeting held in Paris on 4 November 1971. According to its website: "Until 1979, the group operated without a formal status supported by the countries holding its presidency: France from 1971 to 1977 and Sweden from 1977 to 1979. The group developed as a sui generis entity throughout the 1970s, and three other countries (Denmark, Ireland and Sweden) joined it during that decade." After the death of Pompidou in 1974, the group started to informally adopt the name "Pompidou Group."
On 27 March 1980, the Committee of Ministers of the Council of Europe adopted Resolution (80)2, integrating the Pompidou Group into the institutional framework of the Council as an inter-governmental body, after which numerous countries joined it.
As the European integration process and the expansion of the Schengen Area took over many drug-related areas of competence of European countries, the Pompidou Group reoriented its action towards monitoring. It publishes on a number of topics such as reviews of seizures carried out at borders, guidelines for customs officers, drug markets, and epidemiology. Since 1989, the Group has worked on human rights, health, prevention (including the role of police in drug use prevention), and more recently on harm reduction and HIV/AIDS. Since 2004, the Group awards every two years a "European Drug Prevention Prize" to drug prevention projects involving young people. More recently, the group has started working on topics such as addiction to the internet, trade in precursors, on-line drug sales, gender-related issues, prison policies, etc.
In 1999 and 2010, the group signed Memoranda of Understanding with the EU's European Monitoring Centre for Drugs and Drug Addiction.
On 16 June 2021, marking the fiftieth anniversary of the initiative, the Committee of Ministers of the Council of Europe adopted Resolution CM/Res(2021)4 making important changes to the status and mandate of the group. It also officially changed its name to "Council of Europe International Cooperation Group on Drugs and Addiction."
Structure
The Presidency is the main political body of the Pompidou Group. It takes primary responsibility for supervising the work of the group. The Ministerial Conference elects the countries holding the Presidency and Vice-Presidency for a four years term.
The Ministerial Conference is the policy-making body and high-level political forum. It is composed of Ministers responsible for drug policies in their countries, who meet every four years. The Ministerial Conference establishes the Group's strategy and priorities.
The Permanent Correspondents is the main decision-making body. It is composed by officials representing their government in-between Ministerial Conferences. Permanent Correspondents meet twice a year.
Membership
Although the group was launched among seven European countries, its membership has expanded in number and in nature along the years.
Member states
As of 2022, the Pompidou Group consists of 41 member states. As an "Enlarged Partial Agreement", membership of the Pompidou Group is also open to countries not members of the Council of Europe, including states outside Europe. Observer status is also possible for states. In addition, the US and the Holy See "at their request and after deliberation by the Permanent Correspondents, have been associated with the work of the Pompidou Group on an ad hoc technical basis."
Intergovernmental and non-governmental observers
Beyond countries' governments, as of 2022, the European Commission, the European Monitoring Centre for Drugs and Drug Addiction, the Conference of INGOs of the Council of Europe, the Inter-American Drug Abuse Control Commission (CICAD/OAS), the United Nations Office on Drugs and Crime (UNODC), and the World Health Organisation enjoy observer status.
On its turn, the Pompidou Group enjoys observer or similar status in a number of EU and UN fora.
Criticism
Some governments have criticized the overlap of discussions held at the Pompidou Group with those taking place in fora like the European Union (Horizontal Drugs Group) or the United Nations (Commission on Narcotic Drugs). Countries have also lamented the membership fees.
Civil society stakeholders have criticized the Pompidou Group for leaving little room for the direct participation and involvement of non-governmental organizations in its work and discussions.
In 2022, while announcing the withdrawal of his country from the Pompidou Group, Russian Deputy Foreign Minister Oleg Syromolotov nonetheless declared that "the expert dialogue with the EU on combatting drugs has until recently been one of the few that has not been subject to political conjuncture." The consensus of the Pompidou Group around stringent drug policies, compatible with the zero-tolerance approach of the Russian Federation on drugs, and in particular with strong positions opposing decriminalization and legalization of drugs, has long been criticized by observers. In 2021, the Executive Secretary of the Pompidou Group Denis Huber declared: "The Pompidou Group, with the diversity of its members, has no official stance on the issue of decriminalisation, but it will continue to play its role of a platform of cooperation and dialogue for discussing both health and criminal related problems associated with drug use and abuse."
References
External links
1971 establishments in France
Council of Europe
Drug control law
Drug control treaties
Drug policy
Drug policy organizations
International organizations based in Europe
International organizations based in France
Organizations based in Strasbourg
Organizations established in 1971
Politics of Europe
Georges Pompidou | Pompidou Group | Chemistry | 1,340 |
33,397,867 | https://en.wikipedia.org/wiki/Haemagglutination%20activity%20domain | In molecular biology, the haemagglutination activity domain is a conserved protein domain found near the N terminus of a number of large, repetitive bacterial proteins, including many proteins of over 2500 amino acids. A number of the members of this family have been designated adhesins, filamentous haemagglutinins, haem/haemopexin-binding protein, etc. Members generally have a signal sequence, then an intervening region, then the region described in this entry. Following this region, proteins typically have regions rich in repeats but may show no homology between the repeats of one member and the repeats of another. This domain is suggested to be a carbohydrate-dependent haemagglutination activity site.
In Bordetella pertussis, the infectious agent in childhood whooping cough, filamentous haemagglutinin (FHA) is a surface-exposed and secreted protein that acts as a major virulence attachment factor, functioning as both a primary adhesin and an immunomodulator to bind the bacterium to cells of the respiratory epithelium. The FHA molecule has a globular head that consists of two domains: a shaft and a flexible tail. Its sequence contains two regions of tandem 19-residue repeats, where the repeat motif consists of short beta-strands separated by beta-turns.
References
Protein domains | Haemagglutination activity domain | Biology | 291 |
38,497,663 | https://en.wikipedia.org/wiki/Vertu%20Ti | The Vertu Ti is an Android mobile phone made by Vertu in England. It features a titanium case with a sapphire screen making it more robust than most smartphones. The phone retailed at £6700 (€7900, $10500).
It shares similar hardware with, and the same battery pack as, Nokia's Lumia 920, and is also manufactured by Nokia, which formerly owned Vertu.
Made of grade 5 titanium, polished ceramic and partially covered with leather, the Vertu Ti was announced and released in February 2013. The phone sports a 1.7 GHz dual-core CPU, 1 gigabyte of RAM, and 64 gigabytes of built-in storage, which can be increased by up to 32 gigabytes with a removable MicroSD memory card. The display is a TFT capacitive, multitouch, scratch-resistant, sapphire crystal glass touchscreen with 16,000,000 colors, 480 x 800 pixels in resolution, and 3.7 inches in physical size, thereby giving 252 pixels per inch (PPI).
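For reference, the quoted 252 PPI figure follows directly from the resolution and the diagonal size. A minimal arithmetic sketch (Python, illustrative only):

```python
import math

# Pixel density of the Vertu Ti display from its quoted specifications.
width_px, height_px = 480, 800   # resolution in pixels
diagonal_in = 3.7                # diagonal size in inches

diagonal_px = math.hypot(width_px, height_px)  # pixel count along the diagonal
ppi = diagonal_px / diagonal_in
print(f"{ppi:.0f} PPI")                        # ~252, matching the quoted figure
```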
The phone is equipped with an 8-megapixel camera which captures up to 3264x2448 pixel pictures, and is featured with autofocus, geo-tagging and an LED flash. The front-facing camera has a resolution of 1.3 megapixels.
References
Android (operating system) devices
Mobile phones introduced in 2013
Discontinued flagship smartphones | Vertu Ti | Technology | 299 |
68,426,974 | https://en.wikipedia.org/wiki/HD%2075116 | HD 75116, also known as HR 3491, is a solitary, orange hued star in the southern circumpolar constellation Volans, the flying fish. It has an apparent magnitude of 6.31, placing it near the limit for naked eye visibility. Parallax measurements from the Gaia spacecraft place the star relatively far at a distance of 930 light years. It appears to be approaching the Solar System, having a heliocentric radial velocity of .
This is a red giant with a spectral classification of K3 III:, though there is uncertainty in the class. Gaia Data Release 3 stellar evolution models place it on the red giant branch. It has 2.29 times the Sun's mass but has expanded to 52.2 times the Sun's girth. HD 75116 radiates 431 times the luminosity of the Sun from its swollen photosphere at an effective temperature of . It rotates slowly, like many giant stars, having a projected rotational velocity .
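The quoted luminosity and radius are tied together by the Stefan–Boltzmann law, L ∝ R²T⁴. Since the effective temperature value is missing from the text above, the sketch below only illustrates how a temperature consistent with the quoted figures can be inferred; the nominal solar effective temperature of 5772 K is an assumed constant.

```python
# Infer an effective temperature from the quoted luminosity and radius using
# L/Lsun = (R/Rsun)^2 * (T/Tsun)^4, i.e. the Stefan-Boltzmann law in solar units.
T_SUN = 5772.0   # K, nominal solar effective temperature (assumed)

L_ratio = 431.0  # luminosity in solar units (quoted above)
R_ratio = 52.2   # radius in solar units (quoted above)

T_eff = T_SUN * (L_ratio / R_ratio**2) ** 0.25
print(f"Implied effective temperature: {T_eff:.0f} K")
```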
References
K-type giants
075116
3491
042850
CD-67 00666
Volans
Volantis, 40 | HD 75116 | Astronomy | 231 |
38,823,347 | https://en.wikipedia.org/wiki/1%2C4-Dioxene | 1,4-Dioxene is an organic compound with the formula (CH)(CH)O. The compound is derived from dioxane by dehydrogenation. It is a colourless liquid.
References
Ethers
Dioxanes | 1,4-Dioxene | Chemistry | 53 |
8,993,287 | https://en.wikipedia.org/wiki/Gambler%27s%20conceit | Gambler's conceit is the fallacy described by behavioral economist David J. Ewing, where a gambler believes they will be able to stop a risky behavior while still engaging in it. This belief frequently operates during games of chance, such as casino games. The gambler believes they will be a net winner at the game, and thus able to avoid going broke by exerting the self-control necessary to stop playing while still ahead in winnings. This is often expressed as "I'll quit when I'm ahead."
Quitting while ahead is unlikely, though, since a gambler who is winning has little incentive to quit, and is instead encouraged to continue to gamble by their winning. Once in the throes of a winning streak, the individual may even become convinced that it is their skill, rather than chance, causing their winnings, or good luck is on their side, and thus it seems especially senseless to stop.
The gambler's conceit frequently works in conjunction with the gambler's fallacy: the mistaken idea that a losing streak in a game of chance, such as roulette, has to come to an end, or that the odds of an outcome are altered, because the frequency of one event has an effect on a following independent event.
Therefore, players think that it is necessary to continue playing while winning and necessary to continue playing while losing.
Relatedly, gambler's ruin shows that a player with finite resources continuously playing will inevitably go broke against a player with infinite resources in a fair or negative-expectation game.
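A rough Monte Carlo sketch of this point (the 49% win probability, stake size, and bankroll are invented stand-ins for a typical negative-expectation casino game): a player who keeps betting with no stopping rule is essentially always ruined.

```python
import random

def ruined(bankroll=50, p_win=0.49, bet=1, max_rounds=1_000_000):
    """Bet a fixed stake on a slightly unfavorable game with no stopping rule;
    return True if the finite bankroll reaches zero within max_rounds."""
    for _ in range(max_rounds):
        bankroll += bet if random.random() < p_win else -bet
        if bankroll <= 0:
            return True
    return False

trials = 1_000
print(f"Ruined in {sum(ruined() for _ in range(trials)) / trials:.1%} of trials")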
As casinos have a house advantage in games of chance, a casino is more likely over time to take a player's money than a player is to win money from the casino, and thus it is to the casino's advantage to keep a winning player playing. Casinos thus frequently encourage winning players to continue playing. An example can be seen in the Martin Scorsese movie Casino in which Robert De Niro's character ensures that a high-stakes gambler continues to gamble to ensure that the money returns to the casino. On a smaller scale, casinos offer players free alcoholic drinks to encourage them to keep gambling.
See also
Behavioral economics
Inverse gambler's fallacy
Gambling
Online gambling
Short (finance)
References
Gambling terminology
Behavioral economics
Luck
Causal fallacies | Gambler's conceit | Biology | 469 |
398,404 | https://en.wikipedia.org/wiki/Lucas%20number | The Lucas sequence is an integer sequence named after the mathematician François Édouard Anatole Lucas (1842–1891), who studied both that sequence and the closely related Fibonacci sequence. Individual numbers in the Lucas sequence are known as Lucas numbers. Lucas numbers and Fibonacci numbers form complementary instances of Lucas sequences.
The Lucas sequence has the same recursive relationship as the Fibonacci sequence, where each term is the sum of the two previous terms, but with different starting values. This produces a sequence where the ratios of successive terms approach the golden ratio, and in fact the terms themselves are roundings of integer powers of the golden ratio. The sequence also has a variety of relationships with the Fibonacci numbers, like the fact that adding any two Fibonacci numbers two terms apart in the Fibonacci sequence results in the Lucas number in between.
The first few Lucas numbers are
2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199, 322, 521, 843, 1364, 2207, 3571, 5778, 9349, ... .
which coincides, for example, with the number of independent vertex sets for cyclic graphs of length n.
Definition
As with the Fibonacci numbers, each Lucas number is defined to be the sum of its two immediately previous terms, thereby forming a Fibonacci integer sequence. The first two Lucas numbers are L0 = 2 and L1 = 1, which differs from the first two Fibonacci numbers F0 = 0 and F1 = 1. Though closely related in definition, Lucas and Fibonacci numbers exhibit distinct properties.
The Lucas numbers may thus be defined as follows:
L0 = 2, L1 = 1, and L(n) = L(n−1) + L(n−2) for n > 1
(where n belongs to the natural numbers).
All Fibonacci-like integer sequences appear in shifted form as a row of the Wythoff array; the Fibonacci sequence itself is the first row and the Lucas sequence is the second row. Also like all Fibonacci-like integer sequences, the ratio between two consecutive Lucas numbers converges to the golden ratio.
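A minimal sketch of the recurrence (illustrative Python with a hypothetical helper name), which also shows the ratio of successive terms approaching the golden ratio:

```python
def lucas(n):
    """Return the n-th Lucas number using L(0) = 2, L(1) = 1 and the
    Fibonacci-style recurrence L(n) = L(n-1) + L(n-2)."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([lucas(n) for n in range(10)])  # [2, 1, 3, 4, 7, 11, 18, 29, 47, 76]
print(lucas(20) / lucas(19))          # ~1.618..., close to the golden ratio
```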
Extension to negative integers
Using L(n−2) = L(n) − L(n−1), one can extend the Lucas numbers to negative integers to obtain a doubly infinite sequence:
..., −11, 7, −4, 3, −1, 2, 1, 3, 4, 7, 11, ... (terms for −5 ≤ n ≤ 5 are shown).
The formula for terms with negative indices in this sequence is L(−n) = (−1)^n L(n).
Relationship to Fibonacci numbers
The Lucas numbers are related to the Fibonacci numbers by many identities. Among these are the following:
, so .
; in particular, , so .
Their closed formula is given as:
L(n) = φ^n + (1 − φ)^n = φ^n + (−φ)^(−n),
where φ is the golden ratio. Alternatively, as for n > 1 the magnitude of the term (−φ)^(−n) is less than 1/2, L(n) is the closest integer to φ^n or, equivalently, the integer part of φ^n + 1/2, also written as ⌊φ^n + 1/2⌋.
Combining the above with Binet's formula, F(n) = (φ^n − (1 − φ)^n)/√5,
a formula for φ^n is obtained: φ^n = (L(n) + F(n)·√5)/2.
For integers n ≥ 2, we also get:
with remainder R satisfying
.
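Several of the formulas in this subsection were lost from the text; as a sanity check, the sketch below numerically verifies two well-known facts that belong here: L(n) = F(n−1) + F(n+1), and, for n ≥ 2, L(n) is the closest integer to φ^n (helper functions are hypothetical).

```python
from math import sqrt

PHI = (1 + sqrt(5)) / 2  # golden ratio

def lucas(n):
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(2, 15):
    assert lucas(n) == fib(n - 1) + fib(n + 1)  # relation to Fibonacci numbers
    assert lucas(n) == round(PHI ** n)          # closest integer to phi^n (n >= 2)
print("identities verified for n = 2..14")
```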
Lucas identities
Many of the Fibonacci identities have parallels in Lucas numbers. For example, the Cassini identity becomes L(n)² − L(n−1)·L(n+1) = 5·(−1)^n.
Also
where .
where except for .
For example if n is odd, and
Checking, , and
Generating function
Let Φ(x) = Σ L(n) x^n, summed over n ≥ 0,
be the generating function of the Lucas numbers. By a direct computation,
which can be rearranged as
gives the generating function for the negative indexed Lucas numbers, , and
satisfies the functional equation
As the generating function for the Fibonacci numbers is given by
we have
which proves that
and
proves that
The partial fraction decomposition is given by
where is the golden ratio and is its conjugate.
This can be used to prove the generating function, as
Congruence relations
If F(n) ≥ 5 is a Fibonacci number then no Lucas number is divisible by F(n).
The Lucas numbers satisfy Gauss congruence. This implies that L(p) is congruent to 1 modulo p if p is prime. The composite values of n satisfying L(n) ≡ 1 (mod n) are known as Fibonacci pseudoprimes.
is congruent to 0 modulo 5.
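A quick numerical check of the prime congruence described above, together with a search for the first composite exception (a rough sketch; it assumes SymPy is available for primality testing):

```python
from sympy import isprime

def lucas_mod(n, m):
    """Compute the n-th Lucas number modulo m by running the recurrence mod m."""
    a, b = 2 % m, 1 % m
    for _ in range(n):
        a, b = b, (a + b) % m
    return a

# Primes p satisfy L(p) = 1 (mod p); composites with the same property are the
# pseudoprimes mentioned above.  This prints the ones below 1000.
pseudo = [n for n in range(2, 1000) if not isprime(n) and lucas_mod(n, n) == 1]
print(pseudo)
```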
Lucas primes
A Lucas prime is a Lucas number that is prime. The first few Lucas primes are
2, 3, 7, 11, 29, 47, 199, 521, 2207, 3571, 9349, 3010349, 54018521, 370248451, 6643838879, ... .
The indices of these primes are (for example, L4 = 7)
0, 2, 4, 5, 7, 8, 11, 13, 16, 17, 19, 31, 37, 41, 47, 53, 61, 71, 79, 113, 313, 353, 503, 613, 617, 863, 1097, 1361, 4787, 4793, 5851, 7741, 8467, ... .
The largest confirmed Lucas prime is L148091, which has 30950 decimal digits. The largest known Lucas probable prime is L5466311, with 1,142,392 decimal digits.
If Ln is prime then n is 0, prime, or a power of 2. L(2^m) is prime for m = 1, 2, 3, and 4 and no other known values of m.
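The beginning of both lists above can be reproduced with a short search (illustrative sketch; it assumes SymPy is available for the primality test, and the Lucas generator is a hypothetical helper):

```python
from sympy import isprime

def lucas(n):
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

prime_indices = [n for n in range(60) if isprime(lucas(n))]
print(prime_indices)                          # 0, 2, 4, 5, 7, 8, 11, 13, 16, ...
print([lucas(n) for n in prime_indices[:9]])  # 2, 3, 7, 11, 29, 47, 199, 521, 2207
```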
Lucas polynomials
In the same way as Fibonacci polynomials are derived from the Fibonacci numbers, the Lucas polynomials are a polynomial sequence derived from the Lucas numbers.
Continued fractions for powers of the golden ratio
Close rational approximations for powers of the golden ratio can be obtained from their continued fractions.
For positive integers n, the continued fractions are:
φ^n = [L(n); L(n), L(n), L(n), ...] when n is odd, and
φ^n = [L(n) − 1; 1, L(n) − 2, 1, L(n) − 2, ...] when n is even.
For example:
is the limit of
with the error in each term being about 1% of the error in the previous term; and
is the limit of
with the error in each term being about 0.3% that of the second previous term.
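Because the explicit formulas and the worked example were partly lost from the text above, the sketch below numerically illustrates the odd-n case just stated, evaluating a truncation of [L(n); L(n), L(n), ...] and comparing it with φ^n (helper names are hypothetical):

```python
from math import sqrt

PHI = (1 + sqrt(5)) / 2

def lucas(n):
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def periodic_cf(head, tail, depth=30):
    """Evaluate the continued fraction [head; tail, tail, ...] truncated after
    'depth' repetitions, working from the innermost term outward."""
    value = tail
    for _ in range(depth):
        value = tail + 1 / value
    return head + 1 / value

n = 3  # odd n: phi^n = [L(n); L(n), L(n), ...]
print(periodic_cf(lucas(n), lucas(n)), PHI ** n)   # both ~4.23606...
```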
Applications
Lucas numbers are the second most common pattern in sunflowers after Fibonacci numbers, when clockwise and counter-clockwise spirals are counted, according to an analysis of 657 sunflowers in 2016.
See also
Generalizations of Fibonacci numbers
References
External links
"The Lucas Numbers", Dr Ron Knott
Lucas numbers and the Golden Section
A Lucas Number Calculator can be found here.
Eponymous numbers in mathematics
Integer sequences
Fibonacci numbers
Recurrence relations
Unsolved problems in mathematics | Lucas number | Mathematics | 1,348
66,013,128 | https://en.wikipedia.org/wiki/Aviation%20biofuel%20demonstrations | List of Aviation biofuel demonstration flights.
Demonstration flights
Commercial flights
References
Alternative fuels
Aviation and the environment
Aviation fuels
Biofuels
Renewable fuels | Aviation biofuel demonstrations | Engineering | 29 |
58,506,310 | https://en.wikipedia.org/wiki/Lynestrenol%20phenylpropionate | Lynestrenol phenylpropionate (LPP), also known as ethynylestrenol phenylpropionate, is a progestin and a progestogen ester which was developed for potential use as a progestogen-only injectable contraceptive by Organon but was never marketed. It was assessed at doses of 25 to 75 mg in an oil solution once a month by intramuscular injection. LPP was associated with high contraceptive failure at the low dose and with poor cycle control. The medication was found to produce estrogenic effects in the endometrium in women due to transformation into estrogenic metabolites.
A single intramuscular injection of 50 to 100 mg LPP in oil solution has been found to have a duration of action of 14 to 30 days in terms of clinical biological effect in the uterus and on body temperature in women.
LPP has a long biological half-life in rats when given as an intramuscular depot injection; its half-life was similar to that of nandrolone laurate (nandrolone dodecanoate) and was about 2-fold longer than that of nandrolone decanoate, 10-fold longer than that of lynestrenol and nandrolone phenylpropionate, 50-fold longer than that of progesterone, and 430-fold longer than that of nandrolone.
See also
List of progestogen esters § Esters of 19-nortestosterone derivatives
References
Abandoned drugs
Ethynyl compounds
Anabolic–androgenic steroids
Estranes
Phenylpropionate esters
Prodrugs
Progestogens
Progestogen esters
Synthetic estrogens | Lynestrenol phenylpropionate | Chemistry | 367 |
44,104,210 | https://en.wikipedia.org/wiki/Frank%20H.%20Field | Frank Henry Field (February 27, 1922 – April 12, 2013) was an American chemist and mass spectrometrist known for his work in the development of chemical ionization.
Early life and education
Frank Field was born in Keansburg, New Jersey, on February 27, 1922. His father died two months after he was born and his mother died in 1933, after which he was raised by his aunt in Cliffside Park, New Jersey. He attended Duke University, where he studied chemistry, receiving his B.S. degree in chemistry in 1943, M.S. in 1944, and Ph.D. in 1948.
Professional career
Field took a position as an instructor at the University of Texas at Austin in 1947 and became an assistant professor in 1949. In 1952, he took a position as a research chemist at Humble Oil in Baytown, Texas, where he, along with Burnaby Munson, discovered chemical ionization.
In 1966, he moved to Esso Research and Development Company in Linden, New Jersey, where he rose through the ranks to become a senior research associate. In 1970 he took a position as a professor at Rockefeller University and became emeritus in 1989.
Awards and honors
Field was a Guggenheim Fellow in 1963–1964 and was president of the American Society for Mass Spectrometry from 1972 to 1974. In 1987 he became a fellow of the American Association for the Advancement of Science. The Frank H. Field and Joe L. Franklin Award for Outstanding Achievement in Mass Spectrometry, given by the American Chemical Society, was created in 1983. Field received the award in 1988.
References
External links
Further reading
1922 births
2013 deaths
American chemists
Mass spectrometrists
Duke University alumni
University of Texas at Austin faculty | Frank H. Field | Physics,Chemistry | 352 |
60,777,557 | https://en.wikipedia.org/wiki/Value%20tree%20analysis | Value tree analysis is a multi-criteria decision-making (MCDM) implement by which the decision-making attributes for each choice to come out with a preference for the decision makes are weighted. Usually, choices' attribute-specific values are aggregated into a complete method. Decision analysts (DAs) distinguished two types of utility. The preferences of value are made among alternatives when there is no uncertainty. Risk preferences solves the attitude of DM to risk taking under uncertainty. This learning package focuses on deterministic choices, namely value theory, and in particular a decision analysis tool called a value tree.
History
The concept of utility was first used by Daniel Bernoulli in the 1730s (published in 1738) to explain the evaluation of the St Petersburg paradox, a specific uncertain gamble. He argued that money alone is not an adequate measure of value: for an individual, the worth of money is a non-linear function. This insight led to the emergence of utility theory, a numerical measure of how much value alternative choices have. With the development of decision analysis, utility played an important role in explaining economic behavior, and utilitarian philosophers such as Bentham and Mill also used it as a tool for building ethical theory. Nevertheless, there was initially no way to measure an individual's utility function, and the theory mattered more in principle than in practice. Over time, utility theory was gradually placed on a solid theoretical foundation. Game theory came to be used to explain the behavior of rational actors engaging with others in situations of conflict, and in 1944 John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior was published. Utility subsequently became one of the key tools that researchers and practitioners from statistics and operations research use to support decision makers facing hard decisions. As noted above, decision analysts distinguish two types of preference; the attitude of decision makers towards risk under uncertainty is captured by risk preferences.
Process
The goal of the value tree analysis process is to offer a well-organized way to think about and discuss alternatives, and to support the subjective judgements that are critical for good decisions. The phases of the value tree analysis process are as follows:
Problem structuring:
defining the decision context
identifying the objectives
generating and identifying decision alternatives
creating a hierarchical model of the objectives
specifying the attributes
Preference elicitation
Recommended decision
Sensitivity analysis
These processes are usually laborious and iterative. For example, structuring the problem, collecting related information, and modeling the DM's preferences often require a lot of work. The DM's perception of the problem, and preferences over outcomes not previously considered, may change and evolve during the process.
Methodology
The value tree was developed as an effective technique for clarifying and improving goals and values in several respects. The tree gives a visual representation to problems that were previously available only in verbal form. Separate aspects, thoughts and opinions are united into a single visual representation, which brings greater clarity, stimulates creative thinking, and supports constructive communication.
We take the steps below to create a value tree, with a running example to illustrate each step:
Step 1: Initial pool
Begin with a free brainstorming of all the values, by which we mean everything related to the decision: the goals and criteria, the demands, and anything else relevant to decision making. Write each value down on a piece of paper.
(A) Begin the process by considering several things:
What is essential in your decision
The things that matter
The things that you are looking for
The things you want
Your passions, intentions, joys and ambitions
The things that bring you joy
The things that you are afraid of
(B) Once you've exhausted your thoughts after this very open phase, consider the following topics to help you come up with comprehensive values, interests, and concerns related to your decision:
Stakeholders
Consider who is affected by the decision and what their values might be. Stakeholders may be family, friends, neighbors, society, offspring or other species, but they can be anyone who might be affected by your decision, whether intentional or not.
Basic human needs:
Physiological value - for example, health and nutrition
Safety value - feel safe
Social values - be loved and respected
Self-realizing value - doing and becoming "fit"
Cognitive value - eager to satisfy curiosity, know, explain and understand
Aesthetic value - experience beauty
Intangible consequences. We are most inclined to ignore intangible consequences, such as:
If you make this choice, how would you feel about yourself?
How do others see you making this choice?
Lack of awareness of such intangible consequences can easily lead to decisions we later regret. Moreover, when our intuition disagrees with a thorough analysis of the decision, we are usually unaware of the underlying intangible consequences.
The pros and cons of the options you have seen:
For each option you can think of, what are its best and worst aspects? These will be values.
Special consideration of costs and risks. We tend to start our plan by thinking about the positive goals we hope to achieve. Considering costs and risks requires extra effort, but considering them is the first step to avoid them.
Future values
Consider future impacts and current impacts. People tend to ignore or mitigate future consequences.
Imagine your own future, perhaps in your death bed, reviewing this decision. What is important to you?
Step 2: Clustering
When you run out of new ideas, cluster the existing ones by moving the papers around until similar ideas are gathered together.
Step 3: Labeling
Mark each group with a higher level value that holds them together to make each element clearer.
[Example]
As a simplified example, let us assume that some of the initial values we propose are self-realization, family, safety, friends and health. Health, safety and self-realization can be grouped together and labeled as "self", while family and friends can be grouped together and labeled as "others".
Step 4: Moving up the tree
Seeing whether these groups can be grouped into still larger groups
[Example]
SELF and OTHERS group into OVERALL VALUE.
Step 5: Moving down the tree
Also seeing if these groups can be divided into still smaller sub-groups.
[Example]
SELF-ACTUALIZATION could be divided into WORK and RECREATION.
Step 6: Moving across the tree
Another valid way to bring new ideas to the tree is to ask whether any additional elements can be identified at a given level (moving across the tree).
[Example]
In addition to FAMILY and FRIENDS, we could add SOCIETY.
The diagram on the right shows the final result of the (still simplified) example. Bold, italic indicates the basic values that were not originally written by us, but were thought of when we tried to fill in the tree.
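To turn a finished tree like this into a recommendation, each alternative's attribute scores are typically aggregated bottom-up with weights, for instance using an additive value function. The sketch below does this for a simplified version of the example tree; all weights, scores, and alternative names are invented for illustration.

```python
# A simplified, hypothetical version of the example value tree: top-level
# weights over branches, and weights over attributes inside each branch.
tree = {
    "self":   {"weight": 0.6, "attrs": {"health": 0.5, "work": 0.3, "recreation": 0.2}},
    "others": {"weight": 0.4, "attrs": {"family": 0.6, "friends": 0.4}},
}

# Attribute scores of two invented alternatives on a 0-100 value scale.
alternatives = {
    "option A": {"health": 80, "work": 60, "recreation": 40, "family": 70, "friends": 50},
    "option B": {"health": 60, "work": 90, "recreation": 70, "family": 40, "friends": 60},
}

def overall_value(scores):
    """Aggregate attribute scores bottom-up with an additive value function."""
    return sum(
        branch["weight"] * sum(w * scores[attr] for attr, w in branch["attrs"].items())
        for branch in tree.values()
    )

for name, scores in alternatives.items():
    print(f"{name}: overall value {overall_value(scores):.1f}")
```

The alternative with the highest aggregated value would be the recommended decision, subject to the sensitivity analysis phase described earlier.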
Tool
PRIME Decisions
PRIME Decisions is a decision-support tool that uses the PRIME method to analyze incomplete preference information. It offers novel features that support an interactive decision process, including an elicitation tour. PRIME Decisions is seen as an essential catalyst for further applied work, because practitioners benefit from the explicit recognition of incomplete information.
Web-Hipre
Web-HIPRE, a Java applet, supports multiple criteria decision analysis. It provides a common platform for individual and group decision making: several people can access and work on the same model at any time. It is also possible to define links to other websites, so that other kinds of information, such as geographic data or media files describing the criteria or alternatives, can be attached to the model, which significantly improves the quality of decision support.
Application
Some indicators obtained by process analysis are of great help in value tree analysis. Especially in the value decomposition of internal operation indicators, the drivers of a first-level process indicator are usually the indicators of its sub-processes. For instance, the new product launch cycle (from R&D project to production) is driven by two processes within the company: R&D and testing. A standardized R&D and testing process is a key success factor for improving the speed of innovation, so process indicators such as the development cycle, the test cycle and sample acceptance are the vital elements driving the new product launch cycle indicator. Combining process analysis is therefore of great significance for the decomposition of indicator values, especially for internal operational indicators. Examples of the main application areas are given below:
Application on business, production and services
Budget allocation
Allocating the annual engineering budget for products and projects is always a challenge. With value tree analysis, aspects such as strategic fit, which have no natural evaluation measure but may play a significant role in decision-making, can be included in the analysis. Furthermore, explicit modelling of the relevant facts is likely to improve communication and also provides a basis for justified decisions.
Selection of R&D programs
Because the risk in many R&D programmes is high, sound reasoning can be as essential as the decision itself. Value tree analysis offers a tool to support the reasoning behind the selection of an R&D programme and to model the facts affecting the decision.
Developing and deciding on marketing strategies
For instance, the analysis of new strategies for merchandising gasoline and other products through full-facility service stations.
Application on public policy problems
Analysis of responses to environmental risks
For instance, organization of negotiations between several parties in order to identify compromise regulations for acid rain and identify the objectives of the regulations.
Negotiation for oil and gas leases
Carry out an evaluation report of subcontractors and analyze the criteria which should be used.
Comparisons between alternative energy sources
For instance, organizing a debate about nuclear power, aiding the decision process, and studying value differences between the decision-makers.
Political decisions
Application on medicine
Deciding on the optimal usage and inventory of blood in a blood bank
Helping individuals to understand the risks of different treatments
In addition to the decision-making problems value tree analysis serves also other purposes.
Identifying and reformulating options
Definition of objectives
Providing a common language for communication
Quantification of subjective variables
For instance, a scale which measures the worth of military targets.
Development of value-relevant indices
Application on empirical pilot study variable selection
As value tree analysis is an inexpensive and computationally light approach, it is a good choice for time-sensitive variable selection in empirical pilot healthcare studies. Moreover, it offers a well-structured and strategic decision-making process, so that pilot study and patient data constraints can be accounted for and value for study stakeholders can be maximized.
Application on Coaching
Value tree analysis helps creative and critical thinking and organizes thoughts in a logical way. Moreover, when a decision comes up, value tree analysis can be an effective way to think about one's core goals and values, and the completed analysis can then be used to actively look for decision opportunities.
Software
Software tools supporting value tree analysis include PRIME Decisions and Web-HIPRE, described above.
References
Quality
Reliability engineering
Risk analysis methodologies
Safety engineering | Value tree analysis | Engineering | 2,298 |
30,388,594 | https://en.wikipedia.org/wiki/William%20Moffitt | William E. Moffitt (9 November 1925 – 19 December 1958) was a British quantum chemist. He died after a heart attack following a squash match. He had been thought to be one of Britain's most gifted academics.
Early life
Moffitt was born in Berlin, Germany to British parents; his father was working in Berlin on behalf of the British government. He was educated by private tuition up to the age of 11. He attended Harrow School from 1936–43. His chemistry master later said of him that "he was undoubtably the most able of a decade of gifted boys ... [and] has a profound effect on all who met him. He did more than anyone to create in the school the intellectual climate so necessary for the stimulation of young minds".
Academic career
He then studied chemistry at New College, Oxford, under an open scholarship, and graduated with first class honours. His D.Phil. supervisor, Charles Coulson, later wrote:
[his] exuberant delight in life remained with him to the end. "Moffit's method of Atoms in Molecules" will remain for many years to remind us of his remarkable ability to initiate new ways of thinking in his professional subject.
After receiving his D.Phil. for research in quantum chemistry, he joined the research staff of the British Rubber Producers Research Association.
He was made an Assistant Professor at Harvard in January 1953, and was given an A.M. honoris causa in 1955. His colleague Edgar Bright Wilson said:
Few men had as great an impact at so early an age. The reasons are clear. Few have been endowed with such a sparkling, quick and keen intelligence, with such a capacity for spending long hours in the thorough study of fundamental subjects ... His intellectual powers were not only applied to the solution of problems but perhaps even more to their wise selection. He avoided areas where only formal solutions were attainable, with no contact with experience.
Doctoral students who were advised by Moffitt include R. Stephen Berry and S. M. Blinder.
Personal life and interests
He married Dorothy Silberman in 1956 and had a daughter, Alison in June 1958. He was a keen rugby player and enjoyed music and arts and particularly English literature. While sharing a cabin with a monk on a voyage to the UK from the US, he discussed the philosophy of religion with him in their only common language, Latin.
References
1925 births
1958 deaths
People educated at Harrow School
Alumni of New College, Oxford
Harvard University faculty
English chemists
Theoretical chemists | William Moffitt | Chemistry | 517 |
31,017,014 | https://en.wikipedia.org/wiki/C5H8O5 | The molecular formula C5H8O5 (molar mass: 148.12 g/mol, exact mass: 148.0372 u) may refer to:
Citramalic acid
α-Hydroxyglutaric acid | C5H8O5 | Chemistry | 65 |
1,859,768 | https://en.wikipedia.org/wiki/Lock-in%20amplifier | A lock-in amplifier is a type of amplifier that can extract a signal with a known carrier wave from an extremely noisy environment. Depending on the dynamic reserve of the instrument, signals up to a million times smaller than noise components, potentially fairly close by in frequency, can still be reliably detected. It is essentially a homodyne detector followed by low-pass filter that is often adjustable in cut-off frequency and filter order.
The device is often used to measure phase shift, even when the signals are large, have a high signal-to-noise ratio and do not need further improvement.
Recovering signals at low signal-to-noise ratios requires a strong, clean reference signal with the same frequency as the received signal. This is not the case in many experiments, so the instrument can recover signals buried in the noise only in a limited set of circumstances.
The lock-in amplifier is commonly believed to have been invented by Princeton University physicist Robert H. Dicke who founded the company Princeton Applied Research (PAR) to market the product. However, in an interview with Martin Harwit, Dicke claims that even though he is often credited with the invention of the device, he believes that he read about it in a review of scientific equipment written by Walter C. Michels, a professor at Bryn Mawr College. This could have been a 1941 article by Michels and Curtis, which in turn cites a 1934 article by C. R. Cosens, while another timeless article was written by C. A. Stutt in 1949.
Whereas traditional lock-in amplifiers use analog frequency mixers and RC filters for the demodulation, state-of-the-art instruments have both steps implemented by fast digital signal processing, for example, on an FPGA. Usually sine and cosine demodulation is performed simultaneously, which is sometimes also referred to as dual-phase demodulation. This allows the extraction of the in-phase and the quadrature component that can then be transferred into polar coordinates, i.e. amplitude and phase, or further processed as real and imaginary part of a complex number (e.g. for complex FFT analysis).
Basic principles
The operation of a lock-in amplifier relies on the orthogonality of sinusoidal functions. Specifically, when a sinusoidal function of frequency f1 is multiplied by a sinusoidal function of another frequency f2 and integrated over a time much longer than the period of the two functions, the result is close to zero. If instead f1 is equal to f2 and the two functions are in phase, the average value is equal to half of the product of the amplitudes.
In essence, a lock-in amplifier takes the input signal, multiplies it by the reference signal (either provided from the internal oscillator or an external source, and can be sinusoidal or square wave), and integrates it over a specified time, usually on the order of milliseconds to a few seconds. The resulting signal is a DC signal, where the contribution from any signal that is not at the same frequency as the reference signal is attenuated close to zero. The out-of-phase component of the signal that has the same frequency as the reference signal is also attenuated (because sine functions are orthogonal to the cosine functions of the same frequency), making a lock-in a phase-sensitive detector.
For a sine reference signal and an input waveform U_in(t), the DC output signal U_out(t) can be calculated for an analog lock-in amplifier as
U_out(t) = (1/T) ∫ from t−T to t of sin(2π·f_ref·s + φ)·U_in(s) ds,
where φ is a phase that can be set on the lock-in (set to zero by default).
If the averaging time T is large enough (i.e. much larger than the signal period) to suppress all unwanted parts like noise and the variations at twice the reference frequency, the output is
U_out = (1/2)·V_sig·cos(θ − φ),
where V_sig is the signal amplitude at the reference frequency, and θ is the phase difference between the signal and reference.
Many applications of the lock-in amplifier require recovering only the signal amplitude rather than its phase relative to the reference signal. For a simple, so-called single-phase lock-in amplifier, the phase difference is adjusted (usually manually) to zero to get the full signal.
More advanced, so-called two-phase lock-in amplifiers have a second detector, doing the same calculation as before, but with an additional 90° phase shift. Thus one has two outputs: X = V_sig·cos θ is called the "in-phase" component, and Y = V_sig·sin θ the "quadrature" component. These two quantities represent the signal as a vector relative to the lock-in reference oscillator. By computing the magnitude (R) of the signal vector, the phase dependency is removed: R = √(X² + Y²).
The phase can be calculated from θ = atan2(Y, X).
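The dual-phase demodulation just described can be sketched numerically. The following is a rough illustration only; the sample rate, averaging time, noise level, and the use of NumPy are all assumptions of the sketch, not properties of any particular instrument.

```python
import numpy as np

rng = np.random.default_rng(0)

fs, T = 100_000.0, 2.0                 # sample rate (Hz) and averaging time (s)
t = np.arange(0, T, 1 / fs)
f_ref = 1_000.0                        # reference frequency (Hz)
amp, phase = 1e-3, 0.6                 # weak signal amplitude (V) and phase (rad)

signal = amp * np.sin(2 * np.pi * f_ref * t + phase)
noisy = signal + rng.normal(scale=0.03, size=t.size)   # broadband noise >> signal

# Dual-phase demodulation: multiply by in-phase and quadrature references and
# average over T (the averaging acts as the low-pass filter).  The factor 2
# rescales the result so that R estimates the signal amplitude directly.
X = 2 * np.mean(noisy * np.sin(2 * np.pi * f_ref * t))   # in-phase component
Y = 2 * np.mean(noisy * np.cos(2 * np.pi * f_ref * t))   # quadrature component

R, theta = np.hypot(X, Y), np.arctan2(Y, X)
print(f"R = {R:.2e} V (true {amp:.2e}), phase = {theta:.2f} rad (true {phase:.2f})")
```

Lengthening the averaging time narrows the detection bandwidth and further suppresses the residual noise in X and Y, which is the effect discussed in the following sections.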
Digital lock-in amplifiers
The majority of today's lock-in amplifiers are based on high-performance digital signal processing (DSP). Over the last 20 years, digital lock-in amplifiers have been replacing analog models across the entire frequency range, allowing users to perform measurements up to a frequency of 600 MHz. Initial problems of the first digital lock-in amplifiers, e.g. the presence of digital clock noise on the input connectors, could be completely eliminated by use of improved electronic components and better instrument design. Today's digital lock-in amplifiers outperform analog models in all relevant performance parameters, such as frequency range, input noise, stability and dynamic reserve. In addition to better performance, digital lock-in amplifiers can include multiple demodulators, which allows analyzing a signal with different filter settings or at multiple different frequencies simultaneously. Moreover, experimental data can be analyzed with additional tools such as an oscilloscope, FFT spectrum analyzers, boxcar averager or used to provide feedback by using internal PID controllers. Some models of the digital lock-in amplifiers are computer-controlled and feature a graphical user interface (can be a platform-independent browser user interface) and a choice of programming interfaces.
Signal measurement in noisy environments
Signal recovery takes advantage of the fact that noise is often spread over a much wider range of frequencies than the signal. In the simplest case of white noise, even if the root mean square of noise is 10³ times as large as the signal to be recovered, if the bandwidth of the measurement instrument can be reduced by a factor much greater than 10⁶ around the signal frequency, then the equipment can be relatively insensitive to the noise. In a typical 100 MHz bandwidth (e.g. an oscilloscope), a bandpass filter with width much narrower than 100 Hz would accomplish this. The averaging time of the lock-in amplifier determines the bandwidth and allows very narrow filters, less than 1 Hz if needed. However, this comes at the price of a slow response to changes in the signal.
In summary, even when noise and signal are indistinguishable in the time domain, if the signal has a definite frequency band and there is no large noise peak within that band, then the noise and signal can be separated sufficiently in the frequency domain.
If the signal is either slowly varying or otherwise constant (essentially a DC signal), then 1/f noise typically overwhelms the signal. It may then be necessary to use external means to modulate the signal. For example, when detecting a small light signal against a bright background, the signal can be modulated by a chopper wheel, an acousto-optical modulator, or a photoelastic modulator at a large enough frequency so that 1/f noise drops off significantly, and the lock-in amplifier is referenced to the operating frequency of the modulator. In the case of an atomic-force microscope, to achieve nanometer and piconewton resolution, the cantilever position is modulated at a high frequency, to which the lock-in amplifier is again referenced.
When the lock-in technique is applied, care must be taken to calibrate the signal, because lock-in amplifiers generally detect only the root-mean-square signal of the operating frequency. For a sinusoidal modulation, this would introduce a factor of √2 between the lock-in amplifier output and the peak amplitude of the signal, and a different factor for non-sinusoidal modulation.
In the case of nonlinear systems, higher harmonics of the modulation frequency appear. A simple example is the light of a conventional light bulb being modulated at twice the line frequency. Some lock-in amplifiers also allow separate measurements of these higher harmonics.
Furthermore, the response width (effective bandwidth) of detected signal depends on the amplitude of the modulation. Generally, linewidth/modulation function has a monotonically increasing, non-linear behavior.
Applications
An example application of the above signal measurement principles can be found in some nondispersive infrared sensors. Infrared light is band-pass filtered to a region of the frequency spectrum that is predominantly absorbed by some gas of interest. This can then be detected, either by comparing to absorption in a second chamber containing a known reference gas, or by detecting the interaction between IR and gas particles using an acoustic sensor (see photoacoustic spectroscopy). If the signal needs to be amplified, a lock-in amplifier can be used by pulsing the IR source at a known frequency, and then feeding this frequency to the amplifier so only corresponding signals get amplified.
References
Publications
External links
Principles of lock-in detection and the state of the art from Zurich Instruments. A comprehensive overview on the essentials of lock-in measurements.
About LIAs from Stanford Research Systems. Application note detailing how lock-in amplifiers work.
Lock-in amplifier tutorial from Bentham Instruments. Comprehensive tutorial about the why and how of lock-in amplifiers.
Lock-in Technical Notes Range of Technical and Applications notes describing the design of digital and analog lock-ins, and guide to their specifications from SIGNAL RECOVERY.
PCSC-Lock-in Tool for data acquisition on acoustic chopping frequency using a computer sound card.
Electronic test equipment
Electronic amplifiers
Laboratory equipment | Lock-in amplifier | Technology,Engineering | 2,035 |
12,978,453 | https://en.wikipedia.org/wiki/Enclomifene | Enclomifene (), or enclomiphene (), a nonsteroidal selective estrogen receptor modulator of the triphenylethylene group, acts by antagonizing the estrogen receptor (ER) in the pituitary gland, which reduces negative feedback by estrogen on the hypothalamic-pituitary-gonadal axis, thereby increasing gonadotropin secretion and hence gonadal production of testosterone. It is one of the two stereoisomers of clomifene, which itself is a mixture of 38% zuclomifene and 62% enclomifene. Enclomifene is the (E)-stereoisomer of clomifene, while zuclomifene is the (Z)-stereoisomer. Whereas zuclomifene is more estrogenic, enclomifene is more antiestrogenic. In accordance, unlike enclomifene, zuclomifene is antigonadotropic due to activation of the ER and reduces testosterone levels in men. As such, isomerically pure enclomifene is more favorable than clomifene as a progonadotropin for the treatment of male hypogonadism.
Enclomiphene (former tentative brand names Androxal and EnCyzix), was under development for the treatment of male hypogonadism and type 2 diabetes. By December 2016, it was in preregistration and was under review by the Food and Drug Administration in the United States and the European Medicines Agency in the European Union. In January 2018, the Committee for Medicinal Products for Human Use of the European Medicines Agency recommended refusal of marketing authorization for enclomifene for the treatment of secondary hypogonadism. In April 2021, development of enclomifene was discontinued for all indications.
Medical uses
Enclomiphene is primarily used as a treatment for men with persistent low testosterone as a result of secondary hypogonadotropic hypogonadism. In secondary hypogonadotropic hypogonadism, the resulting low levels of testosterone is attributed to inadequacies in the hypothalamic-pituitary-gonadal axis. In contrast, primary hypogonadism is caused by defects in the testes that causes them to be unable to produce the required amount of testosterone.
Enclomiphene, which stimulates the endogenous production of testosterone, is not currently known to have common adverse effects of exogenous testosterone replacement therapy, such as reduced spermatogenesis or infertility.
Contraindications
Enclomiphene citrate is contraindicated in the groups of individuals below:
Pregnant women.
Breastfeeding women.
Women with unexplained uterine bleeding.
Women with ovarian growths or cysts unrelated to polycystic ovary syndrome.
Patients with a history of liver disease.
Patients with uncontrolled adrenal or thyroid dysfunction.
Patients with known allergy to enclomiphene or clomiphene.
Adverse effects
The adverse effects of enclomiphene have not been extensively studied. Enclomiphene is a selective estrogen receptor modulator (SERM), which is associated with an increased risk of thrombo-embolic events. Enclomiphene, unlike testosterone replacement therapy, is not associated with infertility or decreased spermatogenesis.
The following adverse events were observed in a population of 1,403 persons participating in phase 2 and phase 3 studies of enclomiphene:
Mechanism of action
Enclomiphene is a selective estrogen receptor antagonist, antagonizing the estrogen receptors in the pituitary gland, disrupting the negative feedback loop by estrogen towards the hypothalamic-pituitary-gonadal axis, ultimately resulting in an increase in gonadotropin secretion.
In men with secondary hypogonadotropic hypogonadism, this improves testosterone levels and sperm motility. Men with secondary hypogonadotropic hypogonadism have abnormally low testosterone levels due to low-normal levels of luteinizing hormone (LH) and follicular stimulating hormone (FSH). The biological role of these hormones is to stimulate the endogenous production of testosterone by the testes.
Common symptoms of secondary hypogonadotropic hypogonadism include low libido, energy, and mood. In addition, men with low testosterone may experience osteoporosis, an increase in visceral fat, and the regression of secondary sexual characteristics. Enclomiphene stimulates the endogenous production of testosterone. It works differently from traditional testosterone replacement therapy, which replaces testosterone using an exogenous source.
In addition, research has uncovered that enclomiphene increases total and free testosterone levels without increasing dihydrotestosterone disproportionately, suggesting that it "normalizes endogenous testosterone production pathways and restores normal testosterone levels in men with secondary hypogonadism."
History
Enclomifene, or enclomiphene (former tentative brand names Androxal and EnCyzix), was under development for the treatment of male hypogonadism and type 2 diabetes. By December 2016, it was in preregistration and was under review by the Food and Drug Administration in the United States and the European Medicines Agency in the European Union. In January 2018, the Committee for Medicinal Products for Human Use of the European Medicines Agency recommended refusal of marketing authorization for enclomifene for the treatment of secondary hypogonadism. In April 2021, development of enclomifene was discontinued for all indications.
Clomiphene citrate, which enclomiphene citrate is derived from, is a drug approved by the Food and Drug Administration (FDA) for indications of anovulatory or oligo-ovulatory infertility and male infertility (spermatogenesis induction).
A media release by the FDA for the pharmacy compounding advisory committee compared the efficacy of testosterone replacement therapy against enclomiphene. They wrote that while testosterone replacement therapy often resulted in side effects such as transference risk, supranormal testosterone levels, suppressed spermatogenesis, suppressed testicular function, and testicular atrophy, none of these risks are present in enclomiphene.
In 2009, a study discovered that "short-term clinical safety data for enclomiphene have been satisfactory and equivalent to safety data for testosterone gels and placebo."
In 2016, a study on enclomiphene citrate reported that "the ability [of enclomiphene citrate] to treat testosterone deficiency in men while maintaining fertility supports a role for enclomiphene citrate in the treatment of men in whom testosterone therapy is not a suitable option."
In 2019, a study was published that found that "enclomiphene has been shown to increase testosterone levels while stimulating [follicular-stimulating hormone] and [luteinizing hormone] production."
The key difference between enclomiphene citrate and traditional testosterone replacement therapy is that enclomiphene citrate stimulates the body to produce its own testosterone, while traditional testosterone replacement therapy replaces low testosterone levels in men with exogenous, synthetic testosterone.
A study conducted in 2013 offered this assessment of the potential of enclomiphene citrate to increase sexual function in men: "If enclomiphene citrate can correct the central defect in men that blocks their ability to produce [lutenizing hormone] and [follicular-stimulating hormone] and thus to produce both testosterone and sperm in the testes, this drug may prove itself superior to other treatments."
References
External links
Enclomifene - AdisInsight
Abandoned drugs
Diethylamino compounds
Organochlorides
Phenol ethers
Progonadotropins
Selective estrogen receptor modulators
Triphenylethylenes
Ethanolamines | Enclomifene | Chemistry | 1,705 |
27,092,961 | https://en.wikipedia.org/wiki/Link-richness | Link-richness is the quality, possessed by some websites, of having many hyperlinks. Classified advertising sites like Craigslist tend to be very link-rich, sometimes with hundreds of links on their main page. They help users find the links they are looking for by grouping links into clusters. Inadequate link richness has been described as frustrating to readers, as it reduces transparency of site content from the main page. Students new to wiki collaboration were found to need guidance in how to take full advantage of the medium's potential for creating link-rich content.
Link-richness in some contexts can be distracting, as when an article is surrounded by extraneous links. Indeed, it is becoming accepted as a best practice for universities to have link-rich home pages that do not rely on user categorisation and exploration of long sequences of links and are not constrained by traditional boundaries between departments. Tools are sometimes needed to make the publishing of link-rich web sites tractable, and many people may lack the technical skills, time, or inclination to engage in hand-crafting new digital document forms.
A link-rich site that is low on content is sometimes referred to as a "gateway site." Link-rich portals were popular on the Web in 2000. Yahoo! and other sites featuring categories with many links were heavily used and often required fewer than three clicks to reach the content. Web designers were creating flat sites with content positioned close to the top of pages.
References
External links
Web Design & Development
Web development | Link-richness | Engineering | 311 |
312,976 | https://en.wikipedia.org/wiki/N%C3%BCwa | Nüwa, also read Nügua, is a mother goddess, culture hero, and/or member of the Three Sovereigns of Chinese mythology. She is a goddess in Chinese folk religion, Chinese Buddhism, Confucianism and Taoism. She is credited with creating humanity and repairing the Pillar of Heaven.
As creator of mankind, she molded humans individually by hand with yellow clay. In other stories where she fulfills this role, she only created nobles and/or the rich out of yellow soil. The stories vary on the other details about humanity's creation, but it was a tradition commonly believed in ancient China that she created commoners from brown mud. A story holds that she was tired when she created "the rich and the noble", so all others, or "cord-made people", were created from her "dragg[ing] a string through mud".
In the Huainanzi, there is a description of a great battle between deities that broke the pillars supporting Heaven and caused great devastation. There was great flooding, and Heaven had collapsed. Nüwa was the one who patched the holes in Heaven with five colored stones, and she used the legs of a tortoise to mend the pillars.
There are many instances of her in literature across China which detail her in creation stories, and today, she remains a figure important to Chinese culture. She is one of the most venerated Chinese goddesses alongside Guanyin and Mazu.
In Chinese mythology, the goddess Nüwa is a legendary progenitor of all human beings. She also creates a magic stone. Her husband Fu Xi is suggested to be the progenitor of divination and the patron saint of numbers.
Name
The character nü () is a common prefix on the names of goddesses. The proper name is wa, also read as gua (). The Chinese character is unique to this name. Birrell translates it as 'lovely', but notes that it "could be construed as 'frog'", which is consistent with her aquatic myth. In Chinese, the word for 'whirlpool' is wo (), which shares the same pronunciation with the word for 'snail' (). These characters all have their right side constructed by the word wa (), which can be translated as 'spiral' or 'helix' as a noun, and as 'spin' or 'rotate' as a verb, to describe helical movement. This mythical meaning has also been symbolically pictured as compasses in the hand, which can be found on many paintings and portraits associated with her.
Her reverential name is Wahuang ().
Description
The Huainanzi relates Nüwa to the time when Heaven and Earth were in disruption:
The catastrophes were supposedly caused by the battle between the deities Gonggong and Zhuanxu (an event that was mentioned earlier in the Huainanzi), the five-colored stones symbolize the five Chinese elements (wood, fire, earth, metal, and water), the black dragon was the essence of water and thus cause of the floods, Ji Province serves metonymically for the central regions (the Sinitic world). Following this, the Huainanzi tells about how the sage-rulers Nüwa and Fuxi set order over the realm by following the Way () and its potency ().
The Classic of Mountains and Seas, dated between the Warring States period and the Han dynasty, describes Nüwa's intestines as being scattered into ten spirits.
In Liezi (c. 475 – 221 BC), Chapter 5 "Questions of Tang" (), author Lie Yukou describes Nüwa repairing the original imperfect heaven using five-colored stones, and cutting the legs off a tortoise to use as struts to hold up the sky.
In Songs of Chu (c. 340 – 278 BC), Chapter 3 "Asking Heaven" (), author Qu Yuan writes that Nüwa molded figures from the yellow earth, giving them life and the ability to bear children. After demons fought and broke the pillars of the heavens, Nüwa worked unceasingly to repair the damage, melting down the five-coloured stones to mend the heavens.
In Shuowen Jiezi (c. 58 – 147 AD), China's earliest dictionary, under the entry for Nüwa author Xu Shen describes her as being both the sister and the wife of Fuxi. Nüwa and Fuxi were pictured as having snake-like tails interlocked in an Eastern Han dynasty mural in the Wuliang Temple in Jiaxiang county, Shandong province.
In Duyi Zhi (; c. 846 – 874 AD), Volume 3, author Li Rong gives this description.
There are stories that have her as the "consort" of Fuxi rather than his sister.
In Yuchuan Ziji ( c. 618 – 907 AD), Chapter 3 (), author Lu Tong describes Nüwa as the wife of Fuxi.
In Siku Quanshu, Sima Zhen (679–732) provides commentary on the prologue chapter to Sima Qian's Shiji, "Supplemental to the Historic Record: History of the Three August Ones", wherein it is found that the Three August Ones are Nüwa, Fuxi, and Shennong; Fuxi and Nüwa have the same last name, Feng (; Hmong: Faj).
In the collection Four Great Books of Song (c. 960 – 1279 AD), compiled by Li Fang and others, Volume 78 of the book Imperial Readings of the Taiping Era contains a chapter "Customs by Yingshao of the Han Dynasty" in which it is stated that there were no men when the sky and the earth were separated. Thus Nüwa used yellow clay to make people. But the clay was not strong enough so she put ropes into the clay to make the bodies erect. It is also said that she prayed to gods to let her be the goddess of marital affairs. Variations of this story exist.
In Ming dynasty myths about the transition from the Shang dynasty to the Zhou dynasty, Nüwa made evil decisions that ultimately benefited China, such as sending a fox spirit to encourage the debauchery of King Zhou, which led to him being deposed. Other tales have her and Fuxi as exclusively the "great gentle protectors of humanity" unwilling to use subterfuge.
Nüwa and Fuxi were also thought to be gods of silk.
Iconography of Fuxi and Nüwa
The iconography of Fuxi and Nüwa vary in physical appearance depending on the time period and also shows regional differences. In Chinese tomb murals and iconography, Fuxi and Nüwa generally have snake-like bodies and human face or head.
Nüwa is often depicted holding a compass or multiple compasses, which were a traditional Chinese symbol of a dome-like sky. She was also thought to be an embodiment of the stars and the sky or a star god.
Fuxi and Nüwa can be depicted as individual figures arranged as a symmetrical pair or they can be depicted in double figures with intertwined snake-like bodies. Their snake-like tails can also be depicted stretching out towards each other. This is similar to the representation of Rahu and Ketu in Indian astrology.
Fuxi and Nüwa can also appear individually on separate tomb bricks. They generally hold or embrace the sun or moon discs containing the images of a bird (or a three-legged crow) or a toad (sometimes a hare) which are the sun and moon symbolism respectively, and/or each holding a try square or a pair of compasses, or holding a longevity mushroom () plant. Fuxi and Nüwa holding the sun and the moon appears as early as the late Western Han dynasty. Other physical appearance variation, such as lower snake-like body shape (e.g. thick vs thin tails), depictions of legs (i.e. legs found along the snake-like body) and wings (e.g. wings with feathers which protrude from their backs as found in late Western Han Xinan (新安) Tomb or smaller quills found on their shoulders), and in hats and hairstyles, also exist.
In the Luoyang regions murals dating to the late Western Han dynasty, Fuxi and Nüwa are generally depicted as individual figures, each one found at each side of the central ridge of tomb chambers as found in the Bu Qianqiu Tomb. They can also be found without intertwining tails from the stone murals of the same period. Since the middle of the Eastern Han dynasty, their tails started to intertwine.
In the Gansu murals dating to the Wei and Western Jin period, one of the most typical features of Fuxi is the "mountain-hat" () which looks like a three-peaked cap while Nüwa is depicted wearing various hairstyles characteristic of Han women. Both deities dressed in wide-sleeved clothing, which reflects typical Han clothing style also commonly depicted in Han dynasty art.
Legends
Appearance in Fengshen Yanyi
Nüwa is featured in the famed Ming dynasty novel Fengshen Bang. In the novel, she has been revered since the Xia dynasty for creating the five-colored stones to mend the heavens, which tilted after Gonggong toppled one of the heavenly pillars, Mount Buzhou. Shang Rong asked King Zhou of Shang to pay her a visit as a sign of deep respect. Upon seeing her statue, Zhou was completely overcome with lust at the sight of the beautiful ancient goddess Nüwa. He wrote an erotic poem on a neighboring wall and took his leave. When Nüwa later returned to her temple after visiting the Yellow Emperor, she saw the foulness of Zhou's words. In her anger, she swore that the Shang dynasty would end in payment for his offense. Nüwa personally ascended to the palace in an attempt to kill the king, but was suddenly struck back by two large beams of red light.
After Nüwa realized that King Zhou was already destined to rule the kingdom for twenty-six more years, Nüwa summoned her three subordinates—the Thousand-Year Vixen (later becoming Daji), the Jade Pipa, and the Nine-Headed Pheasant. With these words, Nüwa brought destined chaos to the Shang dynasty, "The luck Cheng Tang won six hundred years ago is dimming. I speak to you of a new mandate of heaven which sets the destiny for all. You three are to enter King Zhou's palace, where you are to bewitch him. Whatever you do, do not harm anyone else. If you do my bidding, and do it well, you will be permitted to reincarnate as human beings." With these words, Nüwa was never heard of again, but was still a major indirect factor towards the Shang dynasty's fall.
Creation of humanity
Pangu was said to be the creation god in Chinese mythology. He was a giant sleeping within an egg of chaos. As he awoke, he stood up and divided the sky and the earth. Pangu then died after standing up, and his body turned into rivers, mountains, plants, animals, and everything else in the world, among which is a powerful being known as Huaxu (華胥). Huaxu gave birth to a twin brother and sister, Fuxi and Nüwa. Fuxi and Nüwa are said to be creatures with human faces and the bodies of snakes.
Nüwa created humanity due to her loneliness, which grew more intense over time. She molded yellow earth or, in other versions, yellow clay into the shape of people. These individuals later became the wealthy nobles of society, because they had been created by Nüwa's own hands. However, the majority of humanity was created when Nüwa dragged string across mud to mass-produce them, which she did because creating every person by hand was too time- and energy-consuming. This creation story gives an aetiological explanation for the social hierarchy in ancient China. The nobility believed that they were more important than the mass-produced majority of humanity, because Nüwa took time to create them, and they had been directly touched by her hand. In another version of the creation of humanity, Nüwa and Fuxi were survivors of a great flood. By the command of the God of the heaven, they were married and Nüwa had a child which was a ball of meat. This ball of meat was cut into small pieces, and the pieces were scattered across the world, which then became humans.
Nüwa was born three months after her brother, Fuxi, whom she later took as her husband; this marriage is the reason why Nüwa is credited with inventing the idea of marriage.
Before the two of them got married, they lived on mount K'un-lun. A prayer was made after the two became guilty of falling for each other. The prayer is as follows,
"Oh Heaven, if Thou wouldst send us forth as man and wife, then make all the misty vapor gather. If not, then make all the misty vapor disperse."
Misty vapor then gathered after the prayer, signifying that the two could marry. When intimate, the two made a fan out of grass to screen their faces, which is why in modern-day marriages the couple hold a fan together. In their union, the two represented Yin and Yang: Fuxi was associated with Yang and masculinity, and Nüwa with Yin and femininity. This is further expressed in Fuxi receiving a carpenter's square, which symbolizes his identification with the physical world, because a carpenter's square is associated with straight lines and squares and hence a more straightforward mindset. Meanwhile, Nüwa was given a compass to symbolize her identification with the heavens, because a compass is associated with curves and circles and hence a more abstract mindset. Their marriage symbolized the union between heaven and Earth. Other versions have Nüwa invent the compass rather than receive it as a gift. In addition, the system of male and female sex, the yang-yin philosophy, is expressed here in a complex way: first as Fuxi and Nüwa, then as a compass (masculine) and a square (feminine), and thirdly, as Nüwa (woman) with a compass (man) and Fuxi (man) with a square (woman).
Nüwa Mends the Heavens
Nüwa Mends the Heavens () is a well-known theme in Chinese culture. The courage and wisdom of Nüwa inspired the ancient Chinese to seek control over nature's elements, and the story has become a favorite subject of Chinese poets, painters, and sculptors, appearing in novels, films, paintings, and sculptures, e.g. the sculptures that decorate Nanshan and Ya'an.
The Huainanzi tells an ancient story about how the four pillars that support the sky crumbled inexplicably. Other sources have tried to explain the cause, i.e. the battle between Gong Gong and Zhuanxu or Zhurong. Unable to accept his defeat, Gong Gong deliberately banged his head onto Mount Buzhou (不周山), which was one of the four pillars. Half of the sky fell, creating a gaping hole, and the Earth itself was cracked; the Earth's axis mundi was tilted towards the southeast while the sky rose into the northwest. This is said to be the reason why the western region of China is higher than the eastern, and why most of its rivers flow towards the southeast. The same explanation is applied to the Sun, Moon, and stars, which moved into the northwest. A wildfire burnt the forests and led the wild animals to run amok and attack innocent people, while the water pouring out of the crack in the earth showed no sign of abating.
Nüwa pitied the humans she had made and attempted to repair the sky. She gathered five-colored stones (red, yellow, blue, black, and white) from the riverbed, melted them, and used them to patch up the sky: since then the sky (clouds) have been colorful. She then killed a giant turtle (or tortoise), named Ao in some versions, and cut off its four legs to use as new pillars to support the sky. The repair was not perfect, however, because the unequal length of the legs left the sky tilted. After the job was done, Nüwa drove away the wild animals, extinguished the fire, and controlled the flood with a huge amount of ashes from burning reeds, and the world became as peaceful as it had been before.
Empress Nüwa
Many Chinese know well their Three Sovereigns and Five Emperors, i.e. the early leaders of humanity as well as culture heroes according to the Northern Chinese belief. But the lists vary and depend on the sources used. One version includes Nüwa as one of the Three Sovereigns, who reigned after Fuxi and before Shennong.
The myth of the Three Sovereigns sees the three as demigod figures, and the myth is used to stress the importance of an imperial reign. The variation between sources stems from China being generally divided before the Qin and Han dynasties, and the version with Fuxi, Shennong, and Nüwa was used to emphasize rule and structure.
In her matriarchal reign, she battled a neighboring tribal chief, defeated him, and took him to the peak of a mountain. Ashamed to have been defeated by a woman, the chief banged his head against the heavenly bamboo to kill himself and to take revenge. His act tore a hole in the sky and unleashed a flood over the whole world. The flood killed all people except Nüwa and her army, which was protected by her divinity. After that, Nüwa patched the sky with five-colored stones until the flood receded.
Popular culture
The Ming dynasty fantasy novel Investiture of the Gods (1567) has Nüwa being an instigator of the Shang dynasty's collapse, as she sent the fox demon Daji to corrupt King Zhou for the latter verbally desecrating her statue at a temple.
The Qing dynasty novel Dream of the Red Chamber (1754) narrates how Nüwa gathered 36,501 stones to patch the sky but left one unused. The unused stone plays an important role in the novel's storyline.
A statue of the goddess Nüwa named Sky Patching, by Yuan Xikun, was exhibited at Times Square, New York City, on 19 April 2012 to celebrate Earth Day, symbolizing the importance of protecting the ozone layer. This 3.9-meter-tall statue had previously been exhibited in Beijing and has been on display at the Vienna International Centre in Vienna since 21 November 2012.
The story of Nüwa patching the sky was retold by Carol Chen in her book Goddess Nuwa Patches Up the Sky (2014), illustrated by Meng Xianlong.
In Shin Megami Tensei 5, Nuwa (voiced by Ayana Taketatsu) is the partner to Shohei Yakumo (voiced by Tomokazu Sugita) as two of the main characters who aid the protagonist.
In the Gremlins animated series, Nuwa (voiced by Sandra Oh) is portrayed as the creator of the Mogwai species that Gizmo originated from and fell into a depression when the humans could not properly coexist with them.
See also
Flood Mythology of China
Explanatory notes
Citations
General bibliography
.
.
.
Further reading
External links
Three Sovereigns and Five Emperors
Investiture of the Gods characters
Journey to the West characters
Dream of the Red Chamber characters
Arts goddesses
Bodhisattvas
Buddhist goddesses
Deities in Chinese folk religion
Chinese goddesses
Creation myths
Creator goddesses
Marriage goddesses
Mother goddesses
Mythological queens
Snake goddesses
Sky supporters
Taoist deities
Legendary progenitors
Heroes in mythology and legend | Nüwa | Astronomy | 4,098 |
68,140,494 | https://en.wikipedia.org/wiki/Adrenal%20androgen-stimulating%20hormone | Adrenal androgen stimulating hormone (AASH), also known as cortical androgen stimulating hormone (CASH), is a hypothetical hormone which has been proposed to stimulate the adrenal glands to produce adrenal androgens such as dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulfate (DHEA-S), and androstenedione (A4). It is hypothesized to be involved in adrenarche and adrenopause. The existence of this hormone is controversial and disputed and it has not been identified to date. A number of other mechanisms and/or hormones may instead play the functional role of the so-called AASH.
See also
Adrenocorticotrophic hormone (ACTH)
References
Biochemistry
Hormones
Adrenal gland | Adrenal androgen-stimulating hormone | Chemistry,Biology | 177 |
66,110,715 | https://en.wikipedia.org/wiki/Slot-die%20coating | Slot-die coating is a coating technique for the application of solution, slurry, hot-melt, or extruded thin films onto typically flat substrates such as glass, metal, paper, fabric, plastic, or metal foils. The process was first developed for the industrial production of photographic papers in the 1950s. It has since become relevant in numerous commercial processes and nanomaterials-related research fields.
Slot-die coating produces thin films via solution processing. The desired coating material is typically dissolved or suspended into a precursor solution or slurry (sometimes referred to as "ink") and delivered onto the surface of the substrate through a precise coating head known as a slot-die. The slot-die has a high aspect ratio outlet controlling the final delivery of the coating liquid onto the substrate. This results in the continuous production of a wide layer of coated material on the substrate, with adjustable width depending on the dimensions of the slot-die outlet. By closely controlling the rate of solution deposition and the relative speed of the substrate, slot-die coating affords thin material coatings with easily controllable thicknesses in the range of 10 nanometers to hundreds of micrometers after evaporation of the precursor solvent.
Commonly cited benefits of the slot-die coating process include its pre-metered thickness control, non-contact coating mechanism, high material efficiency, scalability of coating areas and throughput speeds, and roll-to-roll compatibility. The process also allows for a wide working range of layer thickness and precursor solution properties such as material choice, viscosity, and solids content. Commonly cited drawbacks of the slot-die coating process include its comparatively high complexity of apparatus and process optimization relative to similar coating techniques such as blade coating and spin coating. Furthermore, slot-die coating falls into the category of coating processes rather than printing processes. It is therefore better suited for coating of uniform, thin material layers rather than printing or consecutive buildup of complex images and patterns.
Coating apparatus
Typical components
Slot-die coating equipment is available in a variety of configurations and form factors. However, the vast majority of slot-die processes are driven by a similar set of common core components. These include:
A fluid reservoir to store the main supply of coating fluid for the system
A pump to drive the coating fluid through the system
A slot-die to distribute the coating fluid across the desired coating width before coating onto the substrate
A substrate mounting system to support the substrate in a controlled manner as it moves through the system
A coating motion system to drive the relative speed of the slot-die and substrate in a controlled manner during coating
Depending on the complexity of the coating apparatus, a slot-die coating system may include additional modules for e.g. precise positioning of the slot-die over the substrate, particulate filtering of the coating solution, pre-treatment of the substrate (e.g. cleaning and surface energy modification), and post-processing steps (e.g. drying, curing, calendering, printing, slitting, etc.).
Industrial coating systems
Slot-die coating was originally developed for industrial use and remains primarily applied in production-scale settings. This is due to its potential for large-scale production of high-value thin films and coatings at a low operating cost via roll-to-roll and sheet-to-sheet line integration. Such roll-to-roll and sheet-to-sheet coating systems are similar in their intent for large-scale production, but are distinguished from each other by the physical rigidity of the substrates they handle. Roll-to-roll systems are designed to coat and handle flexible substrate rolls such as paper, fabric, plastic or metal foils. Conversely, sheet-to-sheet systems are designed to coat and handle rigid substrate sheets such as glass, metal, or plexiglass. Combinations of these systems, such as roll-to-sheet lines, are also possible.
Both industrial roll-to-roll and sheet-to-sheet systems typically feature slot-dies in the range of 300 to 1000 mm in coating width, though slot-dies up to 4000 mm wide have been reported. Commercial slot-die systems are claimed to operate at speeds up to several hundred square meters per minute, with roll-to-roll systems typically offering higher throughput due to decreased complexity of substrate handling. Such large-scale coating systems can be driven by a variety of industrial pumping solutions including gear pumps, progressive cavity pumps, pressure pots, and diaphragm pumps depending on process requirements.
Roll-to-roll lines
To handle flexible substrates, roll-to-roll lines typically use a series of rollers to continually drive the substrate through the various stations of the process line. The bare substrate originates at an "unwind" roll at the start of the line and is collected at a "rewind" roll at the end. Hence, the substrate is often referred to as a "web" as it winds its way through the process line from start to finish. When a substrate roll has been fully processed, it is collected from the rewind roll, allowing for a new, bare substrate roll to be mounted onto the unwind roller to begin the process again. Slot-die coating often comprises just a single step of an overall roll-to-roll process. The slot-die is typically mounted in a fixed position on the roll-to-roll line, dispensing coating fluid onto the web in a continuous or patch-based manner as the substrate passes by. Because the substrate web spans all stations of the roll-to-roll line simultaneously, the individual processes at these stations are highly coupled and must be optimized to work in tandem with each other at the same web speed.
Sheet-to-sheet lines
The rigid substrates employed in sheet-to-sheet systems are not compatible with the roll-to-roll processing method. Sheet-to-sheet systems rely instead on a rack-based system to transport individual sheets between the various stations of a process line, where transfer between stations may occur in a manual or automated manner. Sheet-to-sheet lines are therefore more analogous to a series of semi-coupled batch operations rather than a single continuous process. This allows for easier optimization of individual unit operations at the expense of potentially increased handling complexity and reduced throughput. Furthermore, the need to start and stop the slot-die coating process for each substrate sheet places higher tolerance requirements on the leading and trailing edge uniformity of the slot-die step. In sheet-to-sheet lines, the substrate may be fixed in place as the substrate passes underneath on a moving support bed (sometimes referred to as a "chuck"). Alternatively, the slot-die may move during coating while the substrate remains fixed in place.
Lab-scale development tools
Miniaturized slot-die tools have become increasingly available to support the development of new roll-to-roll compatible processes prior to the requirement of full pilot- and production-scale equipment. These tools feature similar core components and functionality as compared to larger slot-die coating lines, but are designed to integrate into pre-production research environments. This is typically achieved by e.g. accepting standard A4 sized substrate sheets rather than full substrate rolls, using syringe pumps rather than industrial pumping solutions, and relying upon hot-plate heating rather than large industrial drying ovens, which can otherwise reach lengths of several meters to provide suitable residence times for drying.
Because the slot-die coating process can be readily scaled between large and small areas by adjusting the size of the slot-die and throughput speed, processes developed on lab-scale tools are considered to be reasonably scalable to industrial roll-to-roll and sheet-to-sheet coating lines. This has led to significant interest in slot-die coating as a method of scaling new thin film materials and devices, particularly in the sphere of thin film solar cell research for e.g. perovskite and organic photovoltaics.
Common coating modalities
Slot-die hardware can be applied in several distinct coating modalities, depending on the requirements of a given process. These include:
Proximity coating, in which the substrate is supported by a hard surface (e.g. a precision backing roll or moving support bed) and the slot-die is held at a relatively small coating gap (typically 25 μm to several mm away from the substrate, depending on the wet thickness of the coated layer).
Curtain coating, in which the substrate is supported by a hard surface (e.g. a precision backing roll or moving support bed) and the slot-die is held at a much larger coating gap, enabling much higher coating speeds as long as a suitable Weber number is achieved.
Tensioned web over slot-die coating, in which the substrate web is suspended between two idle rollers placed on opposite sides of the slot-die. The web is then pressed against the lips of the slot-die such that the slot-die itself applies tension to the web. When fluid is pumped through the slot-die onto the substrate, the fluid lubricates the slot-die-substrate interface, preventing the slot-die from scratching the substrate during coating.
The dynamics of proximity coating have been extensively studied and applied over a wide range of scales and applications. Furthermore, the concepts governing proximity coating are relevant in understanding the behavior of other coating modalities. Proximity coating is therefore considered to be the default configuration for the purposes of this introductory article, though curtain coating and tensioned web over slot die configurations remain highly relevant in industrial manufacturing.
Key process parameters
Film thickness control
Slot-die coating is a non-contact coating method, in which the slot-die is typically held over the substrate at a height several times higher than the target wet film thickness. The coating fluid transfers from the slot-die to the substrate via a fluid bridge that spans the air gap between the slot-die lips and substrate surface. This fluid bridge is commonly referred to as the coating meniscus or coating bead. The thickness of the resulting wet coated layer is controlled by tuning the ratio between the applied volumetric pump rate and areal coating rate. Unlike in self-metered coating methods such as blade- and bar coating, the slot-die does not influence the thickness of the wet coated layer via any form of destructive physical contact or scraping. The height of the slot-die therefore does not determine the thickness of the wet coated layer. The height of the slot-die is instead significant in determining the quality of the coated film, as it controls the distance that must be spanned by the meniscus to maintain a stable coating process.
Slot-die coating operates via a pre-metered liquid coating mechanism. The thickness of the wet coated layer (d_wet) is therefore significantly determined by the width of coating (w), the volumetric pump rate (Q), and the coating speed, or relative speed between the slot-die and the substrate during coating (v), according to d_wet = Q / (w · v). Increasing the pump rate increases the thickness of the wet layer, while increasing the coating speed or coating width decreases the wet layer thickness. The coating width is typically a fixed value for a given slot-die process. Hence, pump rate and coating speed can be used to calculate, control, and adjust the wet film thickness in a highly predictable manner. However, deviation from this idealized relationship can occur in practice due to non-ideal behavior of materials and process components; for example when using highly viscoelastic fluids, or a sub-optimal process setup where fluid creeps up the slot-die component rather than transferring fully to the substrate.
The final thickness of the dry layer after solvent evaporation (d_dry) is further determined by the solids concentration of the precursor solution (c) and the volumetric density of the coated material in its final form (ρ), giving d_dry = d_wet · c / ρ. Increasing the solids content of the precursor solution increases the thickness of the dry layer, while using a denser material results in a thinner dry layer for a given concentration.
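The two relations above lend themselves to a quick numerical check. The following is a minimal sketch, assuming illustrative example values for pump rate, coating width, speed, solids concentration and film density rather than recommended process settings.

```python
# Pre-metered thickness relations: d_wet = Q / (w * v) and d_dry = d_wet * c / rho.
# All numerical values below are illustrative assumptions.

def wet_thickness(Q, w, v):
    """Wet film thickness [m] from pump rate Q [m^3/s], coating width w [m], speed v [m/s]."""
    return Q / (w * v)

def dry_thickness(d_wet, c, rho):
    """Dry film thickness [m] from solids concentration c [kg/m^3] and dry film density rho [kg/m^3]."""
    return d_wet * c / rho

Q = 1.0e-6 / 60          # 1 mL/min expressed in m^3/s
w, v = 0.05, 0.01        # 50 mm coating width, 10 mm/s coating speed
d_wet = wet_thickness(Q, w, v)                    # ~33 micrometres wet
d_dry = dry_thickness(d_wet, c=10.0, rho=1200.0)  # ~0.28 micrometres dry
print(f"wet film: {d_wet * 1e6:.1f} um, dry film: {d_dry * 1e9:.0f} nm")
```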
Film quality control
As with all solution processed coating methods, the final quality of a thin film produced via slot-die coating depends on a wide array of parameters both intrinsic and external to the slot-die itself. These parameters can be broadly categorized into:
Coating window effects, determining the stability of fluid transfer between the slot-die and substrate in an ideal slot-die process isolated from external imperfections
Downstream process effects, determining the behavior of the coating fluid on the substrate surface after exiting the slot-die component
External effects, determining the degree to which the coating apparatus is capable of delivering the ideal coating process characterized by the pre-metered slot-die coating mechanism and the coating window of a given process
Coating window parameters
Under ideal conditions, the potential to achieve a defect-free film via slot-die coating is entirely governed by the coating window of a given process. The coating window is a multivariable map of key process parameters, describing the range over which they can be applied together to achieve a defect-free film. Understanding the coating window behavior of a typical slot-die process enables operators to observe defects in a slot-die coated layer and intuitively determine a course of action for defect resolution. The key process parameters used to define the coating window typically include:
The ratio of slot-die height to wet film thickness (G / d_wet)
The volumetric pump rate (Q)
The coating speed, or relative speed of the substrate (v)
The capillary number of the coating liquid (Ca = μ·v/σ, where μ is the fluid viscosity and σ its surface tension)
The difference in pressure across the upstream and downstream faces of the meniscus (ΔP)
The coating window can be visualized by plotting two such key parameters against each other while assuming the others to remain constant. In an initial simple representation, the coating window can be described by plotting the relationship between viable pump rates and coating speeds for a given process. Excessive pumping or insufficient coating speeds result in defect spilling of the coating liquid outside of the desired coating area, while coating too quickly or pumping insufficiently results in defect breakup of the meniscus. The pump rate and coating speed can therefore be adjusted to directly compensate for these defects, though changing these parameters also affects wet film thickness via the pre-metered coating mechanism. Implicit in this relationship is the effect of the slot-die height parameter, as this affects the distance over which the meniscus must be stretched while remaining stable during coating. Raising the slot-die higher can thus counteract spilling defects by stretching the meniscus further, while lowering the slot-die can counteract streaking and breakup defects by reducing the gap that the meniscus must breach. Other helpful coating window plots to consider include the relationship between fluid capillary number and slot-die height, as well as the relationship between pressure across the meniscus and slot-die height. The former is particularly relevant when considering changes in fluid viscosity and surface tension (i.e. the effect of coating various materials with significantly different rheology), while the latter is relevant in the context of applying a vacuum box at the upstream face of the meniscus to stabilize the meniscus against breakup.
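As a rough illustration of how such parameters are tracked in practice, the sketch below computes two of the dimensionless quantities mentioned above, the capillary number and the gap-to-wet-thickness ratio, for assumed fluid and process values; the screening limits used are placeholders, since the real boundaries of the coating window must be established for each fluid and coater.

```python
# Illustrative calculation of two coating-window quantities. The fluid
# properties, process values, and the screening limits are assumptions.

def capillary_number(viscosity_pa_s, speed_m_s, surface_tension_n_m):
    """Ca = mu * v / sigma for the coating liquid."""
    return viscosity_pa_s * speed_m_s / surface_tension_n_m

Ca = capillary_number(viscosity_pa_s=0.02, speed_m_s=0.01, surface_tension_n_m=0.03)
gap_ratio = 100e-6 / 30e-6          # coating gap / wet film thickness

print(f"Ca = {Ca:.3f}, gap/wet-thickness = {gap_ratio:.1f}")
# Placeholder screening rule: flag operating points far outside an assumed window.
if not 1.0 < gap_ratio < 10.0:
    print("gap ratio outside the assumed window - expect spilling or meniscus breakup")
```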
Downstream process effects
In reality, the final quality of a slot-die coated film is heavily influenced by a variety of factors beyond the parameter boundaries of the ideal coating window. Surface energy effects and drying effects are examples of common downstream effects with a significant influence on final film morphology. Sub-optimal matching of surface energy between the substrate and coating fluid can cause dewetting of the liquid film after it has been applied to the substrate, resulting in pinholes or beading of the coated layer. Sub-optimal drying processes are also often noted to influence film morphology, resulting in increased thickness at the edge of a film caused by the coffee ring effect. Surface energy and downstream processing must therefore be carefully optimized to maintain the integrity of the slot-die coated layer as it moves through the system, until the final thin film product can be collected.
External effects
Slot-die coating is a highly mechanical process in which uniformity of motion and high hardware tolerances are critical to achieving uniform coatings. Mechanical imperfections such as jittery motion in the pump and coating motion systems, poor parallelism between the slot-die and substrate, and external vibrations in the environment can all lead to undesired variations in film thickness and quality. Slot-die coating apparatus and its environment must therefore be suitably specified to meet the needs of a given process and avoid hardware- and environment-derived defects in the coated film.
Applications
Industrial applications
Slot-die coating was originally developed for the commercial production of photographic films and papers. In the past several decades it has become a critical process in the production of adhesive films, flexible packaging, transdermal and oral pharmaceutical patches, LCD panels, multi-layer ceramic capacitors, lithium-ion batteries and more.
Research applications
With growing interest in the potential of nanomaterials and functional thin film devices, slot-die coating has become increasingly applied in the sphere of materials research. This is primarily attributed to the flexibility, predictability and high repeatability of the process, as well as its scalability and origin as a proven industrial technique. Slot-die coating has been most notably employed in research related to flexible, printed, and organic electronics, but remains relevant in any field where scalable thin film production is required.
Examples of research enabled by slot-die coating include:
Thin film solar cells, to produce electron transport layers, hole transport layers, photoactive layers, and passivating layers in perovskite, organic, quantum dot and multi-junction photovoltaic devices
Solid state and next-gen batteries, to produce electrodes, solid electrolytes, ion selective membranes, protective coatings, and interface modification coatings
Fuel cells and water electrolysis, to produce electrolytes and electrode catalyst coatings
Flexible touch-sensitive surfaces, to produce transparent conductive films
OLED devices, to produce electron transport layers, hole transport layers, and electroactive layers
Printed diagnostics and molecular sensors, to produce active layers and ion selective membranes
Microfluidics and lab-on-a-chip devices, to produce hydrophobic/hydrophilic surface coatings for enhanced liquid flow
Water purification, to produce nanofiltration membranes
Biobased and biodegradable packaging, to produce multilayer barrier foils from sustainable materials
References
Materials science
Coatings | Slot-die coating | Physics,Chemistry,Materials_science,Engineering | 3,718 |
6,668,150 | https://en.wikipedia.org/wiki/Clearing%20the%20neighbourhood | In celestial mechanics, "clearing the neighbourhood" (or dynamical dominance) around a celestial body's orbit describes the body becoming gravitationally dominant such that there are no other bodies of comparable size other than its natural satellites or those otherwise under its gravitational influence.
"Clearing the neighbourhood" is one of three necessary criteria for a celestial body to be considered a planet in the Solar System, according to the definition adopted in 2006 by the International Astronomical Union (IAU). In 2015, a proposal was made to extend the definition to exoplanets.
In the end stages of planet formation, a planet, as so defined, will have "cleared the neighbourhood" of its own orbital zone, i.e. removed other bodies of comparable size. A large body that meets the other criteria for a planet but has not cleared its neighbourhood is classified as a dwarf planet. This includes Pluto, whose orbit is partly inside Neptune's and shares its orbital neighbourhood with many Kuiper belt objects. The IAU's definition does not attach specific numbers or equations to this term, but all IAU-recognised planets have cleared their neighbourhoods to a much greater extent (by orders of magnitude) than any dwarf planet or candidate for dwarf planet.
The phrase stems from a paper presented to the 2000 IAU general assembly by the planetary scientists Alan Stern and Harold F. Levison. The authors used several similar phrases as they developed a theoretical basis for determining if an object orbiting a star is likely to "clear its neighboring region" of planetesimals based on the object's mass and its orbital period. Steven Soter prefers to use the term dynamical dominance, and Jean-Luc Margot notes that such language "seems less prone to misinterpretation".
Prior to 2006, the IAU had no specific rules for naming planets, as no new planets had been discovered for decades, whereas there were well-established rules for naming an abundance of newly discovered small bodies such as asteroids or comets. The naming process for Eris stalled after the announcement of its discovery in 2005, because its size was comparable to that of Pluto. The IAU sought to resolve the naming of Eris by seeking a taxonomical definition to distinguish planets from minor planets.
Criteria
The phrase refers to an orbiting body (a planet or protoplanet) "sweeping out" its orbital region over time, by gravitationally interacting with smaller bodies nearby. Over many orbital cycles, a large body will tend to cause small bodies either to accrete with it, or to be disturbed to another orbit, or to be captured either as a satellite or into a resonant orbit. As a consequence it does not then share its orbital region with other bodies of significant size, except for its own satellites, or other bodies governed by its own gravitational influence. This latter restriction excludes objects whose orbits may cross but that will never collide with each other due to orbital resonance, such as Jupiter and its trojans, Earth and 3753 Cruithne, or Neptune and the plutinos. As to the extent of orbit clearing required, Jean-Luc Margot emphasises "a planet can never completely clear its orbital zone, because gravitational and radiative forces continually perturb the orbits of asteroids and comets into planet-crossing orbits" and states that the IAU did not intend the impossible standard of impeccable orbit clearing.
Stern–Levison's Λ
In their paper, Stern and Levison sought an algorithm to determine which "planetary bodies control the region surrounding them". They defined Λ (lambda), a measure of a body's ability to scatter smaller masses out of its orbital region over a period of time equal to the age of the Universe (Hubble time). Λ is a dimensionless number defined as
Λ = k M² a^(−3/2),
where M is the mass of the body, a is the body's semi-major axis, and k is a function of the orbital elements of the small body being scattered and the degree to which it must be scattered. In the domain of the solar planetary disc, there is little variation in the average values of k for small bodies at a particular distance from the Sun.
If Λ > 1, then the body will likely clear out the small bodies in its orbital zone. Stern and Levison used this discriminant to separate the gravitationally rounded, Sun-orbiting bodies into überplanets, which are "dynamically important enough to have cleared [their] neighboring planetesimals", and unterplanets. The überplanets are the eight most massive solar orbiters (i.e. the IAU planets), and the unterplanets are the rest (i.e. the IAU dwarf planets).
Soter's μ
Steven Soter proposed an observationally based measure μ (mu), which he called the "planetary discriminant", to separate bodies orbiting stars into planets and non-planets. He defines μ as
μ = M / m,
where μ is a dimensionless parameter, M is the mass of the candidate planet, and m is the mass of all other bodies that share an orbital zone, that is all bodies whose orbits cross a common radial distance from the primary, and whose non-resonant periods differ by less than an order of magnitude.
The order-of-magnitude similarity in period requirement excludes comets from the calculation, but the combined mass of the comets turns out to be negligible compared with the other small Solar System bodies, so their inclusion would have little impact on the results. μ is then calculated by dividing the mass of the candidate body by the total mass of the other objects that share its orbital zone. It is a measure of the actual degree of cleanliness of the orbital zone. Soter proposed that if μ > 100, then the candidate body be regarded as a planet.
Margot's Π
Astronomer Jean-Luc Margot has proposed a discriminant, Π (pi), that can categorise a body based only on its own mass, its semi-major axis, and its star's mass. Like Stern–Levison's Λ, Π is a measure of the ability of the body to clear its orbit, but unlike Λ, it is solely based on theory and does not use empirical data from the Solar System. Π is based on properties that are feasibly determinable even for exoplanetary bodies, unlike Soter's μ, which requires an accurate census of the orbital zone.
Π = k m / (M★^(5/2) a^(9/8)),
where m is the mass of the candidate body in Earth masses, a is its semi-major axis in AU, M★ is the mass of the parent star in solar masses, and k is a constant chosen so that Π > 1 for a body that can clear its orbital zone. k depends on the extent of clearing desired and the time required to do so. Margot selected a clearing extent of a fixed multiple of the Hill radius and a time limit of the parent star's lifetime on the main sequence (which is a function of the mass of the star). Then, in the mentioned units and a main-sequence lifetime of 10 billion years, k = 807. The body is a planet if Π > 1. The minimum mass necessary to clear the given orbit is given when Π = 1.
Π is based on a calculation of the number of orbits required for the candidate body to impart enough energy to a small body in a nearby orbit such that the smaller body is cleared out of the desired orbital extent. This is unlike Λ, which uses an average of the clearing times required for a sample of asteroids in the asteroid belt, and is thus biased to that region of the Solar System. Π's use of the main-sequence lifetime means that the body will eventually clear an orbit around the star; Λ's use of a Hubble time means that the star might disrupt its planetary system (e.g. by going nova) before the object is actually able to clear its orbit.
The formula for Π assumes a circular orbit. Its adaptation to elliptical orbits is left for future work, but Margot expects it to be the same as that of a circular orbit to within an order of magnitude.
To accommodate planets in orbit around brown dwarfs, an updated version of the criterion with a uniform clearing time scale of 10 billion years was published in 2024. The values of Π for Solar System bodies remain unchanged.
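As an illustration of how the criterion is applied, the short sketch below evaluates Π from the formula above with k = 807 for a few bodies, using rounded literature values for masses (in Earth masses) and semi-major axes (in AU); small differences from published Π values are therefore expected.

```python
# Margot's discriminant: Pi = k * m / (M**(5/2) * a**(9/8)), with m in Earth
# masses, a in AU, stellar mass M in solar masses, and k = 807.
K = 807.0

def margot_pi(m_earth, a_au, m_star_solar=1.0, k=K):
    return k * m_earth / (m_star_solar ** 2.5 * a_au ** 1.125)

# Rounded values: (mass in Earth masses, semi-major axis in AU).
bodies = {
    "Earth": (1.0, 1.0),
    "Mars": (0.107, 1.524),
    "Ceres": (0.00016, 2.77),
    "Pluto": (0.0022, 39.5),
}
for name, (m, a) in bodies.items():
    value = margot_pi(m, a)
    verdict = "planet" if value > 1 else "dwarf planet candidate"
    print(f"{name:6s} Pi = {value:8.3f}  ({verdict})")
```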
Numerical values
Below is a list of planets and dwarf planets ranked by Margot's planetary discriminant Π, in decreasing order. For all eight planets defined by the IAU, Π is orders of magnitude greater than 1, whereas for all dwarf planets, Π is orders of magnitude less than 1. Also listed are Stern–Levison's Λ and Soter's μ; again, the planets are orders of magnitude greater than 1 for Λ and 100 for μ, and the dwarf planets are orders of magnitude less than 1 for Λ and 100 for μ. Also shown are the distances where Π = 1 and Λ = 1 (where the body would change from being a planet to being a dwarf planet).
The mass of Sedna is not known; it is very roughly estimated here as , on the assumption of a density of about .
Disagreement
Stern, the principal investigator of the New Horizons mission to Pluto, disagreed with the reclassification of Pluto on the basis of its inability to clear a neighbourhood. He argued that the IAU's wording is vague, and that — like Pluto — Earth, Mars, Jupiter and Neptune have not cleared their orbital neighbourhoods either. Earth co-orbits with 10,000 near-Earth asteroids (NEAs), and Jupiter has 100,000 trojans in its orbital path. "If Neptune had cleared its zone, Pluto wouldn't be there", he said.
The IAU category of 'planets' is nearly identical to Stern's own proposed category of 'überplanets'. In the paper proposing Stern and Levison's discriminant, they stated, "we define an überplanet as a planetary body in orbit about a star that is dynamically important enough to have cleared its neighboring planetesimals ..." and a few paragraphs later, "From a dynamical standpoint, our solar system clearly contains 8 überplanets" — including Earth, Mars, Jupiter, and Neptune. Although Stern proposed this to define dynamical subcategories of planets, he rejected it for defining what a planet is, advocating the use of intrinsic attributes over dynamical relationships.
See also
List of Solar System objects
List of gravitationally rounded objects of the Solar System
List of Solar System objects by size
List of notable asteroids
Sphere of influence (astrodynamics)
Notes
References
Astronomical controversies
Celestial mechanics
Definition of planet
Dynamics of the Solar System
Planetary science
Pluto's planethood
Solar System | Clearing the neighbourhood | Physics,Astronomy | 2,141 |
1,140,608 | https://en.wikipedia.org/wiki/Tridilosa | Tridilosa is a very light, strong, and materials-efficient three-dimensional structural system made of steel and concrete that is widely used in civil engineering. Tridilosa was invented by the Mexican engineer Heberto Castillo.
Among the most remarkable features of this structure is that it can save up to 66% of the concrete and up to 40% of the steel used, because concrete fill is not required in the tension zone, only in the upper compression zone. It is so light that it can float on water, yet it is three times stronger than a traditional solid concrete slab. It was used, for example, to construct the 54-floor World Trade Center of Mexico City.
External links
Heberto Castillo Martinez
Science and technology in Mexico
Mexican inventions | Tridilosa | Physics | 149 |
7,781,606 | https://en.wikipedia.org/wiki/Plasma%20wave%20instrument | A plasma wave instrument (PWI), also known as a plasma wave receiver, is a device capable of detecting vibrations in outer space plasma and transforming them into audible sound waves or air vibrations that can be heard by the human ear. This instrument was pioneered by Donald Gurnett, then a physics professor at the University of Iowa. Plasma wave instruments are commonly employed on space probes such as GEOTAIL, Polar, Voyager I and II (see Plasma Wave Subsystem), and Cassini–Huygens.
Operating principle
Vibrations in the audible frequency range are perceived by humans when air vibrates against their eardrum. Air, or some other vibrating medium such as water, is essential for sound perception by the human ear. Without a medium to transmit it, the sound produced by a source will not be heard by a human. There is no air in outer space, nor is there any other type of medium capable of transmitting vibrations from a source to a human ear. However, there are sources in outer space that vibrate at frequencies audible to humans if only there were some transmitting medium to carry those vibrations from the source to a human eardrum.
One such source capable of vibrating at audible frequencies (ranging from 45 to 20,000 vibrations per second) is plasma. Plasma is a collection of charged particles, such as free electrons or ionized gas atoms. Examples of plasma include solar flares, solar wind, neon signs, and fluorescent lamps. Plasma interacts with electrical and magnetic fields in ways that can result in vibrations across various frequencies, including the audible range.
Other applications
The recordings of interplanetary and outer space plasma vibrations, captured by plasma wave instruments, were provided by NASA to composer Terry Riley and Kronos Quartet founder David Harrington as inspiration for the composition of "Sun Rings", an 85-minute multimedia piece for string quartet and choir. "Sun Rings" was performed on November 3, 2006, at the Veteran's Auditorium in Providence, Rhode Island.
References
Plasma diagnostics
Remote sensing
Spacecraft instruments | Plasma wave instrument | Physics,Technology,Engineering | 409 |
1,146,294 | https://en.wikipedia.org/wiki/Current%20algebra | Certain commutation relations among the current density operators in quantum field theories define an infinite-dimensional Lie algebra called a current algebra. Mathematically these are Lie algebras consisting of smooth maps from a manifold into a finite dimensional Lie algebra.
History
The original current algebra, proposed in 1964 by Murray Gell-Mann, described weak and electromagnetic currents of the strongly interacting particles, hadrons, leading to the Adler–Weisberger formula and other important physical results. The basic concept, in the era just preceding quantum chromodynamics, was that even without knowing the Lagrangian governing hadron dynamics in detail, exact kinematical information – the local symmetry – could still be encoded in an algebra of currents.
The commutators involved in current algebra amount to an infinite-dimensional extension of the Jordan map, where the quantum fields represent infinite arrays of oscillators.
Current algebraic techniques are still part of the shared background of particle physics when analyzing symmetries and indispensable in discussions of the Goldstone theorem.
Example
In a non-Abelian Yang–Mills symmetry, where V_a(x) and A_a(x) are flavor-current and axial-current 0th components (charge densities), respectively, the paradigm of a current algebra is the set of equal-time commutators
[V_a(x), V_b(y)] = i f_abc V_c(x) δ(x − y) and [V_a(x), A_b(y)] = i f_abc A_c(x) δ(x − y),
and
[A_a(x), A_b(y)] = i f_abc V_c(x) δ(x − y),
where f_abc are the structure constants of the Lie algebra. To get meaningful expressions, these must be normal ordered.
The algebra resolves to a direct sum of two commuting algebras, a left algebra L and a right algebra R, upon defining
L_a(x) = ½ (V_a(x) − A_a(x)) and R_a(x) = ½ (V_a(x) + A_a(x)),
whereupon
[L_a(x), L_b(y)] = i f_abc L_c(x) δ(x − y), [R_a(x), R_b(y)] = i f_abc R_c(x) δ(x − y), and [L_a(x), R_b(y)] = 0.
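A finite-dimensional toy model can make the structure above concrete. The sketch below verifies the analogous commutation relations for integrated su(2) vector and axial charges acting on a left-plus-right doublet, and checks that the left and right combinations commute; this is only an illustration at the level of charges, not of the local charge densities, and all names in it are illustrative.

```python
import numpy as np

# Toy check of the charge-level algebra: su(2) "vector" and "axial" charges
# acting on a left + right doublet (4-dimensional space).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]                      # su(2) generators
f = np.zeros((3, 3, 3))                         # structure constants f_abc = epsilon_abc
f[0, 1, 2] = f[1, 2, 0] = f[2, 0, 1] = 1
f[0, 2, 1] = f[2, 1, 0] = f[1, 0, 2] = -1

Z = np.zeros((2, 2), dtype=complex)
V = [np.block([[t, Z], [Z, t]]) for t in T]     # vector charge: same sign on both chiralities
A = [np.block([[t, Z], [Z, -t]]) for t in T]    # axial charge: opposite sign on the two chiralities

def comm(X, Y):
    return X @ Y - Y @ X

# [V_a, V_b] = i f_abc V_c,  [V_a, A_b] = i f_abc A_c,  [A_a, A_b] = i f_abc V_c
for a in range(3):
    for b in range(3):
        fV = sum(1j * f[a, b, c] * V[c] for c in range(3))
        fA = sum(1j * f[a, b, c] * A[c] for c in range(3))
        assert np.allclose(comm(V[a], V[b]), fV)
        assert np.allclose(comm(V[a], A[b]), fA)
        assert np.allclose(comm(A[a], A[b]), fV)

# Left/right combinations L = (V - A)/2 and R = (V + A)/2 commute with each other.
L = [(v - ax) / 2 for v, ax in zip(V, A)]
R = [(v + ax) / 2 for v, ax in zip(V, A)]
assert all(np.allclose(comm(L[a], R[b]), 0) for a in range(3) for b in range(3))
print("vector/axial charge algebra and left-right decomposition verified")
```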
Conformal field theory
For the case where space is a one-dimensional circle, current algebras arise naturally as a central extension of the loop algebra, known as Kac–Moody algebras or, more specifically, affine Lie algebras. In this case, the commutator and normal ordering can be given a very precise mathematical definition in terms of integration contours on the complex plane, thus avoiding some of the formal divergence difficulties commonly encountered in quantum field theory.
When the Killing form of the Lie algebra is contracted with the normal-ordered product of two currents, one obtains the energy–momentum tensor of a two-dimensional conformal field theory. When this tensor is expanded as a Laurent series, the resulting algebra is called the Virasoro algebra. This calculation is known as the Sugawara construction.
The general case is formalized as the vertex operator algebra.
See also
Affine Lie algebra
Chiral model
Jordan map
Virasoro algebra
Vertex operator algebra
Kac–Moody algebra
Notes
References
Quantum field theory
Lie algebras | Current algebra | Physics | 500 |
796,928 | https://en.wikipedia.org/wiki/Consistent%20histories | In quantum mechanics, the consistent histories or simply "consistent quantum theory" interpretation generalizes the complementarity aspect of the conventional Copenhagen interpretation. The approach is sometimes called decoherent histories, although in other work "decoherent histories" refers to a more specialized formulation.
First proposed by Robert Griffiths in 1984, this interpretation of quantum mechanics is based on a consistency criterion that then allows probabilities to be assigned to various alternative histories of a system such that the probabilities for each history obey the rules of classical probability while being consistent with the Schrödinger equation. In contrast to some interpretations of quantum mechanics, the framework does not include "wavefunction collapse" as a relevant description of any physical process, and emphasizes that measurement theory is not a fundamental ingredient of quantum mechanics. Consistent histories allows predictions related to the state of the universe needed for quantum cosmology.
Key assumptions
The interpretation rests on three assumptions:
states in Hilbert space describe physical objects,
quantum predictions are not deterministic, and
physical systems have no single unique description.
The third assumption generalizes complementarity and this assumption separates consistent histories from other quantum theory interpretations.
Formalism
Histories
A homogeneous history H_i (here i labels different histories) is a sequence of propositions P_{i,j} specified at different moments of time t_j (here j labels the times). We write this as:
H_i = (P_{i,1}, P_{i,2}, …, P_{i,n_i})
and read it as "the proposition P_{i,1} is true at time t_1 and then the proposition P_{i,2} is true at time t_2 and then …". The times t_1 < t_2 < … < t_{n_i} are strictly ordered and called the temporal support of the history.
Inhomogeneous histories are multiple-time propositions which cannot be represented by a homogeneous history. An example is the logical OR of two homogeneous histories: H_i ∨ H_j.
These propositions can correspond to any set of questions that include all possibilities.
Examples might be the three propositions meaning "the electron went through the left slit", "the electron went through the right slit" and "the electron didn't go through either slit". One of the aims of the approach is to show that classical questions such as, "where are my keys?" are consistent. In this case one might use a large number of propositions each one specifying the location of the keys in some small region of space.
Each single-time proposition can be represented by a projection operator acting on the system's Hilbert space (we use "hats" to denote operators). It is then useful to represent homogeneous histories by the time-ordered product of their single-time projection operators. This is the history projection operator (HPO) formalism developed by Christopher Isham, which naturally encodes the logical structure of the history propositions.
Consistency
An important construction in the consistent histories approach is the class operator for a homogeneous history:
Ĉ_{H_i} = T ∏_{j=1}^{n_i} P̂_{i,j}(t_j) = P̂_{i,n_i}(t_{n_i}) ⋯ P̂_{i,2}(t_2) P̂_{i,1}(t_1)
The symbol T indicates that the factors in the product are ordered chronologically according to their values of t_j: the "past" operators with smaller values of t appear on the right side, and the "future" operators with greater values of t appear on the left side.
This definition can be extended to inhomogeneous histories as well.
Central to the consistent histories is the notion of consistency. A set of histories is consistent (or strongly consistent) if
Tr(Ĉ_{H_i} ρ Ĉ†_{H_j}) = 0
for all i ≠ j. Here ρ represents the initial density matrix, and the operators are expressed in the Heisenberg picture.
The set of histories is weakly consistent if
Re Tr(Ĉ_{H_i} ρ Ĉ†_{H_j}) = 0
for all i ≠ j.
Probabilities
If a set of histories is consistent then probabilities can be assigned to them in a consistent way. We postulate that the probability of history H_i is simply
Pr(H_i) = Tr(Ĉ_{H_i} ρ Ĉ†_{H_i}),
which obeys the axioms of probability if the histories H_i come from the same (strongly) consistent set.
As an example, this means the probability of " OR " equals the probability of "" plus the probability of "" minus the probability of " AND ", and so forth.
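As an illustration of these definitions, the sketch below builds the class operators for two-time histories of a single qubit with trivial dynamics, assembles the matrix of traces D_ij = Tr(Ĉ_i ρ Ĉ_j†) (whose vanishing off-diagonal elements express the consistency condition and whose diagonal gives the probabilities), and contrasts a consistent set with an inconsistent one. The example system, bases, and states are chosen purely for illustration.

```python
import numpy as np

# Two-time histories of a single qubit with trivial dynamics (H = 0), so the
# Heisenberg-picture projectors equal the Schroedinger ones. Illustrative only.

def projectors(basis):
    """Rank-1 projectors onto the columns of a 2x2 unitary `basis`."""
    return [np.outer(basis[:, k], basis[:, k].conj()) for k in range(2)]

Z_BASIS = np.eye(2, dtype=complex)                                  # |0>, |1>
X_BASIS = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # |+>, |->

def decoherence_matrix(rho, projs_t1, projs_t2):
    """D[i, j] = Tr(C_i rho C_j^dagger) with class operators C = P_a2(t2) P_a1(t1)."""
    C = [P2 @ P1 for P1 in projs_t1 for P2 in projs_t2]
    return np.array([[np.trace(Ci @ rho @ Cj.conj().T) for Cj in C] for Ci in C])

def report(label, D):
    off_diagonal = D - np.diag(np.diag(D))
    consistent = np.allclose(off_diagonal, 0)
    print(f"{label}: consistent = {consistent}, diagonal = {np.real(np.diag(D)).round(3)}")

rho_mixed = np.eye(2, dtype=complex) / 2            # maximally mixed state
rho_pure = np.diag([1.0, 0.0]).astype(complex)      # |0><0|

# z-basis questions at both times, mixed initial state: a consistent set.
report("z then z", decoherence_matrix(rho_mixed, projectors(Z_BASIS), projectors(Z_BASIS)))
# x-basis question at t1, z-basis at t2, pure initial state: not consistent.
report("x then z", decoherence_matrix(rho_pure, projectors(X_BASIS), projectors(Z_BASIS)))
```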
Interpretation
The interpretation based on consistent histories is used in combination with the insights about quantum decoherence.
Quantum decoherence implies that irreversible macroscopic phenomena (hence, all classical measurements) render histories automatically consistent, which allows one to recover classical reasoning and "common sense" when applied to the outcomes of these measurements. More precise analysis of decoherence allows (in principle) a quantitative calculation of the boundary between the classical domain and the quantum domain. According to Roland Omnès,
In order to obtain a complete theory, the formal rules above must be supplemented with a particular Hilbert space and rules that govern dynamics, for example a Hamiltonian.
In the opinion of others this still does not make a complete theory as no predictions are possible about which set of consistent histories will actually occur. In other words, the rules of consistent histories, the Hilbert space, and the Hamiltonian must be supplemented by a set selection rule. However, Robert B. Griffiths holds the opinion that asking the question of which set of histories will "actually occur" is a misinterpretation of the theory; histories are a tool for description of reality, not separate alternate realities.
Proponents of this consistent histories interpretation—such as Murray Gell-Mann, James Hartle, Roland Omnès and Robert B. Griffiths—argue that their interpretation clarifies the fundamental disadvantages of the old Copenhagen interpretation, and can be used as a complete interpretational framework for quantum mechanics.
In Quantum Philosophy, Roland Omnès provides a less mathematical way of understanding this same formalism.
The consistent histories approach can be interpreted as a way of understanding which sets of classical questions can be consistently asked of a single quantum system, and which sets of questions are fundamentally inconsistent, and thus meaningless when asked together. It thus becomes possible to demonstrate formally why it is that the questions which Einstein, Podolsky and Rosen assumed could be asked together, of a single quantum system, simply cannot be asked together. On the other hand, it also becomes possible to demonstrate that classical, logical reasoning often does apply, even to quantum experiments – but we can now be mathematically exact about the limits of classical logic.
See also
HPO formalism
References
External links
The Consistent Histories Approach to Quantum Mechanics – Stanford Encyclopedia of Philosophy
Interpretations of quantum mechanics
Quantum measurement | Consistent histories | Physics | 1,214 |
159,284 | https://en.wikipedia.org/wiki/Novartis | Novartis AG is a Swiss multinational pharmaceutical corporation based in Basel, Switzerland. Consistently ranked in the global top five, Novartis is one of the largest pharmaceutical companies in the world and was the fourth largest by revenue in 2022.
Novartis manufactures the drugs clozapine (Clozaril), diclofenac (Voltaren; sold to GlaxoSmithKline in 2015 deal), carbamazepine (Tegretol), valsartan (Diovan), imatinib mesylate (Gleevec/Glivec), cyclosporine (Neoral/Sandimmune), letrozole (Femara), methylphenidate (Ritalin; production ceased 2020), terbinafine (Lamisil), deferasirox (Exjade), and others.
Novartis was formed in 1996 by the merger of Ciba-Geigy and Sandoz, which was considered the largest corporate merger in history at the time. The pharmaceutical and agrochemical divisions of both companies formed Novartis as an independent entity. The name Novartis was based on the Latin "novae artes" (new skills).
After the merger, other Ciba-Geigy and Sandoz businesses were sold or, like Ciba Specialty Chemicals, spun off as independent companies. The Sandoz brand disappeared for three years, but was revived in 2003 when Novartis consolidated its generic drugs businesses into a single subsidiary and named it Sandoz. Novartis divested its agrochemical and genetically modified crops business in 2000 with the spinout of Syngenta in partnership with AstraZeneca, which also divested its agrochemical business. The new company also made a series of acquisitions in order to strengthen its core businesses.
Novartis is a full member of the European Federation of Pharmaceutical Industries and Associations (EFPIA), the Biotechnology Innovation Organization (BIO), the International Federation of Pharmaceutical Manufacturers and Associations (IFPMA), and the Pharmaceutical Research and Manufacturers of America (PhRMA). Novartis is the third most valuable pharmaceutical company in Europe, after Novo Nordisk and Roche.
History
Novartis was created in March 1996 and began operations on 20 December from the merger of Ciba-Geigy and Sandoz Laboratories, both Swiss companies.
Ciba-Geigy
Ciba-Geigy was formed in 1970 by the merger of J. R. Geigy Ltd (founded in Basel in 1857) and CIBA (founded in Basel in 1859).
Ciba began in 1859, when Alexander Clavel (1805–1873) took up the production of fuchsine in his factory for silk-dyeing works in Basel. By 1873, he sold his dye factory to the company Bindschedler and Busch. In 1884, Bindschedler and Busch was transformed into a joint-stock company named "" (Company for Chemical Industry Basel). The acronym, CIBA, was adopted as the company's name in 1945.
The foundation for Geigy was established in 1857, when Johann Rudolf Geigy-Merian (1830–1917) and Johann Muller-Pack acquired a site in Basel, where they built a dyewood mill and a dye extraction plant. Two years later, they began the production of synthetic fuchsine. In 1901, they formed the public limited company Geigy, and the name of the company was changed to J. R. Geigy Ltd in 1914.
CIBA and Geigy merged in 1970 to form Ciba‑Geigy Ltd.
Mid-1990s controversy
In the mid-1990s, state and federal health and environmental agencies identified an increased incidence of childhood cancers in Toms River, New Jersey, from the 1970–1995 period. Multiple investigations by state and federal environmental and health agencies indicated that the likely source of the increased cancer risk was contamination from Toms River Chemical Plant (then operated by Ciba-Geigy), which had been in operation since 1952, and the Reich Farm/Union Carbide. The area was designated a United States Environmental Protection Agency Superfund site in 1983 after an underground plume of toxic chemicals was identified. The following year, a discharge pipe was shut down after a sinkhole at the corner of Bay Avenue and Vaughn Avenue revealed that it had been leaking. The plant ceased operation in 1996. A follow-up study from the 1996–2000 period indicated that while there were more cancer cases than expected, rates had significantly fallen and the difference was statistically insignificant compared to normal statewide cancer rates. Since 1996, the Toms River water system has been subject to the most stringent water testing in New Jersey and is considered safe for consumption. Dan Fagin's Toms River: A Story of Science and Salvation, the 2014 Pulitzer Prize winning book, examined the issue of industrial pollution at the site in detail.
Sandoz
Sandoz is the generic drugs division of Novartis. Before the 1996 merger with Ciba-Geigy to form Novartis, Sandoz Pharmaceuticals (Sandoz AG) was a pharmaceutical company headquartered in Basel, Switzerland (as was Ciba-Geigy), and was best known for developing drugs such as Sandimmune for organ transplantation, the antipsychotic Clozaril, Mellaril Tablets and Serentil Tablets for treating psychiatric disorders, and Cafergot Tablets and Torecan Suppositories for treating migraine headaches.
The Chemiefirma Kern und Sandoz ("Kern and Sandoz Chemistry Firm") was founded in 1886 by Alfred Kern (1850–1893) and Edouard Sandoz (1853–1928). The first dyes manufactured by them were alizarin blue and auramine. After Kern's death, the partnership became the corporation Chemische Fabrik vormals Sandoz in 1895. The company began producing the fever-reducing drug antipyrin in the same year. In 1899, the company began producing the sugar substitute saccharin. Further pharmaceutical research began in 1917 under Arthur Stoll (1887–1971), who founded Sandoz's pharmaceutical department. In 1918, Arthur Stoll isolated ergotamine from ergot; the substance was eventually used to treat migraine and headaches and was introduced under the trade name Gynergen in 1921.
Between the World Wars, Gynergen (1921) and Calcium-Sandoz (1929) were brought to market. Sandoz also produced chemicals for textiles, paper, and leather, beginning in 1929. In 1939, the company began producing agricultural chemicals.
The psychedelic effects of lysergic acid diethylamide (LSD) were discovered at the Sandoz laboratories in 1943 by Arthur Stoll and Albert Hofmann. Sandoz began clinical trials and marketed the substance, from 1947 through the mid-1960s, under the name Delysid as a psychiatric drug, thought useful for treating a wide variety of mental ailments, ranging from alcoholism to sexual deviancy. Sandoz suggested in its marketing literature that psychiatrists take LSD themselves, to gain a better subjective understanding of the schizophrenic experience, and many did, as did other scientific researchers. The Sandoz product received mass publicity as early as 1954, in a Time magazine feature. Research on LSD peaked in the 1950s and early 1960s. The CIA purchased quantities of LSD from Sandoz for use in its illegal human experimentation program known as MKUltra. Sandoz withdrew the drug from the market in 1965. The drug became a cultural novelty of the 1960s after psychologist Timothy Leary at Harvard University began to promote its use for recreational and spiritual experiences among the general public.
Sandoz opened its first foreign offices in 1964. In 1967, Sandoz merged with Wander AG (known for Ovomaltine and Isostar). Sandoz acquired the companies Delmark, Wasabröd (a Swedish manufacturer of crisp bread), and Gerber Products Company (a baby food company). On 1 November 1986, a fire broke out in a production plant storage room, which led to the Sandoz chemical spill and a large amount of pesticide being released into the upper Rhine river. This exposure killed many fish and other aquatic life. In 1995, Sandoz spun off its specialty chemicals business to form Clariant. In 1997, Clariant merged with the specialty chemicals business that was spun off from Hoechst AG in Germany.
Merger
In 1996, Ciba-Geigy merged with Sandoz, with the pharmaceutical and agrochemical divisions of both staying together to form Novartis. Other Ciba-Geigy and Sandoz businesses were spun off as independent companies, notably Ciba Specialty Chemicals. Sandoz's Master Builders Technologies, a producer of chemicals for the construction industry, was sold off to SKW Trostberg A.G., a subsidiary of the German energy company VIAG, while its North American corn herbicide business became part of the German chemical maker BASF.
Post-merger
In 1998, the company entered into a biotechnology licensing agreement with the University of California at Berkeley Department of Plant and Microbial Biology. Critics of the agreement expressed concern over prospects that the agreement would diminish academic objectivity, or lead to the commercialization of genetically modified plants. The agreement expired in 2003.
2000–2010
In 2000, Novartis and AstraZeneca combined their agrobusiness divisions to create a new company, Syngenta.
In 2003, Novartis organized all its generics businesses into one division, and merged some of its subsidiaries into one company, reusing the predecessor brand name of Sandoz.
In 2005, Novartis expanded its subsidiary Sandoz significantly through the US$8.29 billion acquisition of Hexal, one of Germany's leading generic drug companies, and Eon Labs, a fast-growing United States generic pharmaceutical company.
In 2006, Novartis acquired the California-based Chiron Corporation. Chiron had been divided into three units: Chiron Vaccines, Chiron Blood Testing, and Chiron BioPharmaceuticals. The biopharmaceutical unit was integrated into Novartis Pharmaceuticals, while the vaccines and blood testing units were made into a new Novartis Vaccines and Diagnostics division. Also in 2006, Sandoz became the first company to have a biosimilar drug approved in Europe with its recombinant human growth hormone drug.
In 2007, Novartis sold the Gerber Products Company to Nestlé as part of its continuing effort to shed old Sandoz and Ciba-Geigy businesses and focus on healthcare.
In 2009, Novartis reached an agreement to acquire an 85 percent stake in the Chinese vaccines company Zhejiang Tianyuan Bio-Pharmaceutical Co., Ltd. as part of a strategic initiative to build a vaccines industry leader in China and expand the group's limited presence in this fast-growing market segment. The proposed acquisition required government and regulatory approvals in China.
In 2010, Novartis offered to pay US$39.3 billion to fully acquire Alcon, the world's largest eye-care company, including a majority stake held by Nestlé. Novartis had bought 25 percent of Alcon in 2008. Novartis created a new division and called it Alcon, under which it placed its CIBA VISION subsidiary and Novartis Ophthalmics, which became the second-largest division of Novartis. The total cost for Alcon amounted to $60 billion.
2011–present
In 2011, Novartis acquired the medical laboratory diagnostics company Genoptix to "serve as a strong foundation for our (Novartis') individualized treatment programs".
In 2012, the company cut approximately 2,000 positions in the United States, primarily in sales, in response to anticipated revenue downturns from the hypertension drug Diovan, which was losing patent protection, and the realization that the anticipated successor to Diovan, Rasilez, was failing in clinical trials. The 2012 personnel reductions followed ~2000 cut positions in Switzerland and the United States in 2011, ~1400 cut positions in the United States in 2010, and a reduction of "thousands" and several site closures in previous years. Also in 2012, Novartis became the biggest manufacturer of generic skin care medicine, after agreeing to buy Fougera Pharmaceuticals for $1.525 billion in cash.
In 2013, the Indian Supreme Court issued a decision rejecting Novartis' patent application in India on the final form of Gleevec, Novartis's cancer drug; the case caused great controversy. In 2013, Novartis was sued again by the US government, this time for allegedly bribing doctors for a decade so that their patients would be steered towards the company's drugs.
In January 2014, Novartis announced plans to cut 500 jobs from its pharmaceuticals division. In February 2014, Novartis announced that it acquired CoStim Pharmaceuticals.
In May 2014, Novartis purchased the rights to market Ophthotech's Fovista (an anti-PDGF aptamer, also being investigated for use in combination with anti-VEGF treatments) outside the U.S. for up to $1 billion. Novartis acquired exclusive rights to market the eye drug outside of the United States, while Ophthotech retained U.S. marketing rights. The company agreed to pay Ophthotech $200 million upfront, and $130 million in milestone payments relating to Phase III trials. Ophthotech was also eligible to receive up to $300 million dependent upon future marketing approval milestones outside of America and up to $400 million relating to sales milestones. In September 2014, Ophthotech received its first $50 million phase III trial milestone payment from Novartis. In April 2014, Novartis announced that it would acquire GlaxoSmithKline's cancer drug business for $16 billion as well as selling its vaccines business to GlaxoSmithKline for $7.1 billion. In August 2014 Genetic Engineering & Biotechnology News reported that Novartis had acquired a 15 percent stake in Gamida Cell for $35 million, with the option to purchase the whole company for approximately $165 million. In October 2014, Novartis announced its intention to sell its influenza vaccine business (inclusive of its development pipeline), subject to regulatory approval, to CSL for $275 million.
In March 2015, the company announced BioPharma had completed its acquisition of two Phase III cancer-drug candidates; the MEK inhibitor binimetinib (MEK 162) and the BRAF inhibitor encorafenib (LGX818), for $85 million. In addition, the company sold its RNAi portfolio to Arrowhead Research for $10 million and $25 million in stock. In June, the company announced it would acquire Spinifex Pharmaceuticals for more than $200 million. In August, the company acquired the remaining rights to the CD20 monoclonal antibody Ofatumumab from GlaxoSmithKline for up to $1 billion. In October the company acquired Admune Therapeutics for an undisclosed sum, as well as licensing PBF-509, an adenosine A2A receptor antagonist which is in Phase I clinical trials for non-small cell lung cancer, from Palobiofarma.
In November 2016, the company announced it would acquire Selexys Pharmaceuticals for $665 million. In December, the company acquired Encore Vision, gaining the company's principal compound, EV06, a first-in-class topical therapy for presbyopia. Also in December, Novartis acquired Ziarco Group Limited, bolstering its presence in eczema treatments.
In late October 2017, Reuters announced that Novartis would acquire Advanced Accelerator Applications for $3.9 billion, paying $41 per ordinary share and $82 per American depositary share representing a 47 percent premium.
In March 2018, GlaxoSmithKline announced that it had reached an agreement with Novartis to acquire Novartis' 36.5 percent stake in their Consumer Healthcare Joint Venture for $13 billion (£9.2 billion). In April of the same year, the business utilised some of the proceeds from the aforementioned GlaxoSmithKline deal to acquire Avexis for $218 per share, or $8.7 billion in total, gaining the lead compound AVXS-101 used to treat spinal muscular atrophy. In August 2018, Novartis signed a deal with Laekna, a Shanghai-based pharmaceutical company, for its two clinical-stage cancer drugs. Novartis gave Laekna the exclusive international rights for the drugs, which are oral pan-Akt kinase inhibitors, namely afuresertib (ASB138) and uprosertib (UPB795). In mid-October, the company announced it would acquire Endocyte Inc for $2.1 billion ($24 per share), merging it with a newly created subsidiary. Endocyte was expected to bolster Novartis' offering in its radiopharmaceuticals business, with Endocyte's first-in-class candidate 177Lu-PSMA-617 being targeted against metastatic castration-resistant prostate cancer. In late December, the company announced it would acquire the France-based contract manufacturer CellforCure from LFB, boosting its capacity to produce cell and gene therapies.
On 9 April 2019, Novartis announced that it had completed the spin-off of Alcon as a separate commercial entity. Alcon was listed on the SIX exchange in Switzerland and the NYSE in the U.S. Novartis announced during late 2019 a five-year artificial intelligence "alliance" with Microsoft. The companies aim to create applications for "Microsoft's AI capabilities", in turn improving Novartis's drug development processes. Microsoft seeks to "test AI products it is already working on in 'real-life' situations". The deal will pursue solutions for "organizing and using" data generated from Novartis' laboratory experiments, clinical trials, and manufacturing plants. It will also look at improving the manufacturing of chimeric antigen receptor T cells (CAR T cells). Finally, the deal "will also apply AI to generative chemistry to enhance drug design". In November 2019, Sandoz announced it would acquire the Japanese business of Aspen Global Inc for €300 million (around $330 million), boosting the business's presence in Asia. In late November 2019, the business announced it would acquire The Medicines Company for $85 per share in order to acquire, amongst other assets, the cholesterol-lowering therapy inclisiran.
In April 2020, the company announced it would acquire Amblyotech.
In September 2020, Novartis was fined €385 million by the French competition authority on accusations of abusive practices to preserve sales of Lucentis over a cheaper drug. Also in September, BioNTech leased a large production facility from Novartis to help meet demand for its coronavirus vaccine in Europe and to supply China.
In July 2020, Novartis agreed to pay $678 million to settle allegations that the company violated the False Claims Act and Anti-Kickback Statute by paying physicians to induce them to prescribe certain of the company's drugs. Novartis allegedly spent hundreds of millions of dollars on fraudulent speaker programs that served as a means to bribe doctors with cash payments and other extravagant rewards. Many of these speaking programs were allegedly nothing more than social gatherings at expensive restaurants, with limited or no discussion about the Novartis drugs.
In October 2020, Novartis announced it would acquire Vedere Bio for $280 million, boosting the business's cell and gene therapy offerings.
In October 2020, as part of a joint venture to develop therapeutic drugs to combat COVID-19, Novartis bought 6% of all shares outstanding in Swiss DARPin research company Molecular Partners AG at CHF 23 per share.
In December 2020, Novartis announced it would acquire Cadent Therapeutics for up to $770 million, gaining full rights to CAD-9303 (a NMDAr positive allosteric modulator), MIJ-821 (a NMDAr negative allosteric modulator) and CAD-1883 a clinical-stage SK channel positive allosteric modulator.
In September 2021, the company announced it would acquire gene-therapy business, Arctos Medical, broadening its optogenetics range. In December, Novartis announced it would purchase Gyroscope Therapeutics from health care investment company, Syncona Ltd, for up to $1.5 billion.
In February 2022, New York City-based biotechnology company Cambrian Biopharma announced it had licensed rights to mTOR inhibitor programs from Novartis. As part of the deal, Cambrian was setting up a subsidiary called Tornado Therapeutics.
In August 2022, the company announced its plan to spin off Sandoz generic drugs unit to form a publicly traded business as part of a restructuring. With the unit having generated US$9.69 billion in 2021, the spin-off would create the biggest generic drugs company in Europe by sales.
In June 2023, Novartis announced it would acquire Chinook Therapeutics and its drug pipeline for up to $3.5 billion.
In July 2023, Novartis acquired DTx Pharma, a developer of technology for delivering RNA-based therapies, for $500 million upfront and an additional $500 million subject to reaching certain targets. Also in June 2023, Novartis announced it would sell Xiidra to Bausch & Lomb for $1.75 billion and receive an additional $750 million linked to future sales of Xiidra as well as two pipeline assets.
In September 2023, Novartis announced that the spin-off had been approved by its shareholders and that it would be completed by the next month, resulting in Novartis shareholders receiving one Sandoz share for every five Novartis shares. Sandoz was to be listed on the SIX Swiss Exchange with a market capitalization between $18 billion and $25 billion.
On 4 October 2023, Novartis completed the spin-off of Sandoz as a stand-alone company.
In November 2023, Legend Biotech and Novartis signed an out-license deal to develop and manufacture Legend's chimeric antigen receptor (CAR-T) therapies, that go after delta-like ligand protein 3 (DLL3) including large cell neuroendocrine carcinoma candidate LB2102 for $100 million upfront, and Legend Biotech will be eligible to receive up to $1.01 billion in clinical, regulatory, and commercial milestone payments and tiered royalties.
In December 2023, Novartis sold its 15 ophthalmology drugs to JB Chemicals for ₹1,089 crore ($116 million).
In 2023, the World Intellectual Property Organization (WIPO)'s Madrid Yearly Review ranked Novartis 4th in the world by number of trademark applications filed under the Madrid System, with 110 applications submitted during 2023.
In February 2024, Novartis announced it would acquire the German biotech firm MorphoSys AG for €2.7bn. Germany's antitrust regulator, the Federal Cartel Office, approved the takeover in March 2024.
In May 2024, Novartis announced it would acquire Mariana Oncology for $1 billion upfront and up to $750 million more if certain milestones were met.
In July 2024, Novartis entered into a strategic collaboration with Dren Bio to develop therapeutic bispecific antibodies for cancer, with the deal worth up to $3 billion.
In November 2024, Novartis and Ratio Therapeutics entered into a worldwide licence and collaboration agreement worth $745m to advance a somatostatin receptor 2 (SSTR2)-targeting radiotherapeutic candidate for cancer.
Acquisition history
Novartis
Novartis
Ciba-Geigy
J. R. Geigy Ltd
CIBA
Sandoz
Kern and Sandoz Chemistry Firm
Wander AG
Lek d.d. (Slovenia)
Aspen Global inc (Japanese business)
Hexal
Eon Labs
Chiron Corporation
Matrix Pharmaceuticals Inc
PowderJect
PathoGenesis
Cetus Corporation
Cetus Oncology
Biocine Company
Chiron Diagnostics
Chiron Intraoptics
Chiron Technologies
Adatomed GmbH
Zhejiang Tianyuan Bio-Pharmaceutical Co., Ltd
Alcon
Texas Pharmacal Company
Genoptix
Fougera Pharmaceuticals
CoStim Pharmaceuticals
GlaxoSmithKline (Cancer drug division)
Spinifex Pharmaceuticals
Admune Therapeutic
Selexys Pharmaceuticals
Ziarco Group Limited
Advanced Accelerator Applications
AveXis
Endocyte
CellforCure
The Medicines Company
Amblyotech
Vedere Bio
Cadent Therapeutics
Luc Therapeutics
Ataxion Therapeutics
Arctos Medical
Gyroscope Therapeutics
Chinook Therapeutics
DTx Pharma
MorphoSys
Mariana Oncology
Corporate structure
Novartis AG is a publicly traded Swiss holding company that operates through the Novartis Group and owns, directly or indirectly, all companies worldwide that operate as subsidiaries of the Novartis Group.
Novartis's businesses are divided into two operating divisions: Innovative Medicines and Sandoz (generics). The eye-care division Alcon was spun off into an independent company in April 2019. In August 2022, Novartis announced plans to spin off Sandoz as part of restructuring. The spin-off was completed in October 2023.
The Innovative Medicines business is made up of two commercial units: Innovative Medicines International and Innovative Medicines US. The two business units combine the pharmaceutical and oncology divisions and commercially focus on global and US market respectively.
Novartis operates directly through subsidiaries, each of which falls under one of the divisions and is categorized by Novartis as fulfilling one or more of the following functions: Holding/Finance, Sales, Production, and Research.
Novartis AG also held 33.3 percent of the shares of Roche until 2022; however, it did not exercise control over Roche. Novartis also has two significant license agreements with Genentech, a Roche subsidiary: one for Lucentis and the other for Xolair.
In 2014, Novartis established a center in Hyderabad, India, in order to offshore several of its R&D, clinical development, medical writing and administrative functions. The center supports the drug major's operations in the pharmaceuticals (Novartis), eye care (Alcon), and generic drugs segments (Sandoz).
Place in its market segments
Novartis is the world's largest company in the life sciences and agribusiness markets. It was also the second-largest pharmaceutical company by market capitalization in 2019.
Alcon: At the time Novartis bought Alcon, they had annual sales of $6.5 billion and a net income of $2 billion. In April 2019, Novartis completed the spin-off of Alcon as a separate commercial entity.
Sandoz: Sandoz has been recognized as the world's second-largest generic drug company. Sandoz's biosimilars lead its field, having received the first biosimilar approvals in the EU. In 2018, Sandoz reported US$9.9 billion in net sales. In August 2022, Novartis announced plans to spin off Sandoz by the second half of 2023.
Vaccines and Diagnostics Division: In 2013, Novartis announced it was considering selling the vaccines and diagnostics division off. This sale was completed in late 2015, and the division was integrated into CSL's BioCSL operation, with the combined entity trading as Seqirus. In 2018, Novartis sold its stake in the consumer healthcare joint venture to GlaxoSmithKline for US$13.0 billion.
Consumer: Novartis is not a leader in the over-the-counter or animal health segments; its leading OTC brands are Excedrin and Theraflu, but sales have been slowed by problems at its key US manufacturing plant.
In 2018, Novartis ranked second on the Access to Medicine Index, which "ranks companies on how readily they make their products available to the world's poor."
Finance
For the fiscal year 2022, Novartis reported earnings of US$6.955 billion, a decrease of 71 percent from the previous fiscal year, with an annual revenue of US$50.545 billion. Novartis shares traded at over $80.56 per share, and its market capitalization was valued at $198.34 billion as of 31 January 2023.
Research
The company's global research operations, called "Novartis Institutes for BioMedical Research (NIBR)" have their global headquarters in Cambridge, Massachusetts, United States. Two research institutes reside within NIBR that focus on diseases in the developing world: Novartis Institute for Tropical Diseases, which works on tuberculosis, dengue, and malaria, and Novartis Vaccines Institute for Global Health, which works on salmonella typhi (typhoid fever) and shigella.
Novartis is also involved in publicly funded collaborative research projects, with other industrial and academic partners. One example in the area of non-clinical safety assessment is the InnoMed PredTox project. The company is expanding its activities in joint research projects within the framework of the Innovative Medicines Initiative of EFPIA and the European Commission.
Novartis is working with Science 37 to allow patients to make video-based telemedicine visits instead of travelling physically to clinics. It is planning ten clinical trials over three years using mobile technology to help free patients from burdensome hospital trips.
Products
Pharmaceuticals (66 in total as of 28 April 2023)
Consumer health
Benefiber
Bialcol Alcohol
Buckley's cold and cough formula
Bufferin
ChestEze
Comtrex cold and cough
Denavir/Vectavir
Desenex
Doan's pain relief
Ex-Lax
Excedrin
Fenistil
Gas-X
Habitrol
Keri skin care
Lamisil foot care
Lipactin herpes symptomatic treatment
Maalox
Nicotinell
No-doz
Quinvaxem (Pentavalent vaccine)
Otrivine
Prevacid 24HR
Savlon
Tavist
Theraflu
Vagistat
Tixylix
Voltaren
In January 2009, the United States Department of Health and Human Services awarded Novartis a $486 million contract for construction of the first US plant to produce cell-based influenza vaccine, to be located in Holly Springs, North Carolina. The stated goal of this program is the capability of producing 150,000,000 doses of pandemic vaccine within six months of declaring a flu pandemic.
In April 2014, Novartis divested its consumer health section with $3.5 billion worth of assets into a new joint venture with GlaxoSmithKline, named GSK Consumer Healthcare, of which Novartis will hold a 36.5% stake. In March 2018, GSK announced that it has reached an agreement with Novartis to acquire Novartis' 36.5% stake in their Consumer Healthcare Joint Venture for $13 billion (£9.2 billion).
Animal health
Pet care
Interceptor (Milbemycin oxime), oral worm control product
Sentinel Flavor Tabs (Milbemycin oxime, Lufenuron), oral flea control product
Deramaxx (Deracoxib), oral treatment for pain and inflammation from osteoarthritis in dogs
Capstar (Nitenpyram), oral tablet for flea control
Milbemax (Milbemycin oxime, Praziquantel), oral worm treatment
Program (Lufenuron), oral tablet for flea control
Livestock
Acatalk Duostar (Fluazuron, Ivermectin), tick control for cattle
CLiK (Dicyclanil), blowfly control for sheep
Denagard (Tiamulin), antibiotic for the treatment of swine dysentery associated with Brachyspira (formerly Serpulina or Treponema)
Fasinex (Triclabendazole), oral drench for cattle that is used for the treatment and control of all three stages of liver fluke
ViraShield, For use in healthy cattle, including pregnant cows and heifers, as an aid in the prevention of disease caused by infectious bovine rhinotracheitis (IBR), bovine virus diarrhoea (BVD Type 1 and BVD Type 2), parainfluenza Type 3 (PI3), and bovine respiratory syncytial (BRSV) viruses
Bioprotection (insect and rodent control)
Actara (Thiamethoxam)
Atrazine (Atrazine)
Larvadex (Cyromazine)
Neporex (Cyromazine)
Oxyfly (Lambda-cyhalothrin)
Virusnip (Potassium monopersulfate)
Controversies and criticism
Challenge to India's patent laws
Novartis fought a seven-year, controversial battle to patent Gleevec in India, and took the case all the way to the Indian Supreme Court, where the patent application was finally rejected. The patent application at the center of the case was filed by Novartis in India in 1998, after India had agreed to enter the World Trade Organization and to abide by worldwide intellectual property standards under the TRIPS agreement. As part of this agreement, India made changes to its patent law, the biggest of which was that product patents, not allowed prior to these changes, became allowed afterwards, albeit with restrictions. These changes came into effect in 2005, so Novartis' patent application waited in a "mailbox" with others until then, under procedures that India instituted to manage the transition. India also passed certain amendments to its patent law in 2005, just before the laws came into effect, which played a key role in the rejection of the patent application.
The patent application claimed the final form of Gleevec (the beta crystalline form of imatinib mesylate). In 1993 before India allowed patents on products, Novartis had patented imatinib, with salts vaguely specified, in many countries but could not patent it in India. The key differences between the two patent applications were that the 1998 patent application specified the counterion (Gleevec is a specific salt—imatinib mesylate) while the 1993 patent application did not claim any specific salts nor did it mention mesylate, and the 1998 patent application specified the solid form of Gleevec—the way the individual molecules are packed together into a solid when the drug itself is manufactured (this is separate from processes by which the drug itself is formulated into pills or capsules)—while the 1993 patent application did not. The solid form of imatinib mesylate in Gleevec is beta crystalline.
As provided under the TRIPS agreement, Novartis applied for Exclusive Marketing Rights (EMR) for Gleevec from the Indian Patent Office and the EMR was granted in November 2003. Novartis made use of the EMR to obtain orders against some generic manufacturers who had already launched Gleevec in India. Novartis set the price of Gleevec at US$2666 per patient per month; generic companies were selling their versions at US$177 to 266 per patient per month. Novartis also initiated a program to assist patients who could not afford its version of the drug, concurrent with its product launch.
When examination of Novartis' patent application began in 2005, it came under immediate attack from oppositions initiated by generic companies that were already selling Gleevec in India and by advocacy groups. The application was rejected by the patent office and by an appeal board. The key basis for the rejection was the part of Indian patent law that was created by amendment in 2005, describing the patentability of new uses for known drugs and modifications of known drugs. That section, Paragraph 3d, specified that such inventions are patentable only if "they differ significantly in properties with regard to efficacy." At one point, Novartis went to court to try to invalidate Paragraph 3d; it argued that the provision was unconstitutionally vague and that it violated TRIPS. Novartis lost that case and did not appeal. Novartis did appeal the rejection by the patent office to India's Supreme Court, which took the case.
The Supreme Court case hinged on the interpretation of Paragraph 3d. The Supreme Court decided that the substance that Novartis sought to patent was indeed a modification of a known drug (the raw form of imatinib, which was publicly disclosed in the 1993 patent application and in scientific articles), that Novartis did not present evidence of a difference in therapeutic efficacy between the final form of Gleevec and the raw form of imatinib, and that therefore the patent application was properly rejected by the patent office and lower courts.
Although the court ruled narrowly, and took care to note that the subject application was filed during a time of transition in Indian patent law, the decision generated widespread global news coverage and reignited debates on balancing public good with monopolistic pricing, innovation with affordability etc.
Had Novartis won and had its patent issued, it could not have prevented generics companies in India from selling generic Gleevec, but it could have obliged them to pay a reasonable royalty under a grandfather clause included in India's patent law.
In reaction to the decision, Ranjit Shahani, vice-chairman and managing director of Novartis India Ltd was quoted as saying "This ruling is a setback for patients that will hinder medical progress for diseases without effective treatment options." He also said that companies like Novartis would invest less money in research in India as a result of the ruling. Novartis also emphasised that it continues to be committed to good access to its drugs; according to Novartis, by 2013, "95% of patients in India—roughly 16,000 people—receive Glivec free of charge... and it has provided more than $1.7 billion worth of Glivec to Indian patients in its support program since it was started...."
Sexual discrimination
On 17 May 2010, a jury in the United States District Court for the Southern District of New York awarded $3,367,250 in compensatory damages against Novartis, finding that the company had committed sexual discrimination against twelve female sales representatives and entry-level managers since 2002, in matters of pay, promotion, and treatment after learning that the employees were pregnant. Two months later the company settled with the remaining plaintiffs for $152.5 million plus attorney fees.
Marketing violations
In September 2008, the US Food and Drug Administration (FDA) sent a notice to Novartis Pharmaceuticals regarding its advertising of Focalin XR, an ADHD drug, in which the company overstated its efficacy while marketing to the public and medical professionals.
In 2005, federal prosecutors opened an investigation into Novartis' marketing of several drugs: Trileptal, an antiseizure drug; three drugs for heart conditions—Diovan (the company's top-selling product), Exforge, and Tekturna; Sandostatin, a drug to treat a growth hormone disorder; and Zelnorm, a drug for irritable bowel syndrome. In September 2010, Novartis agreed to pay US$422.5 million in criminal and civil claims and to enter into a corporate integrity agreement with the US Office of the Inspector General. According to The New York Times, "Federal prosecutors accused Novartis of paying illegal kickbacks to health care professionals through speaker programs, advisory boards, entertainment, travel and meals. But aside from pleading guilty to one misdemeanor charge of mislabeling in an agreement that Novartis announced in February, the company denied wrongdoing." In the same New York Times article, Frank Lichtenberg, a Columbia professor who receives pharmaceutical financing for research on innovation in the industry, said off-label prescribing was encouraged by the American Medical Association and paid for by insurers, but off-label marketing was clearly illegal. "So it's not surprising that they would settle because they don't have a legal leg to stand on."
In April 2013, federal prosecutors filed two lawsuits against Novartis under the False Claims Act for off-label marketing and kickbacks; in both suits, prosecutors sought treble damages. The first suit "accused Novartis of inducing pharmacies to switch thousands of kidney transplant patients to its immunosuppressant drug Myfortic in exchange for kickbacks disguised as rebates and discounts". In the second, the Justice Department joined a qui tam, or whistleblower, lawsuit brought by a former sales rep over off-label marketing of three drugs: Lotrel and Valturna (both hypertension drugs), and the diabetes drug Starlix. Twenty-seven states, the District of Columbia, and the cities of Chicago and New York also joined.
Avastin
Outside the US, Novartis markets the drug ranibizumab (trade name Lucentis), which is a monoclonal antibody fragment derived from the same parent mouse antibody as bevacizumab (Avastin). Both Avastin and Lucentis were created by Genentech which is owned by Roche; Roche markets Avastin worldwide, and also markets Lucentis in the US. Lucentis has been approved worldwide as a treatment for wet macular degeneration and other retinal disorders; Avastin is used to treat certain cancers. Because the price of Lucentis is much higher than Avastin, many ophthalmologists began having compounding pharmacies formulate Avastin for administration to the eye and began treating their patients with Avastin. In 2011, four trusts of the National Health Service in the UK issued policies approving use and payment for administering Avastin for macular degeneration, in order to save money, even though Avastin had not been approved for that indication. In April 2012, after failing to persuade the trusts that it was uncertain whether Avastin was as safe and effective as Lucentis, and in order to retain the market for Lucentis, Novartis announced it would sue the trusts. However, in July Novartis offered significant discounts (kept confidential) to the trusts, and the trusts agreed to change their policy, and in November, Novartis dropped the litigation.
Valsartan
In the summer of 2013, two Japanese universities retracted several publications of clinical trials that purported to show that Valsartan (branded as Diovan) had cardiovascular benefits, when it was found that the statistical analysis had been manipulated, and that a Novartis employee had participated in the statistical analysis but had disclosed only his affiliation with Osaka City University, where he was a lecturer, and not his relationship with Novartis. As a result, several Japanese hospitals stopped using the drug, and media outlets ran reports on the scandal in Japan. In January 2014, Japan's Health Ministry filed a criminal complaint with the Tokyo public prosecutor's office against Novartis and an unspecified number of employees, for allegedly misleading consumers through advertisements that used the research to support the benefits of Diovan. On 1 July 2014, the prosecutor's office announced it was formally charging the company and one of its employees.
Corruption
In January 2018, Novartis began being investigated by US and Greek authorities for allegedly bribing Greek public officials in the 2006–2015 period, in a scheme which included two former prime ministers, several former health ministers, many high-ranking party members of the Nea Dimokratia and PASOK ruling parties, as well as bankers. The manager of Novartis' Greek branch was prohibited from leaving the country. The minister's deputy described the allegations as "the biggest scandal since the creation of the Greek state", which caused "annual state expenditure on medicine to explode". Most of the ministers involved in the scandal have denied the allegations and sought to paint the case as "political targeting" and "fabrication" by the Syriza opposition party. However, the Greek Judicial Council ruled that the scandal was real. Besides bribery involving artificial increases in the price of several medicines, the case also involves money laundering, with suspicions that "illegal funds of more than four billion euros ($4.2 billion)" were involved.
In June 2020, Novartis reached settlements with the US Department of Justice (DOJ) and the US Securities and Exchange Commission (SEC) resolving all Foreign Corrupt Practices Act (FCPA) investigations into historical conduct by the company and its subsidiaries. As part of the resolutions, Novartis and some of its current and former subsidiaries would pay US$233.9 million to the DOJ and US$112.8 million to the SEC.
Michael Cohen
Novartis paid $1.2 million to Essential Consultants, an entity owned by Michael Cohen, following the 2017 inauguration of Donald Trump. Cohen was paid monthly, with each payment just under $100,000. Novartis claims it paid Cohen to help it understand and influence the new administration's approach to drug pricing and regulation.
In July 2018, the US Senate committee report "White House Access for Sale" revealed that Novartis Ag's relationship with Cohen was "longer and more detailed". Novartis initially stated that the relationship ceased a month after entering the US$1.2 million contract with Cohen's consulting firm since the consultants were not able to provide the information the pharmaceutical company needed. Later, it became clear, however, that then-CEO Joseph Jimenez and Cohen communicated via email multiple times during 2017, which included ideas to lower drug prices to be discussed with the president. According to the report, several of the ideas appeared later in Trump's drug pricing plan, released in early 2018, in which pharmaceutical companies were protected from reduced revenues.
AveXis data integrity
Having already received approval for Zolgensma in May 2019, on 28 June AveXis (a Novartis company) voluntarily disclosed to the FDA that some data previously submitted to the agency as part of the Biologics License Application (BLA) package was inaccurate. Specifically, the data manipulation related to an in vivo murine potency assay used in the early development of the product, but the issue raised by the FDA and the wider community is that AveXis was aware of the data manipulation as early as 14 March 2019, almost two months before the BLA was approved. To compound the problem, in early August it emerged that a senior manager had sold almost $1 million worth of stock immediately before the FDA probe became public on 6 August, but after the company had informed the FDA of the problem. As of September 2019, the FDA was still preparing its response to the scandal.
Philanthropy
Fight against leprosy
Novartis has been committed for decades to eliminating leprosy, and has provided free multidrug therapy to all endemic countries since 2000.
See also
List of pharmaceutical companies
Pharmaceutical industry in Switzerland
References
Further reading
External links
Biotechnology companies established in 1996
Biotechnology companies of Switzerland
Companies listed on the SIX Swiss Exchange
Companies listed on the New York Stock Exchange
Companies in the Swiss Market Index
Eyewear companies of Switzerland
Life sciences industry
Multinational companies headquartered in Switzerland
Pharmaceutical companies established in 1996
Pharmaceutical companies of Switzerland
Swiss brands
Swiss companies established in 1996
Vaccine producers
Veterinary medicine companies
Companies in the Dow Jones Global Titans 50
Companies in the S&P Europe 350 Dividend Aristocrats | Novartis | Biology | 9,851 |
36,597,629 | https://en.wikipedia.org/wiki/Genus%20of%20a%20quadratic%20form | In mathematics, the genus is a classification of quadratic forms and lattices over the ring of integers. An integral quadratic form is a quadratic form on Z^n, or equivalently a free Z-module of finite rank. Two such forms are in the same genus if they are equivalent over the local rings Z_p for each prime p and also equivalent over R.
Equivalent forms are in the same genus, but the converse does not hold. For example, x^2 + 82y^2 and 2x^2 + 41y^2 are in the same genus but not equivalent over Z. Forms in the same genus have equal discriminant and hence there are only finitely many equivalence classes in a genus.
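A quick numerical check of this example (a minimal Python sketch, not part of the original article; it verifies only the failure of equivalence over Z, using the fact that Z-equivalent forms represent exactly the same integers — the genus condition itself would require the local comparisons described above):

```python
# Both forms have the same discriminant b^2 - 4ac, yet they represent
# different sets of integers, so they cannot be equivalent over Z.

def discriminant(a, b, c):
    """Discriminant of the binary quadratic form ax^2 + bxy + cy^2."""
    return b * b - 4 * a * c

def represents(a, b, c, n, bound=20):
    """True if ax^2 + bxy + cy^2 = n for some integers |x|, |y| <= bound."""
    return any(a * x * x + b * x * y + c * y * y == n
               for x in range(-bound, bound + 1)
               for y in range(-bound, bound + 1))

f = (1, 0, 82)   # x^2 + 82y^2
g = (2, 0, 41)   # 2x^2 + 41y^2

print(discriminant(*f), discriminant(*g))   # -328 -328
print(represents(*f, 1))                    # True  (x = 1, y = 0)
print(represents(*g, 1))                    # False -> not equivalent over Z
```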
The Smith–Minkowski–Siegel mass formula gives the weight or mass of the quadratic forms in a genus, the count of equivalence classes weighted by the reciprocals of the orders of their automorphism groups.
Binary quadratic forms
For binary quadratic forms there is a group structure on the set C of equivalence classes of forms with given discriminant. The genera are defined by the generic characters. The principal genus, the genus containing the principal form, is precisely the subgroup C^2 and the genera are the cosets of C^2: so in this case all genera contain the same number of classes of forms.
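To make this concrete, the sketch below (not part of the original article) enumerates the reduced positive-definite forms of discriminant −328, the discriminant of the two forms in the example above; each reduced form corresponds to exactly one equivalence class, so the list gives the elements of C for that discriminant:

```python
from math import isqrt

def reduced_forms(D):
    """Reduced positive-definite forms (a, b, c) with b^2 - 4ac = D < 0."""
    assert D < 0 and D % 4 in (0, 1)
    forms = []
    for a in range(1, isqrt(-D // 3) + 1):   # reduced forms have a <= sqrt(|D|/3)
        for b in range(-a, a + 1):
            if (b - D) % 2:                  # b must have the same parity as D
                continue
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a:
                continue                     # reduction requires |b| <= a <= c
            if b < 0 and (-b == a or a == c):
                continue                     # take b >= 0 in the boundary cases
            forms.append((a, b, c))
    return forms

print(reduced_forms(-328))
# [(1, 0, 82), (2, 0, 41), (7, -6, 13), (7, 6, 13)] -> four classes
```

The first two entries are the forms from the example above: distinct classes that nevertheless lie in the same genus.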
See also
Spinor genus
References
External links
Quadratic forms | Genus of a quadratic form | Mathematics | 279 |
31,296 | https://en.wikipedia.org/wiki/Tachyon | A tachyon or tachyonic particle is a hypothetical particle that always travels faster than light. Physicists believe that faster-than-light particles cannot exist because they are inconsistent with the known laws of physics. If such particles did exist, they could be used to send signals faster than light and into the past. According to the theory of relativity, this would violate causality, leading to logical paradoxes such as the grandfather paradox. Tachyons would exhibit the unusual property of increasing in speed as their energy decreases, and would require infinite energy to slow to the speed of light. No verifiable experimental evidence for the existence of such particles has been found.
In the 1967 paper that coined the term, Gerald Feinberg proposed that tachyonic particles could be made from excitations of a quantum field with imaginary mass. However, it was soon realized that Feinberg's model did not in fact allow for superluminal (faster-than-light) particles or signals and that tachyonic fields merely give rise to instabilities, not causality violations. The term tachyonic field refers to imaginary mass fields rather than to faster-than-light particles.
Etymology
The term tachyon comes from the Greek ταχύς, tachus, meaning swift. The complementary particle types are called luxons (which always move at the speed of light) and bradyons (which always move slower than light); both of these particle types are known to exist.
History
The first hypothesis regarding faster-than-light particles is sometimes attributed to physicist Arnold Sommerfeld, who, in 1904, named them "meta-particles". The possibility of the existence of faster-than-light particles was also proposed in 1923.
The term tachyon was coined by Gerald Feinberg in a 1967 paper titled "Possibility of faster-than-light particles". He had been inspired by the science-fiction story "Beep" by James Blish. Feinberg studied the kinematics of such particles according to special relativity. In his paper, he also introduced fields with imaginary mass (now also referred to as tachyons) in an attempt to understand the microphysical origin such particles might have.
Oleksa-Myron Bilaniuk, Vijay Deshpande and E. C. George Sudarshan had discussed such particles earlier, in their 1962 paper on the topic, and again in 1969.
In September 2011, it was reported that neutrinos had traveled faster than the speed of light; however, later updates from CERN on the OPERA experiment indicated that the faster-than-light readings were due to a faulty element of the experiment's fibre optic timing system.
Special relativity
In special relativity, a faster-than-light particle would have spacelike four-momentum, unlike ordinary particles that have time-like four-momentum. While some theories suggest the mass of tachyons is imaginary, modern formulations often consider their mass to be real, with redefined formulas for momentum and energy. Additionally, since tachyons are confined to the spacelike portion of the energy–momentum graph, they cannot slow down to subluminal (slower-than-light) speeds.
Mass
In a Lorentz invariant theory, the same formulas that apply to ordinary slower-than-light particles (sometimes called bradyons in discussions of tachyons) must also apply to tachyons. In particular, the energy–momentum relation:

$E^2 = p^2 c^2 + m^2 c^4$

(where p is the relativistic momentum of the bradyon and m is its rest mass) should still apply, along with the formula for the total energy of a particle:

$E = \frac{m c^2}{\sqrt{1 - \frac{v^2}{c^2}}}$

This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the "rest mass–energy") and a contribution from its motion, the kinetic energy. When v (the particle's velocity) is larger than c (the speed of light), the denominator in the equation for the energy is imaginary, as the value under the square root is negative. Because the total energy of the particle must be real (and not a complex or imaginary number) in order to have any practical meaning as a measurement, the numerator must also be imaginary (i.e. the rest mass m must be imaginary, as a pure imaginary number divided by another pure imaginary number is a real number).
In some modern formulations of the theory, the mass of tachyons is regarded as real.
Speed
One curious effect is that, unlike ordinary particles, the speed of a tachyon increases as its energy decreases. In particular, E approaches zero when v approaches infinity. (For ordinary bradyonic matter, E increases with increasing speed, becoming arbitrarily large as v approaches c, the speed of light.) Therefore, just as bradyons are forbidden to break the light-speed barrier, so are tachyons forbidden from slowing down to below c, because infinite energy is required to reach the barrier from either above or below.
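This behaviour is easy to tabulate. The sketch below (not part of the original article) uses the real-mass convention mentioned earlier, in which the energy of a tachyon of mass m moving at speed v > c is E = mc²/√(v²/c² − 1):

```python
import math

def tachyon_energy(v_over_c):
    """Energy in units of m*c^2, for speeds v > c (real-mass convention)."""
    assert v_over_c > 1.0
    return 1.0 / math.sqrt(v_over_c ** 2 - 1.0)

for v in (1.01, 1.5, 2.0, 10.0, 100.0):
    print(f"v = {v:6.2f} c   E = {tachyon_energy(v):.4f} m c^2")
# Energy diverges as v -> c from above and falls toward zero as v -> infinity.
```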
As noted by Albert Einstein, Richard C. Tolman, and others, special relativity implies that faster-than-light particles, if they existed, could be used to communicate backwards in time.
Neutrinos
In 1985, Chodos proposed that neutrinos can have a tachyonic nature. The possibility of standard model particles moving at faster-than-light speeds can be modeled using Lorentz invariance violating terms, for example in the Standard-Model Extension. In this framework, neutrinos experience Lorentz-violating oscillations and can travel faster than light at high energies. This proposal was strongly criticized.
Superluminal information
If tachyons can transmit information faster than light, then, according to relativity, they violate causality, leading to logical paradoxes of the "kill your own grandfather" type. This is often illustrated with thought experiments such as the "tachyon telephone paradox" or "logically pernicious self-inhibitor."
The problem can be understood in terms of the relativity of simultaneity in special relativity, which says that different inertial reference frames will disagree on whether two events at different locations happened "at the same time" or not, and they can also disagree on the order of the two events. (Technically, these disagreements occur when the spacetime interval between the events is 'space-like', meaning that neither event lies in the future light cone of the other.)
If one of the two events represents the sending of a signal from one location and the second event represents the reception of the same signal at another location, then, as long as the signal is moving at the speed of light or slower, the mathematics of simultaneity ensures that all reference frames agree that the transmission-event happened before the reception-event. However, in the case of a hypothetical signal moving faster than light, there would always be some frames in which the signal was received before it was sent, so that the signal could be said to have moved backward in time. Because one of the two fundamental postulates of special relativity says that the laws of physics should work the same way in every inertial frame, if it is possible for signals to move backward in time in any one frame, it must be possible in all frames. This means that if observer A sends a signal to observer B which moves faster than light in A's frame but backwards in time in B's frame, and then B sends a reply which moves faster than light in B's frame but backwards in time in A's frame, it could work out that A receives the reply before sending the original signal, challenging causality in every frame and opening the door to severe logical paradoxes. This is known as the tachyonic antitelephone.
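The reversal of time ordering can be checked with a short Lorentz-boost calculation. In the sketch below (illustrative only; the speeds u = 2c and V = 0.8c are arbitrary choices satisfying uV > c²), the reception event of a superluminal signal ends up earlier than its emission event in the boosted frame:

```python
import math

C = 1.0  # work in units where c = 1

def boost(t, x, V):
    """Lorentz boost of the event (t, x) into a frame moving at speed V."""
    gamma = 1.0 / math.sqrt(1.0 - V ** 2 / C ** 2)
    return gamma * (t - V * x / C ** 2), gamma * (x - V * t)

u, V = 2.0, 0.8                 # signal speed and frame speed (in units of c)
emission = (0.0, 0.0)           # (t, x) of emission in the original frame
reception = (1.0, u * 1.0)      # received one time unit later, a distance u*t away

t_emit, _ = boost(*emission, V)
t_recv, _ = boost(*reception, V)
print(t_emit, t_recv)           # roughly 0.0 and -1.0: reception precedes emission in the new frame
```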
Reinterpretation principle
The reinterpretation principle asserts that a tachyon sent back in time can always be reinterpreted as a tachyon traveling forward in time, because observers cannot distinguish between the emission and absorption of tachyons. The attempt to detect a tachyon from the future (and violate causality) would actually create the same tachyon and send it forward in time (which is causal).
However, this principle is not widely accepted as resolving the paradoxes. Instead, what would be required to avoid paradoxes is that, unlike any known particle, tachyons do not interact in any way and can never be detected or observed, because otherwise a tachyon beam could be modulated and used to create an anti-telephone or a "logically pernicious self-inhibitor". All forms of energy are believed to interact at least gravitationally, and many authors state that superluminal propagation in Lorentz invariant theories always leads to causal paradoxes.
Fundamental models
In modern physics, all fundamental particles are regarded as excitations of quantum fields. There are several distinct ways in which tachyonic particles could be embedded into a field theory.
Fields with imaginary mass
In the paper that coined the term "tachyon", Gerald Feinberg studied Lorentz invariant quantum fields with imaginary mass. Because the group velocity for such a field is superluminal, naively it appears that its excitations propagate faster than light. However, it was quickly understood that the superluminal group velocity does not correspond to the speed of propagation of any localized excitation (like a particle). Instead, the negative mass represents an instability to tachyon condensation, and all excitations of the field propagate subluminally and are consistent with causality. Despite having no faster-than-light propagation, such fields are referred to simply as "tachyons" in many sources.
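The point can be illustrated from the dispersion relation of such a field. In a minimal sketch (not part of the original article; units with ħ = c = 1 are assumed, and μ denotes the magnitude of the imaginary mass), ω(k) = √(k² − μ²): modes with k > μ have superluminal group velocity k/ω, while modes with k < μ have imaginary ω and simply grow exponentially — the instability associated with tachyon condensation rather than any faster-than-light signal.

```python
import cmath

MU = 1.0  # magnitude of the imaginary mass, m^2 = -MU^2

def omega(k):
    """Frequency of a mode with wavenumber k for the relation omega^2 = k^2 - MU^2."""
    return cmath.sqrt(k ** 2 - MU ** 2)

for k in (0.5, 0.9, 1.1, 2.0, 5.0):
    w = omega(k)
    if w.imag == 0.0:
        print(f"k = {k:4.1f}  omega = {w.real:.3f}  group velocity = {k / w.real:.3f}")
    else:
        print(f"k = {k:4.1f}  omega = {w.imag:.3f}i  -> exponentially growing (unstable) mode")
```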
Tachyonic fields play an important role in modern physics. Perhaps the most famous is the Higgs boson of the Standard Model of particle physics, which has an imaginary mass in its uncondensed phase. In general, the phenomenon of spontaneous symmetry breaking, which is closely related to tachyon condensation, plays an important role in many aspects of theoretical physics, including the Ginzburg–Landau and BCS theories of superconductivity. Another example of a tachyonic field is the tachyon of bosonic string theory.
Tachyons are predicted by bosonic string theory and also the Neveu-Schwarz (NS) and NS-NS sectors, which are respectively the open bosonic sector and closed bosonic sector, of RNS superstring theory prior to the GSO projection. However such tachyons are not possible due to the Sen conjecture, also known as tachyon condensation. This resulted in the necessity for the GSO projection.
Lorentz-violating theories
In theories that do not respect Lorentz invariance, the speed of light is not (necessarily) a barrier, and particles can travel faster than the speed of light without infinite energy or causal paradoxes. A class of field theories of that type is the so-called Standard Model extensions. However, the experimental evidence for Lorentz invariance is extremely good, so such theories are very tightly constrained.
Fields with non-canonical kinetic term
By modifying the kinetic energy of the field, it is possible to produce Lorentz invariant field theories with excitations that propagate superluminally. However, such theories, in general, do not have a well-defined Cauchy problem (for reasons related to the issues of causality discussed above), and are probably inconsistent quantum mechanically.
In fiction
Tachyons have appeared in many works of fiction. They have been used as a standby mechanism upon which many science fiction authors rely to establish faster-than-light communication, with or without reference to causality issues. The word tachyon has become widely recognized to such an extent that it can impart a science-fictional connotation even if the subject in question has no particular relation to superluminal travel (a form of technobabble, akin to positronic brain).
See also
Lorentz-violating neutrino oscillations
Massive particle – bradyon, aka tardyon
Massless particle – luxon
Retrocausality
Tachyonic antitelephone
Virtual particle
Wheeler–Feynman absorber theory
References
External links
Hypothetical particles
String theory
Time travel | Tachyon | Physics,Astronomy | 2,509 |
25,024,518 | https://en.wikipedia.org/wiki/1970%20ascariasis%20poisoning%20incident | The 1970 ascariasis poisoning incident was a poisoning that took place in Quebec in February 1970. At least seven people claimed to have been infected with parasitic worm eggs by Eric Kranz, a former postgraduate student from Hempstead, New York. The victims were Canadians Richard Davis, William Butler, David Fisk, and Keith Fern, with three other friends and acquaintances reported to be mildly infested. Doctors said that one of the men may have been affected by as many as 400,000 larvae.
Eric Kranz was a 23-year-old postdoctoral student in parasitology at Macdonald College in Sainte-Anne-de-Bellevue, Quebec. He shared a house with four roommates: Davis, Butler, Fisk, and Fern. The roommates were at odds with Kranz, who had not paid his share of the rent, and asked him to move out. Kranz became agitated and allegedly told the roommates, "I'll put parasites in your food and you'll wake up dead". Kranz did pay the full rent balance on January 31, but the roommates evicted him anyway. Some time around February 1, Kranz prepared a festive Winter Carnival dinner for his roommates, and allegedly tainted the food with eggs stolen from the university laboratory where he studied. The roommates were hospitalized around February 12, and Kranz left Quebec a couple of days later. As the medical investigation continued, doctors suspected poisoning and authorities were notified. On February 25, Kranz was charged with attempted murder and a warrant was sought for his arrest. He returned voluntarily to Quebec, surrendered to authorities on 9 March, pleaded not guilty, and was remanded on bail.
Trial and acquittal
Kranz went on trial in June, 1971, charged with intentionally endangering the lives of his four roommates. There was expert evidence before the court consistent with the presence of Ascaris larvae in the bodies of two of the complainants: however, opinions from three other laboratory sources were not available. The defence further claimed that the infection could have occurred by way of a recurring sewage backup into the kitchen sink of the house: a version of events which was denied by at least one of the complainants. Kranz also pointed out in the course of his testimony that his roommates could have come into contact with Ascaris eggs simply by handling his clothing. Following consideration of the evidence, Judge Gerard Laganiere held that there was insufficient evidence to demonstrate the defendant's guilt beyond a reasonable doubt, and Kranz was acquitted.
Medical aspects
About a week after the dinner, the roommates began to develop cough, dyspnea, weight loss, and fever. The symptoms did not improve, and on February 12 they sought treatment at Queen Elizabeth Hospital's emergency room with symptoms of acute respiratory distress. In addition to these symptoms physicians noted wheezing and the appearance of hives. The roommates were treated for pneumonia, but the infection did not respond to antibiotics. Davis and Butler were in critical condition. After about four days, the staff was able to confirm the ascariasis diagnosis upon isolating live larvae, about 4 mm long, in the sputum and gastric washings. This confirmed exposure to ascariasis ova which were in the microscopic larval stage, migrating via the blood from intestine to lung. As these larvae ascended the trachea and were swallowed again they began developing into mature worms. Four weeks after infection, the victims passed numerous immature worms in bowel movements. Physicians cleared the developing adults with a course of piperazine, effectively ending the infestation. The victims were released from the hospital March 5, but the attending physician said that one of the men would probably have permanent lung damage.
This infection established a baseline case or index case for Ascaris suum infection in humans. Doctors had originally consulted Walter Reed Army Medical Center but found no precedent for human infection.
References
Further reading
The Illicit Use of Biological Agents Since 1900, W. Seth Carus
Poisoning by drugs, medicaments and biological substances
Ascariasis Poisoning Incident, 1970
1970 disasters in Canada | 1970 ascariasis poisoning incident | Environmental_science | 852 |
22,520,264 | https://en.wikipedia.org/wiki/Neuroinformatics%20%28journal%29 | Neuroinformatics is a quarterly peer-reviewed scientific journal published by Springer Science+Business Media. It covers all aspects of neuroinformatics. The journal is abstracted and indexed in MEDLINE/PubMed, Scopus and the Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.085. The founding co-editors-in-chief were Giorgio A. Ascoli, Erik De Schutter, and David N. Kennedy. The current editor-in-chief is John Darrell Van Horn from the University of Virginia.
References
External links
Neuroinformatics
Neuroscience journals
Springer Science+Business Media academic journals
English-language journals
Academic journals established in 2003
Quarterly journals | Neuroinformatics (journal) | Biology | 152 |
25,804,104 | https://en.wikipedia.org/wiki/List%20of%20trigonometric%20identities | In trigonometry, trigonometric identities are equalities that involve trigonometric functions and are true for every value of the occurring variables for which both sides of the equality are defined. Geometrically, these are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities potentially involving angles but also involving side lengths or other lengths of a triangle.
These identities are useful whenever expressions involving trigonometric functions need to be simplified. An important application is the integration of non-trigonometric functions: a common technique involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity.
Pythagorean identities
The basic relationship between the sine and cosine is given by the Pythagorean identity:
$\sin^2\theta + \cos^2\theta = 1,$
where $\sin^2\theta$ means $(\sin\theta)^2$ and $\cos^2\theta$ means $(\cos\theta)^2.$
This can be viewed as a version of the Pythagorean theorem, and follows from the equation for the unit circle. This equation can be solved for either the sine or the cosine:
$\sin\theta = \pm\sqrt{1 - \cos^2\theta}, \qquad \cos\theta = \pm\sqrt{1 - \sin^2\theta},$
where the sign depends on the quadrant of $\theta.$
Dividing this identity by $\sin^2\theta$, $\cos^2\theta$, or both yields the following identities:
$1 + \cot^2\theta = \csc^2\theta, \qquad \tan^2\theta + 1 = \sec^2\theta, \qquad \sec^2\theta + \csc^2\theta = \sec^2\theta\,\csc^2\theta.$
Using these identities, it is possible to express any trigonometric function in terms of any other (up to a plus or minus sign):
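For a quick numerical illustration, the Pythagorean identity and the identities obtained by dividing it through can be checked at a handful of angles. The following is a minimal Python sketch (the helper name is arbitrary, introduced here only for illustration):

```python
import math

def check_pythagorean(theta: float) -> bool:
    """Check sin^2 + cos^2 = 1 and the two identities obtained by division."""
    s, c = math.sin(theta), math.cos(theta)
    basic = math.isclose(s * s + c * c, 1.0)
    by_cos = math.isclose(math.tan(theta) ** 2 + 1.0, 1.0 / c ** 2)  # tan^2 + 1 = sec^2
    by_sin = math.isclose(1.0 + (c / s) ** 2, 1.0 / s ** 2)          # 1 + cot^2 = csc^2
    return basic and by_cos and by_sin

# angles that avoid exact multiples of pi/2, where tan or cot is undefined
print(all(check_pythagorean(0.1 + 0.3 * k) for k in range(10)))  # True
```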
Reflections, shifts, and periodicity
By examining the unit circle, one can establish the following properties of the trigonometric functions.
Reflections
When the direction of a Euclidean vector is represented by an angle this is the angle determined by the free vector (starting at the origin) and the positive -unit vector. The same concept may also be applied to lines in a Euclidean space, where the angle is that determined by a parallel to the given line through the origin and the positive -axis. If a line (vector) with direction is reflected about a line with direction then the direction angle of this reflected line (vector) has the value
The values of the trigonometric functions at these reflected angles satisfy simple identities: either they are equal, or have opposite signs, or employ the complementary trigonometric function. These are also known as reduction formulae.
Shifts and periodicity
Signs
The sign of trigonometric functions depends on quadrant of the angle. If and is the sign function,
The trigonometric functions are periodic with common period so for values of outside the interval they take repeating values (see above).
Angle sum and difference identities
These are also known as the angle addition and subtraction theorems (or formulae).
The angle difference identities for $\sin(\alpha - \beta)$ and $\cos(\alpha - \beta)$ can be derived from the angle sum versions by substituting $-\beta$ for $\beta$ and using the facts that $\sin(-\beta) = -\sin\beta$ and $\cos(-\beta) = \cos\beta$. They can also be derived by using a slightly modified version of the figure for the angle sum identities, both of which are shown here.
These identities are summarized in the first two rows of the following table, which also includes sum and difference identities for the other trigonometric functions.
Sines and cosines of sums of infinitely many angles
When the series converges absolutely then
Because the series converges absolutely, it is necessarily the case that and In particular, in these two identities an asymmetry appears that is not seen in the case of sums of finitely many angles: in each product, there are only finitely many sine factors but there are cofinitely many cosine factors. Terms with infinitely many sine factors would necessarily be equal to zero.
When only finitely many of the angles are nonzero then only finitely many of the terms on the right side are nonzero because all but finitely many sine factors vanish. Furthermore, in each term all but finitely many of the cosine factors are unity.
Tangents and cotangents of sums
Let (for ) be the th-degree elementary symmetric polynomial in the variables
for that is,
Then
using the sine and cosine sum formulae above.
The number of terms on the right side depends on the number of terms on the left side.
For example:
and so on. The case of only finitely many terms can be proved by mathematical induction. The case of infinitely many terms can be proved by using some elementary inequalities.
Secants and cosecants of sums
where is the th-degree elementary symmetric polynomial in the variables and the number of terms in the denominator and the number of factors in the product in the numerator depend on the number of terms in the sum on the left. The case of only finitely many terms can be proved by mathematical induction on the number of such terms.
For example,
Ptolemy's theorem
Ptolemy's theorem is important in the history of trigonometric identities, as it is how results equivalent to the sum and difference formulas for sine and cosine were first proved. It states that in a cyclic quadrilateral , as shown in the accompanying figure, the sum of the products of the lengths of opposite sides is equal to the product of the lengths of the diagonals. In the special cases of one of the diagonals or sides being a diameter of the circle, this theorem gives rise directly to the angle sum and difference trigonometric identities. The relationship follows most easily when the circle is constructed to have a diameter of length one, as shown here.
By Thales's theorem, and are both right angles. The right-angled triangles and both share the hypotenuse of length 1. Thus, the side , , and .
By the inscribed angle theorem, the central angle subtended by the chord at the circle's center is twice the angle , i.e. . Therefore, the symmetrical pair of red triangles each has the angle at the center. Each of these triangles has a hypotenuse of length , so the length of is , i.e. simply . The quadrilateral's other diagonal is the diameter of length 1, so the product of the diagonals' lengths is also .
When these values are substituted into the statement of Ptolemy's theorem that , this yields the angle sum trigonometric identity for sine: . The angle difference formula for can be similarly derived by letting the side serve as a diameter instead of .
Multiple-angle and half-angle formulae
Multiple-angle formulae
Double-angle formulae
Formulae for twice an angle.
Triple-angle formulae
Formulae for triple angles.
Multiple-angle formulae
Formulae for multiple angles.
Chebyshev method
The Chebyshev method is a recursive algorithm for finding the $n$th multiple-angle formula knowing the $(n-1)$th and $(n-2)$th values.
$\cos(nx)$ can be computed from $\cos((n-1)x)$, $\cos((n-2)x)$, and $\cos x$ with
$\cos(nx) = 2\cos x\,\cos((n-1)x) - \cos((n-2)x).$
This can be proved by adding together the angle-sum expansions of $\cos((n-1)x + x)$ and $\cos((n-1)x - x).$
It follows by induction that $\cos(nx)$ is a polynomial in $\cos x$, the so-called Chebyshev polynomial of the first kind, see Chebyshev polynomials#Trigonometric definition.
Similarly, $\sin(nx)$ can be computed from $\sin((n-1)x)$, $\sin((n-2)x)$, and $\cos x$ with
$\sin(nx) = 2\cos x\,\sin((n-1)x) - \sin((n-2)x).$
This can be proved by adding the angle-sum expansions of $\sin((n-1)x + x)$ and $\sin((n-1)x - x).$
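The recursion lends itself to a short implementation. The following Python sketch (an illustration; the function name is arbitrary) computes cos(nx) and sin(nx) by the double recursion above and compares them with direct evaluation:

```python
import math

def cos_sin_multiple(n: int, x: float) -> tuple[float, float]:
    """Compute (cos(n*x), sin(n*x)) via the Chebyshev-style recursion
    cos(n*x) = 2*cos(x)*cos((n-1)*x) - cos((n-2)*x), and similarly for sine."""
    c_prev2, c_prev1 = 1.0, math.cos(x)   # cos(0*x), cos(1*x)
    s_prev2, s_prev1 = 0.0, math.sin(x)   # sin(0*x), sin(1*x)
    if n == 0:
        return c_prev2, s_prev2
    two_cos_x = 2.0 * math.cos(x)
    for _ in range(2, n + 1):
        c_prev2, c_prev1 = c_prev1, two_cos_x * c_prev1 - c_prev2
        s_prev2, s_prev1 = s_prev1, two_cos_x * s_prev1 - s_prev2
    return c_prev1, s_prev1

x = 0.7
c5, s5 = cos_sin_multiple(5, x)
print(math.isclose(c5, math.cos(5 * x)), math.isclose(s5, math.sin(5 * x)))  # True True
```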
Serving a purpose similar to that of the Chebyshev method, for the tangent we can write:
Half-angle formulae
Also
Table
These can be shown by using either the sum and difference identities or the multiple-angle formulae.
The fact that the triple-angle formula for sine and cosine only involves powers of a single function allows one to relate the geometric problem of a compass and straightedge construction of angle trisection to the algebraic problem of solving a cubic equation, which allows one to prove that trisection is in general impossible using the given tools.
A formula for computing the trigonometric identities for the one-third angle exists, but it requires finding the zeroes of the cubic equation , where is the value of the cosine function at the one-third angle and is the known value of the cosine function at the full angle. However, the discriminant of this equation is positive, so this equation has three real roots (of which only one is the solution for the cosine of the one-third angle). None of these solutions are reducible to a real algebraic expression, as they use intermediate complex numbers under the cube roots.
Power-reduction formulae
Obtained by solving the second and third versions of the cosine double-angle formula.
In general terms of powers of or the following is true, and can be deduced using De Moivre's formula, Euler's formula and the binomial theorem.
Product-to-sum and sum-to-product identities
The product-to-sum identities or prosthaphaeresis formulae can be proven by expanding their right-hand sides using the angle addition theorems. Historically, the first four of these were known as Werner's formulas, after Johannes Werner who used them for astronomical calculations. See amplitude modulation for an application of the product-to-sum formulae, and beat (acoustics) and phase detector for applications of the sum-to-product formulae.
Product-to-sum identities
Sum-to-product identities
The sum-to-product identities are as follows:
Hermite's cotangent identity
Charles Hermite demonstrated the following identity. Suppose are complex numbers, no two of which differ by an integer multiple of . Let
(in particular, being an empty product, is 1). Then
The simplest non-trivial example is the case :
Finite products of trigonometric functions
For coprime integers ,
where is the Chebyshev polynomial.
The following relationship holds for the sine function
More generally for an integer
or written in terms of the chord function ,
This comes from the factorization of the polynomial into linear factors (cf. root of unity): For any complex and an integer ,
Linear combinations
For some purposes it is important to know that any linear combination of sine waves of the same period or frequency but different phase shifts is also a sine wave with the same period or frequency, but a different phase shift. This is useful in sinusoid data fitting, because the measured or observed data are linearly related to the $a$ and $b$ unknowns of the in-phase and quadrature components basis below, resulting in a simpler Jacobian, compared to that of $c$ and $\varphi$.
Sine and cosine
The linear combination, or harmonic addition, of sine and cosine waves is equivalent to a single sine wave with a phase shift and scaled amplitude:
$a\sin x + b\cos x = c\sin(x + \varphi),$
where $c$ and $\varphi$ are defined as
$c = \sqrt{a^2 + b^2}, \qquad \varphi = \operatorname{atan2}(b, a),$
given that $a$ and $b$ are not both zero.
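A short numerical check of the harmonic-addition rule, written as a Python sketch (the atan2 phase convention used here is one common choice, assumed for illustration):

```python
import math

def harmonic_addition(a: float, b: float) -> tuple[float, float]:
    """Return (c, phi) such that a*sin(x) + b*cos(x) == c*sin(x + phi)."""
    c = math.hypot(a, b)      # sqrt(a^2 + b^2)
    phi = math.atan2(b, a)    # phase shift
    return c, phi

a, b = 3.0, -4.0
c, phi = harmonic_addition(a, b)
x = 1.234
print(math.isclose(a * math.sin(x) + b * math.cos(x), c * math.sin(x + phi)))  # True
```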
Arbitrary phase shift
More generally, for arbitrary phase shifts, we have
where and satisfy:
More than two sinusoids
The general case reads
where
and
Lagrange's trigonometric identities
These identities, named after Joseph Louis Lagrange, are:
for
A related function is the Dirichlet kernel:
A similar identity is
The proof is the following. By using the angle sum and difference identities,
Then let's examine the following formula,
and this formula can be written by using the above identity,
So, dividing this formula with completes the proof.
Certain linear fractional transformations
If is given by the linear fractional transformation
and similarly
then
More tersely stated, if for all we let be what we called above, then
If is the slope of a line, then is the slope of its rotation through an angle of
Relation to the complex exponential function
Euler's formula states that, for any real number x:
$e^{ix} = \cos x + i\sin x,$
where i is the imaginary unit. Substituting −x for x gives us:
$e^{-ix} = \cos x - i\sin x.$
These two equations can be used to solve for cosine and sine in terms of the exponential function. Specifically,
$\cos x = \frac{e^{ix} + e^{-ix}}{2}, \qquad \sin x = \frac{e^{ix} - e^{-ix}}{2i}.$
These formulae are useful for proving many other trigonometric identities. For example, that
$e^{i(\theta + \varphi)} = e^{i\theta} e^{i\varphi}$
means that
$\cos(\theta + \varphi) + i\sin(\theta + \varphi) = (\cos\theta + i\sin\theta)(\cos\varphi + i\sin\varphi).$
That the real part of the left hand side equals the real part of the right hand side is an angle addition formula for cosine. The equality of the imaginary parts gives an angle addition formula for sine.
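These relations are easy to confirm numerically with complex arithmetic. The following Python sketch (illustrative only) checks the exponential expressions for cosine and sine and the product identity that packages both angle-addition formulas:

```python
import cmath
import math

def cos_from_exp(x: float) -> complex:
    return (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2

def sin_from_exp(x: float) -> complex:
    return (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j

x, y = 0.6, 1.1
print(math.isclose(cos_from_exp(x).real, math.cos(x)))   # True
print(math.isclose(sin_from_exp(x).real, math.sin(x)))   # True
# e^(i(x+y)) = e^(ix) * e^(iy): real and imaginary parts give the addition formulas
print(cmath.isclose(cmath.exp(1j * (x + y)), cmath.exp(1j * x) * cmath.exp(1j * y)))  # True
```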
The following table expresses the trigonometric functions and their inverses in terms of the exponential function and the complex logarithm.
Relation to complex hyperbolic functions
Trigonometric functions may be deduced from hyperbolic functions with complex arguments. The formulae for the relations are shown below.
Series expansion
When using a power series expansion to define trigonometric functions, the following identities are obtained:
Infinite product formulae
For applications to special functions, the following infinite product formulae for trigonometric functions are useful:
Inverse trigonometric functions
The following identities give the result of composing a trigonometric function with an inverse trigonometric function.
Taking the multiplicative inverse of both sides of each equation above results in the equations for
The right hand side of the formula above will always be flipped.
For example, the equation for is:
while the equations for and are:
The following identities are implied by the reflection identities. They hold whenever are in the domains of the relevant functions.
Also,
The arctangent function can be expanded as a series:
Identities without variables
In terms of the arctangent function we have
The curious identity known as Morrie's law,
is a special case of an identity that contains one variable:
Similarly,
is a special case of an identity with :
For the case ,
For the case ,
The same cosine identity is
Similarly,
Similarly,
The following is perhaps not as readily generalized to an identity containing variables (but see explanation below):
Degree measure ceases to be more felicitous than radian measure when we consider this identity with 21 in the denominators:
The factors 1, 2, 4, 5, 8, 10 may start to make the pattern clear: they are those integers less than that are relatively prime to (or have no prime factors in common with) 21. The last several examples are corollaries of a basic fact about the irreducible cyclotomic polynomials: the cosines are the real parts of the zeroes of those polynomials; the sum of the zeroes is the Möbius function evaluated at (in the very last case above) 21; only half of the zeroes are present above. The two identities preceding this last one arise in the same fashion with 21 replaced by 10 and 15, respectively.
Other cosine identities include:
and so forth for all odd numbers, and hence
Many of those curious identities stem from more general facts like the following:
and
Combining these gives us
If is an odd number () we can make use of the symmetries to get
The transfer function of the Butterworth low pass filter can be expressed in terms of polynomial and poles. By setting the frequency as the cutoff frequency, the following identity can be proved:
Computing
An efficient way to compute to a large number of digits is based on the following identity without variables, due to Machin. This is known as a Machin-like formula:
or, alternatively, by using an identity of Leonhard Euler:
or by using Pythagorean triples:
Others include:
Generally, for numbers for which , let . This last expression can be computed directly using the formula for the cotangent of a sum of angles whose tangents are and its value will be in . In particular, the computed will be rational whenever all the values are rational. With these values,
where in all but the first expression, we have used tangent half-angle formulae. The first two formulae work even if one or more of the values is not within . Note that if is rational, then the values in the above formulae are proportional to the Pythagorean triple .
For example, for terms,
for any .
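For illustration, Machin's formula pi/4 = 4·arctan(1/5) − arctan(1/239) can be evaluated to many digits with nothing more than integer arithmetic, since each arctangent has a rapidly converging Gregory series. The Python sketch below (illustrative; truncation handling is kept simple) does so:

```python
def arctan_inv(x: int, digits: int) -> int:
    """arctan(1/x) scaled by 10**(digits + 10), via the Gregory series
    arctan(1/x) = 1/x - 1/(3*x^3) + 1/(5*x^5) - ... (integer arithmetic)."""
    term = 10 ** (digits + 10) // x   # ten guard digits absorb truncation error
    total, n, sign = term, 1, 1
    while term:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def machin_pi(digits: int) -> str:
    """pi via Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    pi_scaled = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
    s = str(pi_scaled // 10 ** 10)    # drop the guard digits
    return s[0] + "." + s[1:]

print(machin_pi(30))  # 3.141592653589793238462643383279
```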
An identity of Euclid
Euclid showed in Book XIII, Proposition 10 of his Elements that the area of the square on the side of a regular pentagon inscribed in a circle is equal to the sum of the areas of the squares on the sides of the regular hexagon and the regular decagon inscribed in the same circle. In the language of modern trigonometry, this says:
Ptolemy used this proposition to compute some angles in his table of chords in Book I, chapter 11 of Almagest.
Composition of trigonometric functions
These identities involve a trigonometric function of a trigonometric function:
where are Bessel functions.
Further "conditional" identities for the case α + β + γ = 180°
A conditional trigonometric identity is a trigonometric identity that holds if specified conditions on the arguments to the trigonometric functions are satisfied. The following formulae apply to arbitrary plane triangles and follow from as long as the functions occurring in the formulae are well-defined (the latter applies only to the formulae in which tangents and cotangents occur).
Historical shorthands
The versine, coversine, haversine, and exsecant were used in navigation. For example, the haversine formula was used to calculate the distance between two points on a sphere. They are rarely used today.
Miscellaneous
Dirichlet kernel
The Dirichlet kernel $D_n(x)$ is the function occurring on both sides of the next identity:
$1 + 2\cos x + 2\cos 2x + \cdots + 2\cos nx = \frac{\sin\!\left(\left(n + \tfrac{1}{2}\right)x\right)}{\sin\!\left(\tfrac{1}{2}x\right)}.$
The convolution of any integrable function of period with the Dirichlet kernel coincides with the function's th-degree Fourier approximation. The same holds for any measure or generalized function.
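The identity can be checked numerically by comparing the cosine sum with the closed form. The following Python sketch (illustrative) does so for one choice of n and x:

```python
import math

def dirichlet_sum(n: int, x: float) -> float:
    """Left-hand side: 1 + 2*sum_{k=1..n} cos(k*x)."""
    return 1.0 + 2.0 * sum(math.cos(k * x) for k in range(1, n + 1))

def dirichlet_closed(n: int, x: float) -> float:
    """Right-hand side: sin((n + 1/2)*x) / sin(x/2), for x not a multiple of 2*pi."""
    return math.sin((n + 0.5) * x) / math.sin(x / 2)

n, x = 7, 0.9
print(math.isclose(dirichlet_sum(n, x), dirichlet_closed(n, x)))  # True
```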
Tangent half-angle substitution
If we set $t = \tan\tfrac{x}{2},$ then
$\sin x = \frac{2t}{1 + t^2}, \qquad \cos x = \frac{1 - t^2}{1 + t^2}, \qquad e^{ix} = \frac{1 + it}{1 - it},$
where $e^{ix} = \cos x + i\sin x$, sometimes abbreviated to $\operatorname{cis} x$.
When this substitution of $t$ for $\tan\tfrac{x}{2}$ is used in calculus, it follows that $\sin x$ is replaced by $\frac{2t}{1 + t^2}$, $\cos x$ is replaced by $\frac{1 - t^2}{1 + t^2}$ and the differential $dx$ is replaced by $\frac{2\,dt}{1 + t^2}$. Thereby one converts rational functions of $\sin x$ and $\cos x$ to rational functions of $t$ in order to find their antiderivatives.
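The parametrization is easy to verify numerically; the Python sketch below (illustrative) recovers sine and cosine from t = tan(x/2):

```python
import math

def from_half_angle(x: float) -> tuple[float, float]:
    """Return (sin(x), cos(x)) computed from t = tan(x/2)."""
    t = math.tan(x / 2)
    return 2 * t / (1 + t * t), (1 - t * t) / (1 + t * t)

x = 2.3
s, c = from_half_angle(x)
print(math.isclose(s, math.sin(x)), math.isclose(c, math.cos(x)))  # True True
```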
Viète's infinite product
See also
Aristarchus's inequality
Derivatives of trigonometric functions
Exact trigonometric values (values of sine and cosine expressed in surds)
Exsecant
Half-side formula
Hyperbolic function
Laws for solution of triangles:
Law of cosines
Spherical law of cosines
Law of sines
Law of tangents
Law of cotangents
Mollweide's formula
List of integrals of trigonometric functions
Mnemonics in trigonometry
Pentagramma mirificum
Proofs of trigonometric identities
Prosthaphaeresis
Pythagorean theorem
Tangent half-angle formula
Trigonometric number
Trigonometry
Uses of trigonometry
Versine and haversine
References
Bibliography
External links
Values of sin and cos, expressed in surds, for integer multiples of 3° and of °, and for the same angles csc and sec and tan
Mathematical identities
Identities
Mathematics-related lists | List of trigonometric identities | Mathematics | 3,750 |
5,209,618 | https://en.wikipedia.org/wiki/Communications%20Opportunity%2C%20Promotion%20and%20Enhancement%20Bill%20of%202006 | The Communications Opportunity, Promotion and Enhancement (COPE) Act of 2006 () was a bill in the U.S. House of Representatives. It was part of a major overhaul of the Telecommunications Act of 1996 being considered by the US Congress. The Act was sponsored by Commerce Committee Chairman Joe Barton (R-TX), Rep. Fred Upton (R-MI), Rep. Charles Pickering (R-MS) and Rep. Bobby Rush (D-IL).
Overview
The last version of the Act (HR 5252) included network neutrality provisions defined by the FCC. An amendment offered by Rep. Ed Markey (D-MA) would have supplemented these with a prohibition against service tiering, which would have prevented Internet service providers from charging consumers more money in exchange for not reducing their Internet speed. The COPE Act was passed by the full House on June 8, 2006; the Markey Amendment failed, leaving the final bill without meaningful network neutrality provisions.
The US Senate was also involved in the issue. Senator Ron Wyden (D-OR) introduced the Internet Nondiscrimination Act of 2006, and Senators Olympia Snowe (R-ME) and Byron Dorgan (D-ND) were expected to introduce a bipartisan amendment supporting net neutrality when the Senate took up its own rewrite (the "Communications, Consumer's Choice, and Broadband Deployment Act of 2006", aka S. 2686 ) of the Telecommunications Act of 1996 later that year.
The bill would have created a single set of national video franchising rules that permit competitors to enter the market without obtaining thousands of individual city-by-city agreements.
The legislation would have protected fees paid to local authorities, preserved public, educational and government programming, and provided federal consumer protection and customer service standards.
The bill was lobbied for by AT&T and received support from Verizon Communications, while organizations such as Save the Internet and Common Cause opposed it.
See also
Communications Act of 1934
Telecommunications Act of 1996
Notes
References
Document, description of the Communications Act of 2006
Library of Congress. H.R.5252-RS - 29 September 2006 - 109-355 Communications Act of 2006 (text of the proposed legislation)
Library of Congress. H.R.5252 - All Congressional Actions w/Amendments All speeches, amendments on the House Floor, 1 May 2006 through 29 September 2006 (ongoing)
External links
"House Rejects Net Neutrality", by John Nichols, The Nation, June 9, 2006
"Defeat for Net Neutrality Backers", by Tom Lasseter, BBC, June 9, 2006.
Full text of COPE 2006 (pdf)
U.S. House Record of the Roll Call Vote on the Markey Amendment
WashingtonWatch.com page on H.R. 5252: The Communications Act of 2006
Communications, Opportunity and Promotion and Enhancement Act of 2006 (PDF)
Proposed legislation of the 109th United States Congress
Telecommunications in the United States
Net neutrality | Communications Opportunity, Promotion and Enhancement Bill of 2006 | Engineering | 594 |
44,127,276 | https://en.wikipedia.org/wiki/Berkeley%20Madonna | Berkeley Madonna is a mathematical modelling software package, developed at the University of California at Berkeley by Robert Macey and George Oster. It numerically solves ordinary differential equations and difference equations, originally developed to execute STELLA programs.
Its strength lies in a relatively simple syntax for defining differential equations, coupled with a simple yet powerful user interface. In particular, Berkeley Madonna allows parameters to be placed on sliders that a user can move to change their values interactively. Such visualizations enable quick assessment of whether a particular model class is suitable for describing the data to be analyzed and modeled, and later make it easier to communicate models to other disciplines, such as medical decision makers.
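To illustrate the kind of model such tools handle, the sketch below solves a one-compartment drug-elimination model in Python with SciPy; this is a generic example, not Berkeley Madonna's own syntax, and re-running the solve with different parameter values plays the role of moving a slider:

```python
import numpy as np
from scipy.integrate import solve_ivp

def one_compartment(t, y, k_el):
    """dC/dt = -k_el * C: first-order elimination of a drug concentration."""
    return [-k_el * y[0]]

c0 = 100.0                    # initial concentration (arbitrary units)
for k_el in (0.1, 0.2, 0.4):  # values a user might sweep with a slider
    sol = solve_ivp(one_compartment, (0.0, 24.0), [c0], args=(k_el,),
                    t_eval=np.linspace(0.0, 24.0, 5))
    print(f"k_el={k_el}: C(t) =", np.round(sol.y[0], 2))
```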
Uses
It has become a standard in the development and communication of pharmacometric models describing drug concentration and its effects in drug development, as well as in the modeling of physiological processes.
A user community exists in the form of a LinkedIn user group with more than 750 members (February 2023).
The use of system dynamics modeling has expanded into other areas such as system physics, epidemiology, environmental health, and population ecology.
Versions
There are two versions of Berkeley Madonna: a free version with slightly limited functionality and a licensed version that is registered to individuals.
References
Further reading
"Berkeley-Madonna Implementation of Ikeda's Model". pp. 582–585.
External links
Mathematical software | Berkeley Madonna | Mathematics | 279 |
3,194,537 | https://en.wikipedia.org/wiki/Callendar%E2%80%93Van%20Dusen%20equation | The Callendar–Van Dusen equation is an equation that describes the relationship between resistance (R) and temperature (T) of platinum resistance thermometers (RTD).
As commonly used for commercial applications of RTD thermometers, the relationship between resistance and temperature is given by the following equations. The relationship above 0 °C (up to the melting point of aluminum, ~660 °C) is a simplification of the equation that holds over a broader range down to −200 °C. The longer form was published in 1925 (see below) by M.S. Van Dusen and is given as:
$R_T = R_0\left[1 + AT + BT^2 + C\,(T - 100\,^{\circ}\mathrm{C})\,T^3\right].$
The simpler form was published earlier by Callendar, is generally valid only over the range from 0 °C to 661 °C, and is given as:
$R_T = R_0\left(1 + AT + BT^2\right),$
where $R_T$ is the resistance at temperature $T$, $R_0$ is the resistance at 0 °C, and the constants A, B, and C are derived from experimentally determined parameters α, β, and δ using resistance measurements made at 0 °C, 100 °C and 260 °C.
Together, the constants are given by
$A = \alpha + \frac{\alpha\delta}{100}, \qquad B = -\frac{\alpha\delta}{100^2}, \qquad C = -\frac{\alpha\beta}{100^4}.$
It is important to note that these equations are listed as the basis for the temperature/resistance tables for idealized platinum resistance thermometers and are not intended to be used for the calibration of an individual thermometer, which would require the experimentally determined parameters to be found.
These equations are cited in International Standards for platinum RTD's resistance versus temperature functions DIN/IEC 60751 (also called IEC 751), also adopted as BS-1904, and with some modification, JIS C1604.
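A direct implementation of the two forms is straightforward. The Python sketch below uses the commonly quoted IEC 60751 coefficients for a standard Pt100 element as defaults; they are given for illustration only and are not a substitute for calibration of an individual sensor:

```python
def pt_resistance(t_c: float,
                  r0: float = 100.0,
                  a: float = 3.9083e-3,
                  b: float = -5.775e-7,
                  c: float = -4.183e-12) -> float:
    """Resistance of a platinum RTD at temperature t_c (degrees C) from the
    Callendar-Van Dusen equation. Defaults are typical IEC 60751 Pt100 values."""
    if t_c >= 0.0:
        # Callendar form, roughly valid from 0 C up to 661 C
        return r0 * (1.0 + a * t_c + b * t_c ** 2)
    # Van Dusen extension, roughly valid from -200 C to 0 C
    return r0 * (1.0 + a * t_c + b * t_c ** 2 + c * (t_c - 100.0) * t_c ** 3)

print(round(pt_resistance(0.0), 3))     # 100.0
print(round(pt_resistance(100.0), 3))   # about 138.5 for a Pt100
print(round(pt_resistance(-50.0), 3))   # about 80.3
```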
The equation was found by British physicist Hugh Longbourne Callendar, and refined for measurements at lower temperatures by M. S. Van Dusen, a chemist at the U.S. National Bureau of Standards (now known as the National Institute of Standards and Technology ) in work published in 1925 in the Journal of the American Chemical Society.
Starting in 1968, the Callendar-Van Dusen Equation was replaced by an interpolating formula given by a 20th order polynomial first published in The International Practical Temperature Scale of 1968 by the Comité International des Poids et Mesures.
Starting in 1990, the interpolating formula was further refined with the publication of The International Temperature Scale of 1990. The ITS-90 is published by the Comité Consultatif de Thermométrie and the Comité International des Poids et Mesures. This work provides a 12th order polynomial that is valid over an even broader temperature range that spans from 13.8033 K to 273.16 K and a second 9th order polynomial that is valid over the temperature range of 0 °C to 961.78 °C.
References
External links
Caldus. Callendar-Van Dusen conversion between resistance and temperature in python.
Thermometers
Equations | Callendar–Van Dusen equation | Mathematics,Technology,Engineering | 557 |
65,396,149 | https://en.wikipedia.org/wiki/Backusella%20macrospora | Backusella macrospora is a species of zygote fungus in the order Mucorales. It was described by Andrew S. Urquhart and James K. Douch in 2020. The specific epithet refers to the large size of the sporangiospores. The type locality is Tarra-Bulga National Park, Australia.
See also
Fungi of Australia
References
External links
Zygomycota
Fungi described in 2020
Fungus species | Backusella macrospora | Biology | 93 |
46,776,490 | https://en.wikipedia.org/wiki/FASTKD5 | FAST kinase domain-containing protein 5 (FASTKD5) is a protein that in humans is encoded by the FASTKD5 gene on chromosome 20. This protein is part of the FASTKD family, which is known for regulating the energy balance of mitochondria under stress. FASTKD5 is also required for RNA granules to process precursor mRNAs not flanked by tRNAs.
Structure
FASTKD5 shares structural characteristics of the FASTKD family, including an amino terminal mitochondrial targeting domain and three C-terminal domains: two FAST kinase-like domains (FAST_1 and FAST_2) and a RNA-binding domain (RAP). The mitochondrial targeting domain directs FASTKD5 to be imported into the mitochondria. Though the functions of the C-terminal domains are unknown, RAP possibly binds RNA during trans-splicing. This protein forms a 103 kDa protein complex with unidentified proteins.
Function
As a member of the FASTKD family, FASTKD5 localizes to the mitochondria to modulate their energy balance, especially under conditions of stress. Though ubiquitously expressed in all tissues, FASTKD5 appears more abundantly in skeletal muscle, heart muscle, and other tissues enriched in mitochondria. FASTKD5 also localizes to RNA granules, membraneless bodies containing mRNAs and associated RNA-binding proteins, where it facilitates posttranscriptional RNA processing. This protein is required for the maturation of precursor mRNAs that are not flanked by tRNAs, and thus cannot be processed by the canonical mRNA maturation pathway.
Clinical significance
Though the link to FASTKD5 remains uncharacterized, the accumulation of abnormal RNA granules can lead to some neurodegenerative diseases.
Interactions
FASTKD5 has been shown to interact with:
FASTKD2,
DHX30, and
GRSF1.
References
Uncharacterized proteins | FASTKD5 | Biology | 398 |
24,269,149 | https://en.wikipedia.org/wiki/Butyriboletus%20appendiculatus | Butyriboletus appendiculatus is an edible pored mushroom that grows under oaks and other broad leaved trees such as beech. It is commonly known as the butter bolete. It often grows in large colonies beneath the oak trees, and is frequently found cohabiting with old oaks in ancient woodland. It is relatively rare in Britain. Its stipe and pores are often bright yellow (hence its name of butter bolete) and its flesh stains bright blue when cut or bruised.
Taxonomy
The species was first described scientifically by German polymath Jacob Christian Schäffer in 1774 as Boletus appendiculatus. American Charles Horton Peck later used the name in 1896 for a species he found in Washington, but the name was illegitimate because Schäffer's earlier usage has priority. Until 2014, it was classified in the genus Boletus. Molecular phylogenetic analysis demonstrated that it and other members of Boletus section Appendiculati were phylogenetically distinct from Boletus, and the genus Butyriboletus was created to contain them.
The specific epithet appendiculatus means "with a small appendage".
Description
Fruit bodies of Butyriboletus appendiculatus have convex to flattened, brown to yellowish brown caps measuring in diameter. They have a dry to slightly sticky surface texture that may develop cracks with age. The mushroom has very firm yellowish flesh that may slowly change blue when cut or bruised. The pores on the cap undersurface are butter yellow, and may also bruise blue, although this is less likely in young specimens. The stipe is long by thick at the top near the attachment to the cap, and ranges from thicker at the base to equal throughout, to tapered at the bottom. It is also yellow, sometimes developing brownish to reddish stains, and may have fine reticulations near the top. The spore print is dark olive-brown. Individual spores are ellipsoidal to spindle-shaped, smooth, and measure 12–15 by 3.5–5 μm.
Similar species
The European species Butyriboletus subappendiculatus is quite similar to B. appendiculatus in microscopic characters. It can be distinguished in the field by the lack of a bruising color reaction, more pallid cap colors, and growth under conifers. Also similar are Butyriboletus regius and Boletus edulis.
Edibility
The bolete is edible and considered choice by several sources, although some warn that certain individuals may have an allergic reaction to it. The earthy flavor of the mushrooms makes them suitable for soups, sauces and stews. Cooked portions will often turn blue, then gray, then return to their original yellow color.
Distribution and habitat
Butyriboletus appendiculatus is found in Europe and North America. Fruit bodies grow singly, scattered, or in groups under hardwood trees. In North America, it is more common in the Pacific Northwest region, where it often associates with live oak and tanoak.
See also
List of Boletus species
List of North American boletes
References
appendiculatus
Edible fungi
Fungi described in 1774
Fungi of Europe
Fungi of North America
Taxa named by Jacob Christian Schäffer
Fungus species
Mycorrhizal associates of oaks | Butyriboletus appendiculatus | Biology | 687 |
10,453,294 | https://en.wikipedia.org/wiki/Mendelian%20randomization | In epidemiology, Mendelian randomization (commonly abbreviated to MR) is a method using measured variation in genes to examine the causal effect of an exposure on an outcome. Under key assumptions (see below), the design reduces both reverse causation and confounding, which often substantially impede or mislead the interpretation of results from epidemiological studies.
The study design was first proposed in 1986 and subsequently described by Gray and Wheatley as a method for obtaining unbiased estimates of the effects of an assumed causal variable without conducting a traditional randomized controlled trial (the standard in epidemiology for establishing causality). These authors also coined the term Mendelian randomization.
Motivation
One of the predominant aims of epidemiology is to identify modifiable causes of health outcomes and disease especially those of public health concern. In order to ascertain whether modifying a particular trait (e.g. via an intervention, treatment or policy change) will convey a beneficial effect within a population, firm evidence that this trait causes the outcome of interest is required. However, many observational epidemiological study designs are limited in the ability to discern correlation from causation - specifically whether a particular trait causes an outcome of interest, is simply related to that outcome (but does not cause it) or is a consequence of the outcome itself. Only the former will be beneficial within a public health setting where the aim is to modify that trait to reduce the burden of disease. There are many epidemiological study designs that aim to understand relationships between traits within a population sample, each with shared and unique advantages and limitations in terms of providing causal evidence, with the "gold standard" being randomized controlled trials.
Well-known successful demonstrations of causal evidence consistent across multiple studies with different designs include the identified causal links between smoking and lung cancer, and between blood pressure and stroke. However, there have also been notable failures when exposures hypothesized to be a causal risk factor for a particular outcome were later shown by well conducted randomized controlled trials not to be causal. For instance, it was previously thought that hormone replacement therapy would prevent cardiovascular disease, but it is now known to have no such benefit. Another notable example is that of selenium and prostate cancer. Some observational studies found an association between higher circulating selenium levels (usually acquired through various foods and dietary supplements ) and lower risk of prostate cancer. However, the Selenium and Vitamin E Cancer Prevention Trial (SELECT) showed evidence that dietary selenium supplementation actually increased the risk of prostate and advanced prostate cancer and had an additional off-target effect on increasing type 2 diabetes risk. Mendelian randomization methods now support the view that high selenium status may not prevent cancer in the general population, and may even increase the risk of specific types.
Such inconsistencies between observational epidemiological studies and randomized controlled trials are likely a function of social, behavioral, or physiological confounding factors in many observational epidemiological designs, which are particularly difficult to measure accurately and difficult to control for. Moreover, randomized controlled trials (RCTs) are usually expensive, time-consuming and laborious, and many epidemiological findings cannot be ethically replicated in clinical trials. Mendelian randomization studies appear capable of resolving questions of potential confounding more efficiently than RCTs.
Definition
Mendelian randomization (MR) is fundamentally an instrumental variables estimation method hailing from econometrics. The method uses the properties of germline genetic variation (usually in the form of single nucleotide polymorphisms or SNPs) strongly associated with a putative exposure as a "proxy" or "instrument" for that exposure to test for and estimate a causal effect of the exposure on an outcome of interest from observational data. The genetic variation used will have either well-understood effects on exposure patterns (e.g. propensity to smoke heavily) or effects that mimic those produced by modifiable exposures (e.g., raised blood cholesterol). Importantly, the genotype must only affect the disease status indirectly via its effect on the exposure of interest.
As genotypes are assigned randomly when passed from parents to offspring during meiosis, groups of individuals defined by genetic variation associated with an exposure at a population level should be largely unrelated to the confounding factors that typically plague observational epidemiology studies. Germline genetic variation (i.e. that which can be inherited) is also fixed at conception and not modified by the onset of any outcome or disease, precluding reverse causation. Additionally, given improvements in modern genotyping technologies, measurement error and systematic misclassification are often low with genetic data. In this regard Mendelian randomization can be thought of as analogous to "nature's randomized controlled trial".
Mendelian randomization requires three core instrumental variable assumptions, namely that:
The genetic variant(s) being used as an instrument for the exposure is associated with the exposure. This is known as the "relevance" assumption.
There are no common causes (i.e. confounders) of the genetic variant(s) and the outcome of interest. This is known as the "independence" or "exchangeability" assumption.
There is no independent pathway between the genetic variant(s) and the outcome other than through the exposure. This is known as the "exclusion restriction" or "no horizontal pleiotropy" assumption.
To ensure that the first core assumption is validated, Mendelian randomization requires distinct associations between genetic variation and exposures of interest. These are usually obtained from genome-wide association studies though can also be candidate gene studies. The second assumption relies on there being no population substructure (e.g. geographical factors that induce an association between the genotype and outcome), mate choice that is not associated with genotype (i.e. random mating or panmixia) and no dynastic effects (i.e. where the expression of parental genotype in the parental phenotype directly affects the offspring phenotype).
Statistical analysis
Mendelian randomization is usually applied through the use of instrumental variables estimation with genetic variants acting as instruments for the exposure of interest. This can be implemented using data on the genetic variants, exposure and outcome of interest for a set of individuals in a single dataset or using summary data on the association between the genetic variants and the exposure and the association between the genetic variants and the outcome in separate datasets. The method has also been used in economic research studying the effects of obesity on earnings, and other labor market outcomes.
When a single dataset is used the methods of estimation applied are those frequently used elsewhere in instrumental variable estimation, such as two-stage least squares. If multiple genetic variants are associated with the exposure they can either be used individually as instruments or combined to create an allele score which is used as a single instrument.
Analysis using summary data often applies data from genome-wide association studies. In this case the association between genetic variants and the exposure is taken from the summary results produced by a genome-wide association study for the exposure. The association between the same genetic variants and the outcome is then taken from the summary results produced by a genome-wide association study for the outcome. These two sets of summary results are then used to obtain the MR estimate. Given the following notation:
$\gamma_j$ = effect of genetic variant $j$ on the exposure;
$\hat{\Gamma}_j$ = estimated effect of genetic variant $j$ on the outcome;
$\hat{\sigma}_j$ = estimated standard error of this estimated effect;
$\hat{\beta}$ = MR estimate of the causal effect of the exposure on the outcome;
and considering the effect of a single genetic variant, the MR estimate can be obtained from the Wald ratio:
$\hat{\beta}_j = \frac{\hat{\Gamma}_j}{\gamma_j}.$
When multiple genetic variants are used, the individual ratios for each genetic variant are combined using inverse-variance weighting, where each individual ratio is weighted by the uncertainty in its estimation. This gives the IVW estimate, which can be calculated as:
$\hat{\beta}_{\mathrm{IVW}} = \frac{\sum_j \gamma_j \hat{\Gamma}_j \hat{\sigma}_j^{-2}}{\sum_j \gamma_j^{2} \hat{\sigma}_j^{-2}}.$
Alternatively, the same estimate can be obtained from a linear regression that uses the genetic variant-outcome association as the outcome and the genetic variant-exposure association as the exposure. This linear regression is weighted by the uncertainty in the genetic variant-outcome association and does not include a constant.
These methods only provide reliable estimates of the causal effect of the exposure on the outcome under the core instrumental variable assumptions. Alternative methods are available that are robust to a violation of the third assumption, i.e. that provide reliable results under some types of horizontal pleiotropy. Additionally some biases that arise from violations of the second IV assumption, such as dynastic effects, can be overcome through the use of data which includes siblings or parents and their offspring.
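A minimal summary-data implementation of the Wald ratio and the IVW estimator is shown below as a Python sketch; the three instruments and their association estimates are hypothetical numbers used only to make the example runnable:

```python
import numpy as np

def wald_ratios(gamma, Gamma):
    """Per-variant causal estimates: variant-outcome over variant-exposure associations."""
    return np.asarray(Gamma) / np.asarray(gamma)

def ivw_estimate(gamma, Gamma, se_Gamma):
    """Inverse-variance-weighted MR estimate from summary statistics (fixed effects)."""
    gamma, Gamma, se_Gamma = map(np.asarray, (gamma, Gamma, se_Gamma))
    w = gamma ** 2 / se_Gamma ** 2            # weight of each Wald ratio
    beta = np.sum(w * (Gamma / gamma)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))             # standard error of the combined estimate
    return beta, se

# hypothetical summary statistics for three instruments
gamma    = [0.12, 0.08, 0.15]     # SNP-exposure associations
Gamma    = [0.030, 0.018, 0.040]  # SNP-outcome associations
se_Gamma = [0.010, 0.012, 0.011]  # standard errors of the SNP-outcome associations

print(wald_ratios(gamma, Gamma))
print(ivw_estimate(gamma, Gamma, se_Gamma))
```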
History
The Mendelian randomization method depends on two principles derived from the original work by Gregor Mendel on genetic inheritance. Its foundations come from Mendel's laws, namely 1) the law of segregation, in which the two allelomorphs of a heterozygote segregate completely into equal numbers of germ-cells, and 2) the law that separate pairs of allelomorphs segregate independently of one another; these laws were first published in this form in 1906 by Robert Heath Lock. Another progenitor of Mendelian randomization is Sewall Wright, who introduced path analysis, a form of causal diagram used for making causal inference from non-experimental data. The method relies on causal anchors, and in the majority of his examples the anchors were provided by Mendelian inheritance, as is the basis of MR. Another component of the logic of MR is the instrumental gene, a concept introduced by Thomas Hunt Morgan. This is important because it removed the need to understand the physiology of the gene in order to make inferences about genetic processes.
Since that time, the literature has included examples of research using molecular genetics to make inferences about modifiable risk factors, which is the essence of MR. One example is the work of Gerry Lower and colleagues in 1979, who used the N-acetyltransferase phenotype as an anchor to draw inferences about various exposures, including smoking and amine dyes, as risk factors for bladder cancer. Another example is the work of Martijn Katan (then of Wageningen University & Research, Netherlands), in which he advocated a study design using the apolipoprotein E allele as an instrumental variable anchor to study the observed relationship between low blood cholesterol levels and increased risk of cancer. In fact, the term "Mendelian randomization" was first used in print by Richard Gray and Keith Wheatley (both of Radcliffe Infirmary, Oxford, UK) in 1991 in a somewhat different context: for a method allowing instrumental variable estimation, but in relation to an approach relying on Mendelian inheritance rather than genotype. In their 2003 paper, Shah Ebrahim and George Davey Smith used the term again to describe the method of using germline genetic variants for understanding causality in an instrumental variable analysis, and it is this methodology that is now widely used and to which the meaning is ascribed. The Mendelian randomization method is now widely adopted in causal epidemiology, and the number of MR studies reported in the scientific literature has grown every year since the 2003 paper. In 2021, STROBE-MR guidelines were published to assist readers and reviewers of Mendelian randomization studies in evaluating the validity and utility of published studies.
References
Further reading
External links
Making sense of Mendelian randomisation and its use in health research
Epidemiology
Genetic epidemiology
Applications of randomness
Causal inference
Observational study | Mendelian randomization | Environmental_science | 2,380 |
8,430,768 | https://en.wikipedia.org/wiki/Globalization%20and%20disease | Globalization, the flow of information, goods, capital, and people across political and geographic boundaries, allows infectious diseases to rapidly spread around the world, while also allowing the alleviation of factors such as hunger and poverty, which are key determinants of global health. The spread of diseases across wide geographic scales has increased through history. Early diseases that spread from Asia to Europe were bubonic plague, influenza of various types, and similar infectious diseases.
In the current era of globalization, the world is more interdependent than at any other time. Efficient and inexpensive transportation has left few places inaccessible, and increased global trade in agricultural products has brought more and more people into contact with animal diseases that have subsequently jumped species barriers (see zoonosis).
Globalization intensified during the Age of Exploration, but trading routes had long been established between Asia and Europe, along which diseases were also transmitted. An increase in travel has helped spread diseases to natives of lands who had not previously been exposed. When a native population is infected with a new disease, where they have not developed antibodies through generations of previous exposure, the new disease tends to run rampant within the population.
Etiology, the modern branch of science that deals with the causes of infectious disease, recognizes five major modes of disease transmission: airborne, waterborne, bloodborne, by direct contact, and through vector (insects or other creatures that carry germs from one species to another). As humans began traveling overseas and across lands which were previously isolated, research suggests that diseases have been spread by all five transmission modes.
Travel patterns and globalization
The Age of Exploration generally refers to the period between the 15th and 17th centuries. During this time, technological advances in shipbuilding and navigation made it easier for nations to explore outside previous boundaries. Globalization brought many benefits to Europeans; for example, they discovered new products such as tea, silk, and sugar when they developed new trade routes around Africa to India and the Spice Islands in Asia, and eventually to the Americas.
In addition to trading in goods, many nations began to trade in slavery. Trading in slaves was another way by which diseases were carried to new locations and peoples, for instance, from sub-Saharan Africa to the Caribbean and the Americas. During this time, different societies began to integrate, increasing the concentration of humans and animals in certain places, which led to the emergence of new diseases as some jumped in mutation from animals to humans.
During this time sorcerers' and witch doctors' treatment of disease was often focused on magic and religion, and healing the entire body and soul, rather than focusing on a few symptoms like modern medicine. Early medicine often included the use of herbs and meditation. Based on archaeological evidence, some prehistoric practitioners in both Europe and South America used trephining, making a hole in the skull to release illness. Severe diseases were often thought of as supernatural or magical. The result of the introduction of Eurasian diseases to the Americas was that many more native peoples were killed by disease and germs than by the colonists' use of guns or other weapons. Scholars estimate that over a period of four centuries, epidemic diseases wiped out as much as 90 percent of the American indigenous populations.
In Europe during the age of exploration, diseases such as smallpox, measles and tuberculosis (TB) had already been introduced centuries before through trade with Asia and Africa. People had developed some antibodies to these and other diseases from the Eurasian continent. When the Europeans traveled to new lands, they carried these diseases with them. (Note: Scholars believe TB was already endemic in the Americas.) When such diseases were introduced for the first time to new populations of humans, the effects on the native populations were widespread and deadly. The Columbian Exchange, referring to Christopher Columbus's first contact with the native peoples of the Caribbean, began the trade of animals, and plants, and unwittingly began an exchange of diseases.
It was not until the 1800s that humans began to recognize the existence and role of germs and microbes in relation to disease. Although many thinkers had ideas about germs, it was not until French doctor Louis Pasteur spread his theory about germs, and the need for washing hands and maintaining sanitation (particularly in medical practice), that anyone listened. Many people were quite skeptical, but on May 22, 1881, Pasteur persuasively demonstrated the validity of his germ theory of disease with an early example of vaccination. The anthrax vaccine was administered to 25 sheep while another 25 were used as a control. On May 31, 1881, all of the sheep were exposed to anthrax. While every sheep in the control group died, each of the vaccinated sheep survived. Pasteur's experiment would become a milestone in disease prevention. His findings, in conjunction with other vaccines that followed, changed the way globalization affected the world.
Effects of globalization on disease in the modern world
Modern modes of transportation allow more people and products to travel around the world at a faster pace; they also open the airways to the transcontinental movement of infectious disease vectors. One example is the West Nile virus. It is believed that this disease reached the United States via "mosquitoes that crossed the ocean by riding in airplane wheel wells and arrived in New York City in 1999." With the use of air travel, people are able to go to foreign lands, contract a disease, and not have any symptoms of illness until after they get home, having exposed others to the disease along the way. Another example of the power of modern transportation to increase the spread of disease is the 1918 Spanish flu pandemic. Even in the early 20th century, global transportation was able to spread the virus because networks of transport and trade were already global. The virus was found among the crews of ships and trains, and infected employees spread the virus everywhere they traveled. As a result, an estimated 50–100 million people died in this global pandemic.
As medicine has progressed, many vaccines and cures have been developed for some of the worst diseases (plague, syphilis, typhus, cholera, malaria) that affect people. But because disease organisms evolve very rapidly, even with vaccines there is difficulty providing full immunity to many diseases. Since vaccines are made partly from the virus itself, when an unknown virus is introduced into the environment it takes time for the medical community to formulate an effective vaccine. The lack of operational and functional research and data, which would provide a quicker and better-planned pathway to a reliable vaccine, makes for a lengthy vaccine development timeline. Even though frameworks are set up and preparedness plans are used to reduce COVID-19 cases, a vaccine is the only way to ensure complete immunization. Some systems, like the Immunization Information System (IIS), help provide preliminary structure for quick responses to outbreaks and unknown viruses. These systems draw on past data and research from earlier vaccine development successes. Finding vaccines at all for some diseases remains extremely difficult. Without vaccines, the world remains vulnerable to infectious diseases.
Evolution of disease presents a major threat in modern times. For example, the current "swine flu" or H1N1 virus is a new strain of an old form of flu, known for centuries as Asian flu based on its origin on that continent. From 1918 to 1920, a post-World War I global influenza epidemic killed an estimated 50–100 million people, including half a million in the United States alone. H1N1 is a virus that has evolved from and partially combined with portions of avian, swine, and human flu.
Globalization has increased the spread of infectious diseases from South to North, but also the risk of non-communicable diseases by transmission of culture and behavior from North to South. It is important to target and reduce the spread of infectious diseases in developing countries. However, addressing the risk factors of non-communicable diseases and lifestyle risks in the South that cause disease, such as use or consumption of tobacco, alcohol, and unhealthy foods, is important as well.
Even during pandemics, it is vital to recognize economic globalization as a catalyst in the spread of the coronavirus. Economic activity is especially damaged by expanded lockdown regulations and trade blockades. As transportation globalized, economies expanded, and formerly inward-looking economies saw great financial opportunities in global trade. With increased interconnectivity among economies and the globalization of the world economy, the spread of the coronavirus raised the risk of global recession. The coronavirus pandemic caused many economic disruptions, which in turn disrupted supply chains and the flow of goods. As transportation modes are relevant to the spread of infectious diseases, it is important also to recognize the economy as the motor of this globalized transmission system.
Specific diseases
Plague
Bubonic plague is a variant of the deadly flea-borne disease plague, caused by the enterobacterium Yersinia pestis, which devastated human populations beginning in the 14th century. Bubonic plague is primarily spread by fleas that live on the black rat, an animal that originated in South Asia and spread to Europe by the 6th century. The rat became common in cities and villages, traveling by ship with explorers. A human would become infected after being bitten by an infected flea. The first sign of infection with bubonic plague is swelling of the lymph nodes and the formation of buboes. These buboes would first appear in the groin or armpit area, and would often ooze pus or blood. Eventually infected individuals would become covered with dark splotches caused by bleeding under the skin. The symptoms would be accompanied by a high fever, and within four to seven days of infection, more than half of those affected would die.
The first recorded outbreak of plague occurred in China in the 1330s, a time when China was engaged in substantial trade with western Asia and Europe. The plague reached Europe in October 1347. It was thought to have been brought into Europe through the port of Messina, Sicily, by a fleet of Genoese trading ships from Kaffa, a seaport on the Crimean peninsula. When the ship left port in Kaffa, many of the inhabitants of the town were dying, and the crew was in a hurry to leave. By the time the fleet reached Messina, all the crew were either dead or dying; the rats that took passage with the ship slipped unnoticed to shore and carried the disease with them and their fleas.
Within Europe, the plague struck port cities first, then followed people along both sea and land trade routes. It raged through Italy into France and the British Isles. It was carried over the Alps into Switzerland, and eastward into Hungary and Russia. For a time during the 14th and 15th centuries, the plague would recede. Every ten to twenty years, it would return. Later epidemics, however, were never as widespread as the earlier outbreaks, when 60% of the population died.
The third plague pandemic emerged in Yunnan province of China in the mid-nineteenth century. It spread east and south through China, reaching Guangzhou (Canton) and Hong Kong in 1894, where it entered the global maritime trade routes. Plague reached Singapore and Bombay in 1896. China lost an estimated 2 million people between plague's reappearance in the mid-nineteenth century and its retreat in the mid-twentieth. In India, between 1896 and the 1920s, plague claimed an estimated 12 million lives, most in the Bombay province. Plague spread into the countries around the Indian Ocean, the Red Sea and the Mediterranean. From China it also spread eastward to Japan, the Philippines and Hawaii, and in Central Asia it spread overland into the Russian territories from Siberia to Turkistan. By 1901 there had been outbreaks of plague on every continent, and new plague reservoirs would produce regular outbreaks over the ensuing decades.
Measles
Measles is a highly contagious airborne virus spread by contact with infected oral and nasal fluids. When a person with measles coughs or sneezes, they release microscopic particles into the air. During the 4- to 12-day incubation period, an infected individual shows no symptoms, but as the disease progresses, the following symptoms appear: runny nose, cough, red eyes, extremely high fever and a rash.
Measles is an endemic disease, meaning that it has been continually present in a community, and many people developed resistance. In populations that have not been exposed to measles, exposure to the new disease can be devastating. In 1529, a measles outbreak in Cuba killed two-thirds of the natives who had previously survived smallpox. Two years later measles was responsible for the deaths of half the indigenous population of Honduras, and ravaged Mexico, Central America, and the Inca civilization.
Historically, measles was very prevalent throughout the world, as it is highly contagious. According to the National Immunization Program, 90% of people were infected with measles by age 15, acquiring immunity to further outbreaks. Until a vaccine was developed in 1963, measles was considered to be deadlier than smallpox. Vaccination reduced the number of reported occurrences by 98%. Major epidemics have predominantly occurred in unvaccinated populations, particularly among nonwhite Hispanic and African American children under 5 years old. In 2000 a group of experts determined that measles was no longer endemic in the United States. The majority of cases that occur are among immigrants from other countries.
Typhus
Typhus is caused by rickettsia, which is transmitted to humans through lice. The main vector for typhus is the rat flea. Flea bites and infected flea faeces entering the respiratory tract are the two most common methods of transmission. In areas where rats are not common, typhus may also be transmitted through cat and opossum fleas. The incubation period of typhus is 7–14 days. The symptoms start with a fever, then headache, rash, and eventually stupor. Spontaneous recovery occurs in 80–90% of victims.
The first outbreak of typhus was recorded in 1489. Historians believe that troops from the Balkans, hired by the Spanish army, brought it to Spain with them. By 1490 typhus traveled from the eastern Mediterranean into Spain and Italy, and by 1494, it had swept across Europe. From 1500 to 1914, more soldiers were killed by typhus than from all the combined military actions during that time. It was a disease associated with the crowded conditions of urban poverty and refugees as well. Finally, during World War I, governments instituted preventative delousing measures among the armed forces and other groups, and the disease began to decline. The creation of antibiotics has allowed the disease to be controlled within two days of taking a 200 mg dose of tetracycline.
Syphilis
Syphilis is a sexually transmitted disease that causes open sores, delirium and rotting skin, and is characterized by genital ulcers. Syphilis can also do damage to the nervous system, brain and heart. The disease can be transmitted from mother to child.
The origins of syphilis are unknown, and some historians argue that it descended from a twenty-thousand-year-old African zoonosis. Other historians place its emergence in the New World, arguing that the crews of Columbus's ships first brought the disease to Europe. The first recorded case of syphilis occurred in Naples in 1495, after King Charles VIII of France besieged the city of Naples, Italy. The soldiers, and the prostitutes who followed their camps, came from all corners of Europe. When they went home, they took the disease with them and spread it across the continent.
Smallpox
Smallpox is a highly contagious disease caused by the Variola virus. There are four variations of smallpox; variola major, variola minor, haemorrhagic, and malignant, with the most common being variola major and variola minor. Symptoms of the disease include hemorrhaging, blindness, backache and vomiting, which generally occur shortly after the 12- to 17-day incubation period. The virus begins to attack skin cells, and eventually leads to an eruption of pimples that cover the whole body. As the disease progresses, the pimples fill up with pus or merge. This merging results in a sheet that can detach the bottom layer from the top layer of skin. The disease is easily transmitted through airborne pathways (coughing, sneezing, and breathing), as well as through contaminated bedding, clothing or other fabrics.
It is believed that smallpox first emerged over 3000 years ago, probably in India or Egypt. There have been numerous recorded devastating epidemics throughout the world, with high losses of life.
Smallpox was a common disease in Eurasia in the 15th century, and was spread by explorers and invaders. After Columbus landed on the island of Hispaniola during his second voyage in 1493, local people started to die of a virulent infection. Before the smallpox epidemic started, more than one million indigenous people had lived on the island; afterward, only ten thousand had survived.
During the 16th century, Spanish soldiers introduced smallpox by contact with natives of the Aztec capital Tenochtitlan. A devastating epidemic broke out among the indigenous people, killing thousands.
In 1617, smallpox reached Massachusetts, probably brought by earlier explorers to Nova Scotia, Canada. By 1638 the disease had broken out among people in Boston, Massachusetts. In 1721 people fled the city after an outbreak, but the residents spread the disease to others throughout the Thirteen Colonies. Smallpox broke out in six separate epidemics in the United States through 1968.
The smallpox vaccine was developed in 1798 by Edward Jenner. By 1979 the disease had been completely eradicated, with no new outbreaks. The WHO stopped providing vaccinations, and by 1986 vaccination was no longer considered necessary for anyone in the world except in the event of a future outbreak.
Leprosy
Leprosy, also known as Hansen's Disease, is caused by a bacillus, Mycobacterium leprae. It is a chronic disease with an incubation period of up to five years. Symptoms often include irritation or erosion of the skin, and effects on the peripheral nerves, mucosa of the upper respiratory tract and eyes. The most common sign of leprosy is pale reddish spots on the skin that lack sensation.
Leprosy originated in India, more than four thousand years ago. It was prevalent in ancient societies in China, Egypt and India, and was transmitted throughout the world by various traveling groups, including Roman Legionnaires, Crusaders, Spanish conquistadors, Asian seafarers, European colonists, and Arab, African, and American slave traders. Some historians believe that Alexander the Great's troops brought leprosy from India to Europe during the 3rd century BC. With the help of the crusaders and other travelers, leprosy reached epidemic proportions by the 13th century.
Once detected, leprosy can be cured using multi-drug therapy, composed of two or three antibiotics, depending on the type of leprosy. In 1991 the World Health Assembly began an attempt to eliminate leprosy. By 2005, 116 of 122 countries were reported to be free of leprosy.
Malaria
On Nov. 6, 1880, Alphonse Laveran discovered that malaria (then called "marsh fever") was caused by a protozoan parasite; mosquitoes were later shown to carry and transmit the parasite. Malaria is a protozoan infectious disease that is generally transmitted to humans by mosquitoes between dusk and dawn. The European variety, known as "vivax" after the Plasmodium vivax parasite, causes a relatively mild, yet chronically aggravating disease. The West African variety is caused by the sporozoan parasite Plasmodium falciparum, and results in a severely debilitating and deadly disease.
Malaria was once common in parts of the world where it has now disappeared, such as most of Europe (it was particularly widespread in the Roman Empire) and North America. In some parts of England, mortality due to malaria was comparable to that of sub-Saharan Africa today. Although William Shakespeare was born at the beginning of a colder period called the "Little Ice Age", he knew the ravages of the disease well enough to mention it in eight of his plays. Plasmodium vivax persisted until 1958 in the polders of Belgium and the Netherlands.
In the 1500s, it was probably European settlers and their slaves who brought malaria to the American continent (Columbus is known to have suffered from the disease before his arrival in the new lands). Spanish Jesuit missionaries observed that the Indians living near Loxa, Peru, used powdered cinchona bark to treat fevers; however, there is no reference to malaria in the medical literature of the Maya or Aztecs. The use of the bark of the "fever tree" was introduced into European medicine by Jesuit missionaries, among them Bernabé Cobo, who experimented with it in 1632, and through their exports the precious powder also came to be called "Jesuit's powder". A 2012 study of thousands of genetic markers in Plasmodium falciparum samples confirmed the African origin of the parasite in South America (Europeans themselves had been affected by this disease via Africa): between the mid-sixteenth and mid-nineteenth centuries it followed the two main routes of the slave trade, the first leading to the north of South America (Colombia) under the Spanish, and the second, larger one leading further south (Brazil) under the Portuguese.
Parts of Third World countries are more affected by malaria than the rest of the world. For instance, many inhabitants of sub-Saharan Africa are affected by recurring attacks of malaria throughout their lives. In many areas of Africa, there is limited running water. The residents' use of wells and cisterns provides many sites for the breeding of mosquitoes and spread of the disease. Mosquitoes use areas of standing water like marshes, wetlands, and water drums to breed.
Tuberculosis
The bacterium that causes tuberculosis, Mycobacterium tuberculosis, is generally spread when an infected person coughs and another person inhales the bacteria. Once inhaled TB frequently grows in the lungs, but can spread to any part of the body. Although TB is highly contagious, in most cases the human body is able to fend off the bacteria. But, TB can remain dormant in the body for years, and become active unexpectedly. If and when the disease does become active in the body, it can multiply rapidly, causing the person to develop many symptoms including cough (sometimes with blood), night sweats, fever, chest pains, loss of appetite and loss of weight. This disease can occur in both adults and children and is especially common among those with weak or undeveloped immune systems.
Tuberculosis (TB) has been one of history's greatest killers, taking the lives of over 3 million people annually. It has been called the "white plague". According to the WHO, approximately fifty percent of people infected with TB today live in Asia. It is the most prevalent, life-threatening infection among AIDS patients. It has increased in areas where HIV seroprevalence is high.
Air travel and the other methods of travel which have made global interaction easier have increased the spread of TB across different societies. Luckily, the BCG vaccine was developed, which prevents TB meningitis and miliary TB in childhood. However, the vaccine does not provide substantial protection against the more virulent forms of TB found among adults. Most forms of TB can be treated with antibiotics to kill the bacteria. The two antibiotics most commonly used are rifampicin and isoniazid. There are dangers, however, of a rise of antibiotic-resistant TB. The TB treatment regimen is lengthy, and difficult for poor and disorganized people to complete, increasing resistance of bacteria. Antibiotic-resistant TB is also known as "multidrug-resistant tuberculosis." "Multidrug-resistant tuberculosis" is a pandemic that is on the rise. Patients with MDR-TB are mostly young adults who are not infected with HIV and do not have other existing illnesses. Due to the lack of health care infrastructure in underdeveloped countries, there is a debate as to whether treating MDR-TB will be cost effective or not. The reason is the high cost of "second-line" antituberculosis medications. It has been argued that the reason the cost of treating patients with MDR-TB is high is because there has been a shift in focus in the medical field, in particular the rise of AIDS, which is now the world's leading infectious cause of death. Nonetheless, it is still important to put in the effort to help and treat patients with "multidrug-resistant tuberculosis" in poor countries.
HIV/AIDS
HIV and AIDS are among the newest and deadliest diseases. According to the World Health Organization, it is unknown where the HIV virus originated, but it appeared to move from animals to humans. It may have been isolated within many groups throughout the world. It is believed that HIV arose from another, less harmful virus, that mutated and became more virulent. The first two AIDS/HIV cases were detected in 1981. As of 2013, an estimated 1.3 million persons in the United States were living with HIV or AIDS, almost 110,000 in the UK, and an estimated 35 million people worldwide were living with HIV.
Despite efforts in numerous countries, awareness and prevention programs have not been effective enough to reduce the numbers of new HIV cases in many parts of the world, where it is associated with high mobility of men, poverty and sexual mores among certain populations. Uganda has had an effective program, however. Even in countries where the epidemic has a very high impact, such as Eswatini and South Africa, a large proportion of the population do not believe they are at risk of becoming infected. Even in countries such as the UK, there is no significant decline in certain at-risk communities. 2014 saw the greatest number of new diagnoses in gay men, the equivalent of nine being diagnosed a day.
Initially, HIV prevention methods focused primarily on preventing the sexual transmission of HIV through behaviour change, following the ABC approach: "Abstinence, Be faithful, use a Condom". However, by the mid-2000s, it became evident that effective HIV prevention requires more than that and that interventions need to take into account underlying socio-cultural, economic, political, legal and other contextual factors.
Ebola
The Ebola outbreak, which was the 26th outbreak since 1976, started in Guinea in March 2014. The WHO warned that the number of Ebola patients could rise to 20,000, and said that $489m (£294m) would be needed to contain Ebola within six to nine months. The outbreak was accelerating. Medecins sans Frontieres had just opened a new Ebola hospital in Monrovia, and after one week it was already at its capacity of 120 patients. It said that the number of patients seeking treatment at its new Monrovia centre was increasing faster than it could handle, both in terms of the number of beds and the capacity of the staff, adding that it was struggling to cope with the caseload in the Liberian capital. Lindis Hurum, MSF's emergency coordinator in Monrovia, said that it was a humanitarian emergency and that a full-scale humanitarian response was needed. Brice de la Vinge, MSF's director of operations, said that it was not until five months after the declaration of the Ebola outbreak that serious discussions started about international leadership and coordination, which he said was not acceptable.
Leptospirosis
Leptospirosis, also known as the "rat fever" or "field fever" is an infection caused by Leptospira. Symptoms can range from none to mild such as headaches, muscle pains, and fevers; to severe with bleeding from the lungs or meningitis. Leptospira is transmitted by both wild and domestic animals, most commonly by rodents. It is often transmitted by animal urine or by water or soil containing animal urine coming into contact with breaks in the skin, eyes, mouth, or nose.
The countries with the highest reported incidence are located in the Asia-Pacific region (Seychelles, India, Sri Lanka and Thailand), with incidence rates over 10 per 100,000 people, as well as in Latin America and the Caribbean (Trinidad and Tobago, Barbados, Jamaica, El Salvador, Uruguay, Cuba, Nicaragua and Costa Rica). However, the rise in global travel and eco-tourism has led to dramatic changes in the epidemiology of leptospirosis, and travelers from around the world have become exposed to the threat of leptospirosis. Despite decreasing prevalence of leptospirosis in endemic regions, previously non-endemic countries are now reporting increasing numbers of cases due to recreational exposure. International travelers engaged in adventure sports are directly exposed to numerous infectious agents in the environment and now comprise a growing proportion of cases worldwide.
Disease X
The World Health Organization (WHO) proposed the name Disease X in 2018 as a placeholder for a currently unknown pathogen, in order to focus preparations for, and predictions of, a major pandemic.
COVID-19
The virus outbreak originated in Wuhan, China. It was first detected in December 2019, which is why scientists called the illness COVID-19 (coronavirus disease 2019). The outbreak caused a public health crisis in Wuhan that evolved into a global pandemic. The World Health Organization officially declared it a pandemic on March 11, 2020.
As of May 2020, scientists believe that COVID-19, a zoonotic disease, is linked to the wet markets in China. Epidemiologists have also warned of the virus's contagiousness. Specialists have stated that the precise ways in which SARS-CoV-2 spreads are still not fully understood. The generally accepted notion among virologists and experts is that inhaling droplets from an infected person is the most likely way SARS-CoV-2 is spreading. As more people traveled and more goods and capital were traded globally, COVID-19 cases slowly began to appear all over the world.
Some of the symptoms that COVID-19 patients can experience are shortness of breath (which might be a sign of pneumonia), cough, fever, and diarrhea. The three most recorded and common symptoms are fever, tiredness, and coughing, as reported by the World Health Organization. COVID-19 is also categorized among the viruses that can show no symptoms in the carrier. Asymptomatic COVID-19 carriers transmitted the virus to many people who eventually did show symptoms, in some cases fatal.
The first cases were detected in Wuhan, China, the origin of the outbreak. On December 31, 2019, the Wuhan Municipal Health Commission announced to the World Health Organization that a number of pneumonia cases previously detected in Wuhan, Hubei Province, were under investigation. Proper identification of a novel coronavirus was developed and reported, making the pneumonia cases in China the first reported cases of COVID-19. As of November 25, 2021, there have been around 260 million confirmed COVID-19 cases around the world. Confirmed deaths as a result of COVID-19 are over 5 million globally. Over 235 million of the 260 million confirmed COVID-19 cases have successfully recovered. Countries that showed a lack of preparation and awareness in January and February 2020 are now reporting the highest numbers of COVID-19 cases. The United States leads the worldwide count with almost 49 million confirmed cases. Deaths in the United States have crossed 798,000, maintaining the highest death count of any country. Brazil, Russia, Spain, the UK, and Italy have all suffered because of the increase in cases, leading to an impaired health system unable to attend to so many sick people at one time.
The first-ever confirmed case of COVID-19 in the United States was in Washington State on January 21, 2020. It was a man who had just returned from China. Following this incident, on January 31, 2020, Trump announced that travel to and from China would be restricted, effective February 2, 2020. On March 11, 2020, Trump issued an executive order to restrict travel from Europe, except for the UK and Ireland. On May 24, 2020, Trump banned travel from Brazil, as Brazil became the new center of the coronavirus pandemic. International restrictions were put in place to reduce the number of people entering a country who might be carrying the virus. This is because governments understand that, with the accessibility of travel and free trade, any person can travel and carry the virus to a new environment. Recommendations to U.S. travelers have been set by the State Department. As of March 19, 2020, some countries have been marked Level 4 "do not travel". The coronavirus pandemic travel restrictions have affected almost 93% of the global population. Increased travel restrictions help multilateral and bilateral health organizations control the number of confirmed cases of COVID-19.
Non-communicable disease
Globalization can benefit people with non-communicable diseases such as heart problems or mental health problems. Global trade and rules set forth by the World Trade Organization can actually benefit the health of people by making their incomes higher, allowing them to afford better health care, but they make many non-communicable diseases more likely as well. Also the national income of a country, mostly obtained by trading on the global market, is important because it dictates how much a government spends on health care for its citizens. It also has to be acknowledged that an expansion in the definition of disease often accompanies development, so the net effect is not clearly beneficial due to this and other effects of increased affluence. Metabolic syndrome is one obvious example, although poorer countries have not yet experienced this and are still burdened by the infectious diseases discussed above.
Economic globalization and disease
Globalization is multifaceted in its implementation and its underlying framework and ideology. Infectious diseases spread mainly as a result of the modern globalization of almost all industries and sectors. Economic globalization is the interconnectivity of world economies and the interdependency of internal and external supply chains. With the advancement of science and technology, economic globalization is enabled even further. Economic factors have come to be defined by global boundaries rather than national ones. The cost of economic activities has been significantly decreased as a result of advancements in technology and science, slowly creating an interconnected economy lacking centralized integration. As economies become more integrated, a financial or economic disruption anywhere can trigger a global recession. Collateral damage is further observed with the increase in integrated economic activity. Countries lean more on economic benefits than health benefits, which leads to miscalculated and ill-reported health issues.
See also
Chagas disease
Eradication of infectious diseases
Global catastrophic risk
Infectious disease
List of epidemics
Pandemic
Rodentology
Transmission (medicine)
Tropical disease
Virgin soil epidemic
Wildlife smuggling and zoonoses
References
Biological globalization
Global health
Infectious diseases | Globalization and disease | Environmental_science | 7,216 |
6,855,491 | https://en.wikipedia.org/wiki/Mauritius%20Time | Mauritius Time, or MUT, is the time zone used by the Indian Ocean island nation of Mauritius. The zone is four hours ahead of UTC (UTC+04:00).
Mauritius does not currently use daylight saving time, although it has been used in the past. Daylight saving time was first introduced in Mauritius in 1982 but was discontinued the following year. It was re-introduced in 2008 but has not been used again since. In 2008, the period started at 2 am UTC+5 (1 am UTC+4) on 26 October 2008 (the last Sunday in October), and ended at 2 am UTC+5 (1 am UTC+4) on 29 March 2009 (the last Sunday in March). Mauritius is in the Southern Hemisphere, so summer begins towards the end of the year.
References
Time zones
Mauritius | Mauritius Time | Physics | 175 |
1,505,381 | https://en.wikipedia.org/wiki/Numerical%20weather%20prediction | Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs.
Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires.
Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions.
A more fundamental problem lies in the chaotic nature of the partial differential equations that describe the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with accurate input data and a flawless model. In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, and the effects of terrain. In an effort to quantify the large amount of inherent uncertainty remaining in numerical predictions, ensemble forecasts have been used since the 1990s to help gauge the confidence in the forecast, and to obtain useful results farther into the future than otherwise possible. This approach analyzes multiple forecasts created with an individual forecast model or multiple models.
History
The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who used procedures originally developed by Vilhelm Bjerknes to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950, based on a highly simplified approximation to the atmospheric governing equations. In 1954, Carl-Gustav Rossby's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast (i.e., a routine prediction for practical use). Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force, Navy and Weather Bureau. In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model. Following Phillips' work, several groups began working to create general circulation models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.
As computers have become more powerful, the size of the initial data sets has increased and newer atmospheric models have been developed to take advantage of the added available computing power. These newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of limited area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s. By the early 1980s models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts.
The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface. As such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground was developed in the 1970s and 1980s, known as model output statistics (MOS). Starting in the 1990s, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible.
Initialization
The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, terrain maps available at resolutions down to globally are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmit them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. The data are then used in the model as the starting point for a forecast.
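As a rough illustration of the data assimilation idea described above, the following minimal Python sketch blends a single background (first-guess) value with one observation, weighting each by an assumed error variance; the function name, variable names and numbers are invented for the example and do not correspond to any particular operational system.

```python
# Minimal sketch of statistical data assimilation for a single scalar:
# the analysis is the background value nudged toward the observation,
# with a gain set by the (assumed) background and observation error variances.
def analysis(background, observation, var_background, var_obs):
    gain = var_background / (var_background + var_obs)
    return background + gain * (observation - background)

# A 6-hour forecast (background) of 280.0 K and a radiosonde report of 281.2 K,
# trusted about equally, give an analysis roughly halfway between the two.
print(analysis(280.0, 281.2, var_background=1.0, var_obs=1.0))  # -> 280.6
```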
A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere. Information from weather satellites is used where traditional data sources are not available. Commerce provides pilot reports along aircraft routes and ship reports along shipping routes. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent. Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific.
Computation
An atmospheric model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations—along with the ideal gas law—are used to evolve the density, pressure, and potential temperature scalar fields and the air velocity (wind) vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation high-resolution models as well. The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical.
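To make the grid-point (finite-difference) approach mentioned above concrete, here is a minimal Python sketch of a single upwind finite-difference step for one-dimensional linear advection. It is a toy illustration only, not code from any forecast model, and all parameter values are arbitrary.

```python
# One upwind finite-difference step for 1-D linear advection,
# du/dt + c * du/dx = 0, on a periodic domain (c > 0 assumed).
import numpy as np

def upwind_step(u, c, dx, dt):
    """Advance the field u by one time step with a first-order upwind scheme."""
    dudx = (u - np.roll(u, 1)) / dx    # backward difference in space
    return u - c * dt * dudx           # forward (Euler) step in time

# Example: a smooth bump advected to the right.
x = np.linspace(0.0, 1.0, 101)
u = np.exp(-200.0 * (x - 0.3) ** 2)
u_next = upwind_step(u, c=1.0, dx=x[1] - x[0], dt=0.005)
```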
These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future; the time increment for this prediction is called a time step. This future atmospheric state is then used as the starting point for another application of the predictive equations to find new rates of change, and these new rates of change predict the atmosphere at a yet further time step into the future. This time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes. The global models are run at varying times into the future. The UKMET Unified Model is run six days into the future, while the European Centre for Medium-Range Weather Forecasts' Integrated Forecast System and Environment Canada's Global Environmental Multiscale Model both run out to ten days into the future, and the Global Forecast System model run by the Environmental Modeling Center is run sixteen days into the future. The visual output produced by a model solution is known as a prognostic chart, or prog.
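The link between grid spacing and the stable time step can be illustrated with a Courant–Friedrichs–Lewy (CFL)-type estimate. The helper below is a back-of-the-envelope sketch; the wind speed, grid spacings and Courant number are assumed values chosen only for illustration.

```python
# Rough CFL-style estimate of the largest stable explicit time step for a
# given grid spacing and maximum wind speed (all numbers illustrative).
def stable_time_step(grid_spacing_m, max_wind_ms, max_courant=0.8):
    """Return a time step (seconds) keeping the Courant number below max_courant."""
    return max_courant * grid_spacing_m / max_wind_ms

print(stable_time_step(25_000.0, 100.0))   # ~25 km grid, 100 m/s winds -> 200 s
print(stable_time_step(3_000.0, 100.0))    # ~3 km regional grid -> 24 s
```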
Parameterization
Some meteorological processes are too small-scale or too complex to be explicitly included in numerical weather prediction models. Parameterization is a procedure for representing these processes by relating them to variables on the scales that the model resolves. For example, the gridboxes in weather and climate models have sides that are between and in length. A typical cumulus cloud has a scale of less than , and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized, by processes of various sophistication. In the earliest models, if a column of air within a model gridbox was conditionally unstable (essentially, the bottom was warmer and moister than the top) and the water vapor content at any point within the column became saturated then it would be overturned (the warm, moist air would begin rising), and the air in that vertical column mixed. More sophisticated schemes recognize that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sizes between can explicitly represent convective clouds, although they need to parameterize cloud microphysics which occur at a smaller scale. The formation of large-scale (stratus-type) clouds is more physically based; they form when the relative humidity reaches some prescribed value. The cloud fraction can be related to this critical value of relative humidity.
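A toy version of the relative-humidity-based cloud-fraction diagnostic mentioned above might look like the following; the linear ramp and the critical value of 0.8 are arbitrary illustrative choices, not a specific operational scheme.

```python
# Toy diagnostic cloud-fraction parameterization: no cloud below a critical
# relative humidity, full cover at saturation, linear in between.
import numpy as np

def cloud_fraction(rh, rh_crit=0.8):
    """Diagnose fractional cloud cover from grid-box relative humidity (0-1)."""
    rh = np.clip(rh, 0.0, 1.0)
    frac = (rh - rh_crit) / (1.0 - rh_crit)
    return np.clip(frac, 0.0, 1.0)

print(cloud_fraction(np.array([0.5, 0.85, 1.0])))   # -> [0.   0.25 1.  ]
```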
The amount of solar radiation reaching the ground, as well as the formation of cloud droplets occur on the molecular scale, and so they must be parameterized before they can be included in the model. Atmospheric drag produced by mountains must also be parameterized, as the limitations in the resolution of elevation contours produce significant underestimates of the drag. This method of parameterization is also done for the surface flux of energy between the ocean and the atmosphere, in order to determine realistic sea surface temperatures and type of sea ice found near the ocean's surface. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere, and thus it is important to parameterize their contribution to these processes. Within air quality models, parameterizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes.
Domains
The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models (also known as limited-area models, or LAMs) allow for the use of finer grid spacing than global models because the available computational resources are focused on a specific area instead of being spread over the globe. This allows regional models to resolve explicitly smaller-scale meteorological phenomena that cannot be represented on the coarser grid of a global model. Regional models use a global model to specify conditions at the edge of their domain (boundary conditions) in order to allow systems from outside the regional model domain to move into its area. Uncertainty and errors within regional models are introduced by the global model used for the boundary conditions of the edge of the regional model, as well as errors attributable to the regional model itself.
The vertical coordinate is handled in various ways. Lewis Fry Richardson's 1922 model used geometric height () as the vertical coordinate. Later models substituted the geometric coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations. This correlation between coordinate systems can be made since pressure decreases with height through the Earth's atmosphere. The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar (about ) level, and thus was essentially two-dimensional. High-resolution models—also called mesoscale models—such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates. This coordinate system receives its name from the independent variable used to scale atmospheric pressures with respect to the pressure at the surface, and in some cases also with the pressure at the top of the domain.
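The sigma coordinate described above is simply pressure normalized by surface pressure (optionally measured from a fixed model-top pressure), so that model levels follow the terrain. A minimal sketch, with illustrative pressures in hPa:

```python
# Sigma coordinate: normalized pressure, equal to 1 at the surface and 0 at the
# model top, regardless of terrain height. Values below are illustrative.
def sigma(pressure, surface_pressure, top_pressure=0.0):
    return (pressure - top_pressure) / (surface_pressure - top_pressure)

print(sigma(500.0, 1000.0))  # 500 hPa above a sea-level point  -> 0.5
print(sigma(500.0, 700.0))   # 500 hPa above a high plateau     -> ~0.714
```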
Model output statistics
Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions, statistical methods have been developed to attempt to correct the forecasts. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models in the late 1960s.
Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect. MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as model biases. Because MOS is run after its respective global or regional model, its production is known as post-processing. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness, and surface winds.
Ensembles
In 1963, Edward Lorenz discovered the chaotic nature of the fluid dynamics equations involved in weather forecasting. Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days, making it impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of forecast skill. Furthermore, existing observation networks have poor coverage in some regions (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers. These uncertainties limit forecast model accuracy to about five or six days into the future.
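The quoted doubling time implies roughly exponential growth of initial-condition errors. The few lines below simply evaluate that growth for an assumed 0.1-unit analysis error and a five-day doubling time (both numbers are illustrative, not taken from any particular model).

```python
# Exponential growth of an initial-condition error with a five-day doubling time.
def error_after(days, initial_error=0.1, doubling_days=5.0):
    return initial_error * 2.0 ** (days / doubling_days)

for days in (5, 10, 14):
    print(days, round(error_after(days), 2))   # -> 0.2, 0.4, ~0.7
```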
Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed using an ensemble of stochastic Monte Carlo simulations to produce means and variances for the state of the atmosphere. Although this early example of an ensemble showed skill, in 1974 Cecil Leith showed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere.
Since the 1990s, ensemble forecasts have been used operationally (as routine forecasts) to account for the stochastic nature of weather processes – that is, to resolve their inherent uncertainty. This method involves analyzing multiple forecasts created with an individual forecast model by using different physical parametrizations or varying initial conditions. Starting in 1992 with ensemble forecasts prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. The ECMWF model, the Ensemble Prediction System, uses singular vectors to simulate the initial probability density, while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding. The UK Met Office runs global and regional ensemble forecasts where perturbations to initial conditions are used by 24 ensemble members in the Met Office Global and Regional Ensemble Prediction System (MOGREPS) to produce 24 different forecasts.
In a single model-based approach, the ensemble forecast is usually evaluated in terms of an average of the individual forecasts concerning one forecast variable, as well as the degree of agreement between various forecasts within the ensemble system, as represented by their overall spread. Ensemble spread is diagnosed through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small to include the weather that actually occurs, which can lead to forecasters misdiagnosing model uncertainty; this problem becomes particularly severe for forecasts of the weather about ten days in advance. When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters perceive more confidence in the ensemble mean, and the forecast in general. Despite this perception, a spread-skill relationship is often weak or not found, as spread-error correlations are normally less than 0.6, and only under special circumstances range between 0.6–0.7.
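In code, the basic single-model ensemble diagnostics discussed above reduce to a mean and a standard deviation across members at each point; the member values below are invented purely for illustration.

```python
# Ensemble mean (headline forecast) and spread (uncertainty) at three locations.
import numpy as np

members = np.array([
    [12.1, 15.3, 9.8],    # forecast temperature from member 1
    [11.4, 16.0, 10.5],   # member 2
    [13.0, 14.7, 9.1],    # member 3
    [12.5, 15.6, 11.2],   # member 4
])

ensemble_mean = members.mean(axis=0)
ensemble_spread = members.std(axis=0, ddof=1)
print(ensemble_mean, ensemble_spread)
```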
In the same way that many forecasts from a single model can be used to form an ensemble, multiple models may also be combined to produce an ensemble forecast. This approach is called multi-model ensemble forecasting, and it has been shown to improve forecasts when compared to a single model-based approach. Models within a multi-model ensemble can be adjusted for their various biases, which is a process known as superensemble forecasting. This type of forecast significantly reduces errors in model output.
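One way to sketch a superensemble is as a linear regression of past multi-model forecasts against verifying observations, whose fitted weights (including an intercept that absorbs systematic bias) are then applied to new forecasts. The numbers and the exact regression setup below are invented for illustration and are not a description of any operational system.

```python
# Toy superensemble: regress past forecasts from three models on observations,
# then combine new forecasts with the fitted weights.
import numpy as np

past_forecasts = np.array([        # columns: model A, model B, model C
    [20.1, 21.5, 19.2],
    [18.4, 19.9, 17.8],
    [22.3, 23.0, 21.1],
    [16.9, 18.2, 16.0],
])
observations = np.array([19.8, 18.0, 21.9, 16.5])

# Least-squares weights with an intercept term for bias correction.
X = np.column_stack([np.ones(len(observations)), past_forecasts])
weights, *_ = np.linalg.lstsq(X, observations, rcond=None)

new_forecasts = np.array([21.0, 22.1, 20.3])
superensemble = weights[0] + new_forecasts @ weights[1:]
print(superensemble)
```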
Applications
Air quality modeling
Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. The concentration of pollutants in the atmosphere is determined by their transport, or mean velocity of movement through the atmosphere, their diffusion, chemical transformation, and ground deposition. In addition to pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion. Meteorological conditions such as thermal inversions can prevent surface air from rising, trapping pollutants near the surface, which makes accurate forecasts of such events crucial for air quality modeling. Urban air quality models require a very fine computational mesh, requiring the use of high-resolution mesoscale weather models; in spite of this, the quality of numerical weather guidance is the main uncertainty in air quality forecasts.
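A drastically simplified box-model sketch of the processes listed above (emission into a well-mixed volume, ventilation by the wind standing in for transport, and a first-order loss standing in for chemistry and deposition) is shown below; all parameter values are arbitrary and the model is purely illustrative.

```python
# Toy single-box pollutant budget: dc/dt = emission - (ventilation + loss) * c.
def pollutant_step(c, emission, wind_speed, box_length, loss_rate, dt):
    """Advance concentration c (e.g. ug/m3 per arbitrary units) by dt seconds."""
    ventilation = wind_speed / box_length          # 1/s, air exchange rate
    dcdt = emission - (ventilation + loss_rate) * c
    return c + dt * dcdt

c = 0.0
for _ in range(3600):                              # one hour of 1-second steps
    c = pollutant_step(c, emission=0.05, wind_speed=2.0,
                       box_length=5000.0, loss_rate=1e-4, dt=1.0)
print(round(c, 1))
```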
Climate modeling
A General Circulation Model (GCM) is a mathematical model that can be used in computer simulations of the global circulation of a planetary atmosphere or ocean. An atmospheric general circulation model (AGCM) is essentially the same as a global numerical weather prediction model, and some (such as the one used in the UK Unified Model) can be configured for both short-term weather forecasts and longer-term climate predictions. Along with sea ice and land-surface components, AGCMs and oceanic GCMs (OGCM) are key components of global climate models, and are widely applied for understanding the climate and projecting climate change. For aspects of climate change, a range of man-made chemical emission scenarios can be fed into the climate models to see how an enhanced greenhouse effect would modify the Earth's climate. Versions designed for climate applications with time scales of decades to centuries were originally created in 1969 by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. When run for multiple decades, computational limitations mean that the models must use a coarse grid that leaves smaller-scale interactions unresolved.
Ocean surface modeling
The transfer of energy between the wind blowing over the surface of an ocean and the ocean's upper layer is an important element in wave dynamics. The spectral wave transport equation is used to describe the change in wave spectrum over changing topography. It simulates wave generation, wave movement (propagation within a fluid), wave shoaling, refraction, energy transfer between waves, and wave dissipation. Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface.
Tropical cyclone forecasting
Tropical cyclone forecasting also relies on data provided by numerical weather models. Three main classes of tropical cyclone guidance models exist: Statistical models are based on an analysis of storm behavior using climatology, and correlate a storm's position and date to produce a forecast that is not based on the physics of the atmosphere at the time. Dynamical models are numerical models that solve the governing equations of fluid flow in the atmosphere; they are based on the same principles as other limited-area numerical weather prediction models but may include special computational techniques such as refined spatial domains that move along with the cyclone. Models that use elements of both approaches are called statistical-dynamical models.
In 1978, the first hurricane-tracking model based on atmospheric dynamics—the movable fine-mesh (MFM) model—began operating. Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance which occurred with increased computational power, it was not until the 1980s when numerical weather prediction showed skill, and until the 1990s when it consistently outperformed statistical or simple dynamical models. Predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill over dynamical guidance.
Wildfire modeling
On a molecular scale, there are two main competing reaction processes involved in the degradation of cellulose, or wood fuels, in wildfires. When there is a low amount of moisture in a cellulose fiber, volatilization of the fuel occurs; this process will generate intermediate gaseous products that will ultimately be the source of combustion. When moisture is present—or when enough heat is being carried away from the fiber, charring occurs. The chemical kinetics of both reactions indicate that there is a point at which the level of moisture is low enough—and/or heating rates high enough—for combustion processes to become self-sufficient. Consequently, changes in wind speed, direction, moisture, temperature, or lapse rate at different levels of the atmosphere can have a significant impact on the behavior and growth of a wildfire. Since the wildfire acts as a heat source to the atmospheric flow, the wildfire can modify local advection patterns, introducing a feedback loop between the fire and the atmosphere.
A simplified two-dimensional model for the spread of wildfires that used convection to represent the effects of wind and terrain, as well as radiative heat transfer as the dominant method of heat transport led to reaction–diffusion systems of partial differential equations. More complex models join numerical weather models or computational fluid dynamics models with a wildfire component which allow the feedback effects between the fire and the atmosphere to be estimated. The additional complexity in the latter class of models translates to a corresponding increase in their computer power requirements. In fact, a full three-dimensional treatment of combustion via direct numerical simulation at scales relevant for atmospheric modeling is not currently practical because of the excessive computational cost such a simulation would require. Numerical weather models have limited forecast skill at spatial resolutions under , forcing complex wildfire models to parameterize the fire in order to calculate how the winds will be modified locally by the wildfire, and to use those modified winds to determine the rate at which the fire will spread locally.
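The reaction–diffusion formulation mentioned above can be illustrated in one dimension with an explicit finite-difference step, in which diffusion stands in for heat transport and a temperature-dependent source term stands in for combustion. The scheme, parameter values and ignition threshold below are arbitrary illustrative choices, not a validated fire model.

```python
# 1-D reaction-diffusion sketch: dT/dt = kappa * d2T/dx2 + R(T),
# where the reaction term only "burns" above an ignition threshold.
import numpy as np

def fire_step(T, dx, dt, kappa=1.0, reaction_rate=5.0, ignition_T=0.5):
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
    reaction = reaction_rate * T * (1.0 - T) * (T > ignition_T)
    return T + dt * (kappa * lap + reaction)

x = np.linspace(0.0, 50.0, 501)
T = np.where(np.abs(x - 25.0) < 1.0, 1.0, 0.0)    # hot spot ignites the middle
for _ in range(2000):
    T = fire_step(T, dx=x[1] - x[0], dt=0.002)     # fronts spread outward
```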
See also
Atmospheric physics
Atmospheric thermodynamics
Tropical cyclone forecast model
Types of atmospheric models
References
Further reading
From Turbulence to sCl
External links
NOAA Supercomputer upgrade
Air Resources Laboratory
Fleet Numerical Meteorology and Oceanography Center
European Centre for Medium-Range Weather Forecasts
UK Met Office
Computational science
Numerical climate and weather models
Applied mathematics
Weather prediction
Computational fields of study | Numerical weather prediction | Physics,Mathematics,Technology | 5,123 |
67,349,603 | https://en.wikipedia.org/wiki/Mass%20spectrometry%20at%20Swansea | Swansea University has had a long established history of development and innovation in mass spectrometry and chromatography.
Mass Spectrometry Research Unit
In 1975, John H. Beynon was appointed the Royal Society Research Professor and established the Mass Spectrometry Research Unit at Swansea University (at that time known as the University College of Swansea). In 1986, Dai Games moved from Cardiff University to become the Unit's new Director.
In 1984, the first observation of the doubly charged helium dimer, He22+, was made at the unit. It is isoelectronic with molecular hydrogen but carries far more energy, about 3310 kJ per mole.
National Mass Spectrometry Service
A grant of £670,000 was awarded in 1985 by the then Science and Engineering Research Council (SERC) to establish a national Mass Spectrometry Center at Swansea University to provide an analytical service to British universities. It was officially opened in April 1987 by Lord Callaghan. In 2002, the center was enlarged and the new laboratories were opened by Lord Morgan. Following a successful £3,000,000 contract renewal, Edwina Hart, the Minister for Economy, Science and Transport, officially re-opened the EPSRC National Research Facility after refurbishment in 2015.
Biomolecular Analysis Mass Spectrometry
A Biomolecular Analysis Mass Spectrometry (BAMS) facility was officially opened in 2003, headed by Professor Newton and Dr Dudley. It was a collaborative entity between the Department of Biological Sciences and the Medical School. It focused on the study of nucleosides, nucleotides and cyclic nucleotides.
Stable isotope mass spectrometry
Stable isotope mass spectrometry is conducted in the Department of Geography, and was recently used by the Landmark Trust to determine very precisely the age of the timber from Llwyn Celyn farmhouse to the year 1420.
References
External links
National Mass Spectrometry Service
EPSRC National Research Facilities
Mass spectrometry
Chromatography
Swansea University | Mass spectrometry at Swansea | Physics,Chemistry | 407 |
26,773,036 | https://en.wikipedia.org/wiki/Minor%20metals | Minor metals is a widely used term in the metal industry that generally refers to metals which are a by-product of smelting a base metal. Minor metals do not have a real exchange, and are not traded on the London Metal Exchange (LME).
Characteristics
Two characteristics are regularly associated with minor metals: (1) their global production is relatively small in comparison to base metals, and (2) they are predominantly extracted as by-products of base metals. However, due to the diversity of the metals often classified as minor metals, there is still much discussion about what exactly defines a minor metal. Minor metals have a wide variety of uses, including pharmaceutical, semiconductor, automotive, glass, battery, solar and many others. Many of these minor metals are critical to 21st century technology. They are more difficult to extract from their naturally occurring host minerals than base metals.
Industry
According to the Minor Metals Trade Association (MMTA), its members alone account for over US$10 billion in annual trade of minor metal products.
Production
Recent research based on data from the United States Geological Survey (USGS) indicates that China is not only the leading primary producer of minor metals, supplying about 40 percent of all production, but that China's share of global production increased 34 percent between 2000 and 2009.
Applications
Minor metals are used in a wide diversity of end-use applications, from capacitors for consumer electronics (tantalum) and metallic cathodes for rechargeable batteries (cobalt) to photovoltaic solar cells (silicon) and semiconductor materials (gallium and indium). The primary end-uses of minor metals can also help to categorize the metals into four groups:
Electronic metals (e.g. gallium and germanium)
Power metals (e.g. molybdenum and zirconium)
Structural metals (e.g. chromium and vanadium)
Performance metals (e.g. titanium and rhenium)
Minor metals
Metals often classified as minor metals include: antimony (Sb), arsenic (As), beryllium (Be), bismuth (Bi), cadmium (Cd), cerium (Ce), chromium (Cr), cobalt (Co), gadolinium (Gd), gallium (Ga), germanium (Ge), hafnium (Hf), indium (In), lithium (Li), magnesium (Mg), manganese (Mn), mercury (Hg), molybdenum (Mo), neodymium (Nd), niobium (Nb), iridium (Ir), osmium (Os), praseodymium (Pr), rhenium (Re), rhodium (Rh), ruthenium (Ru), samarium (Sm), selenium (Se), silicon (Si), tantalum (Ta), tellurium (Te), titanium (Ti), tungsten (W), vanadium (V), and zirconium (Zr).
See also
Noble metal
Sprott Molybdenum Participation Corporation
Uranium Participation Corporation
Vital Materials
References
External links
LME website
Metals | Minor metals | Chemistry | 676 |
42,083,967 | https://en.wikipedia.org/wiki/K-theory%20of%20a%20category | In algebraic K-theory, the K-theory of a category C (usually equipped with some kind of additional data) is a sequence of abelian groups Ki(C) associated to it. If C is an abelian category, there is no need for extra data, but in general it only makes sense to speak of K-theory after specifying on C a structure of an exact category, or of a Waldhausen category, or of a dg-category, or possibly some other variants. Thus, there are several constructions of those groups, corresponding to various kinds of structures put on C. Traditionally, the K-theory of C is defined to be the result of a suitable construction, but in some contexts there are more conceptual definitions. For instance, the K-theory is a 'universal additive invariant' of dg-categories and small stable ∞-categories.
The motivation for this notion comes from the algebraic K-theory of rings. For a ring R, Daniel Quillen introduced two equivalent ways to define the higher K-theory. The plus construction expresses Ki(R) in terms of R directly, but it is hard to prove properties of the result, including basic ones like functoriality. The other way is to consider the exact category of projective modules over R and to set Ki(R) to be the K-theory of that category, defined using the Q-construction. This approach proved to be more useful, and could be applied to other exact categories as well. Later, Friedhelm Waldhausen extended the notion of K-theory even further, to very different kinds of categories, including the category of topological spaces.
K-theory of Waldhausen categories
In algebra, the S-construction is a construction in algebraic K-theory that produces a model that can be used to define higher K-groups. It is due to Friedhelm Waldhausen and concerns a category with cofibrations and weak equivalences; such a category is called a Waldhausen category and generalizes Quillen's exact category. A cofibration can be thought of as analogous to a monomorphism, and a category with cofibrations is one in which, roughly speaking, monomorphisms are stable under pushouts. According to Waldhausen, the "S" was chosen to stand for Graeme B. Segal.
Unlike the Q-construction, which produces a topological space, the S-construction produces a simplicial set.
Details
The arrow category of a category C is a category whose objects are morphisms in C and whose morphisms are squares in C. Let a finite ordered set $[n] = \{0 < 1 < \cdots < n\}$ be viewed as a category in the usual way.
Let C be a category with cofibrations and let $S_n C$ be the category whose objects are functors $f\colon \operatorname{Ar}[n] \to C$ such that, for $i \le j \le k$, $f(i,i) = 0$, the map $f(i,j) \to f(i,k)$ is a cofibration, and $f(j,k)$ is the pushout of $f(i,j) \to f(i,k)$ along $f(i,j) \to f(j,j)$. The category $S_n C$ defined in this manner is itself a category with cofibrations. One can therefore iterate the construction, forming the sequence $S_\bullet C,\ S_\bullet S_\bullet C,\ S_\bullet S_\bullet S_\bullet C,\ \ldots$; this sequence (after taking the subcategories of weak equivalences and geometric realization) is a spectrum called the K-theory spectrum of C.
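In low degrees the construction is easy to describe explicitly; the identifications below are standard, although the labels $A$ and $B$ are chosen here only for exposition:

\[
S_0 C \simeq \{0\}, \qquad S_1 C \simeq C, \qquad
S_2 C \simeq \bigl\{\, A \rightarrowtail B \twoheadrightarrow B/A \,\bigr\}.
\]

That is, an object of $S_2 C$ is a cofibration together with a chosen quotient, which is precisely an extension in the sense used in the additivity theorem below.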
The additivity theorem
Most basic properties of algebraic K-theory of categories are consequences of the following important theorem. There are versions of it in all available settings. Here is a statement for Waldhausen categories. Notably, it is used to show that the sequence of spaces obtained by the iterated S-construction is an Ω-spectrum.
Let C be a Waldhausen category. The category of extensions $E(C)$ has as objects the sequences $A \rightarrowtail B \twoheadrightarrow B/A$ in C, where the first map is a cofibration, and the second is a quotient map, i.e. a pushout of the first one along the zero map A → 0. This category has a natural Waldhausen structure, and the forgetful functor from $E(C)$ to C × C, sending such a sequence to $(A, B/A)$, respects it. The additivity theorem says that the induced map on K-theory spaces $K(E(C)) \to K(C) \times K(C)$ is a homotopy equivalence.
For dg-categories the statement is similar. Let C be a small pretriangulated dg-category with a semiorthogonal decomposition $\langle C_1, C_2 \rangle$. Then the map of K-theory spectra $K(C) \to K(C_1) \oplus K(C_2)$ is a homotopy equivalence. In fact, K-theory is a universal functor satisfying this additivity property and Morita invariance.
Category of finite sets
Consider the category of pointed finite sets. This category has an object for every natural number k, and the morphisms in this category are the functions which preserve the zero element. A theorem of Barratt, Priddy and Quillen says that the algebraic K-theory of this category is a sphere spectrum.
Miscellaneous
More generally in abstract category theory, the K-theory of a category is a type of decategorification in which a set is created from the equivalence classes of objects in a stable (∞,1)-category, where the elements of the set inherit an abelian group structure from the exact sequences in the category.
Group completion method
The Grothendieck group construction is a functor from the category of rings to the category of abelian groups. The higher K-theory should then be a functor from the category of rings, but to a category of higher objects such as simplicial abelian groups.
Topological Hochschild homology
Waldhausen introduced the idea of a trace map from the algebraic K-theory of a ring to its Hochschild homology; by way of this map, information can be obtained about the K-theory from the Hochschild homology. Bökstedt factorized this trace map, leading to the idea of a functor known as the topological Hochschild homology of the ring's Eilenberg–MacLane spectrum.
K-theory of a simplicial ring
If R is a constant simplicial ring, then this is the same thing as K-theory of a ring.
See also
Volodin space
Cotriple homology
Notes
References
Further reading
For the recent ∞-category approach, see
Category theory
K-theory | K-theory of a category | Mathematics | 1,253 |
241,026 | https://en.wikipedia.org/wiki/Up%20quark | The up quark or u quark (symbol: u) is the lightest of all quarks, a type of elementary particle, and a significant constituent of matter. It, along with the down quark, forms the neutrons (one up quark, two down quarks) and protons (two up quarks, one down quark) of atomic nuclei. It is part of the first generation of matter, has an electric charge of + e and a bare mass of . Like all quarks, the up quark is an elementary fermion with spin , and experiences all four fundamental interactions: gravitation, electromagnetism, weak interactions, and strong interactions. The antiparticle of the up quark is the up antiquark (sometimes called antiup quark or simply antiup), which differs from it only in that some of its properties, such as charge have equal magnitude but opposite sign.
Its existence (along with that of the down and strange quarks) was postulated in 1964 by Murray Gell-Mann and George Zweig to explain the Eightfold Way classification scheme of hadrons. The up quark was first observed by experiments at the Stanford Linear Accelerator Center in 1968.
History
In the beginnings of particle physics (first half of the 20th century), hadrons such as protons, neutrons and pions were thought to be elementary particles. However, as new hadrons were discovered, the 'particle zoo' grew from a few particles in the early 1930s and 1940s to several dozens of them in the 1950s. The relationships between each of them were unclear until 1961, when Murray Gell-Mann and Yuval Ne'eman (independently of each other) proposed a hadron classification scheme called the Eightfold Way, or in more technical terms, SU(3) flavor symmetry.
This classification scheme organized the hadrons into isospin multiplets, but the physical basis behind it was still unclear. In 1964, Gell-Mann and George Zweig (independently of each other) proposed the quark model, then consisting only of up, down, and strange quarks. However, while the quark model explained the Eightfold Way, no direct evidence of the existence of quarks was found until 1968 at the Stanford Linear Accelerator Center. Deep inelastic scattering experiments indicated that protons had substructure, and that protons made of three more-fundamental particles explained the data (thus confirming the quark model).
At first people were reluctant to describe the three bodies as quarks, instead preferring Richard Feynman's parton description, but over time the quark theory became accepted (see November Revolution).
Mass
Despite being extremely common, the bare mass of the up quark is not well determined, but probably lies between 1.8 and . Lattice QCD calculations give a more precise value: .
When found in mesons (particles made of one quark and one antiquark) or baryons (particles made of three quarks), the 'effective mass' (or 'dressed' mass) of quarks becomes greater because of the binding energy caused by the gluon field between the quarks (see mass–energy equivalence). The bare mass of the up quark is so small that it cannot be deduced straightforwardly from hadron masses, because relativistic effects have to be taken into account.
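As a rough worked comparison, taking representative (approximate, not exact) values of about 2 MeV/c² for the up quark and 5 MeV/c² for the down quark:

\[
m_u + m_u + m_d \approx (2 + 2 + 5)\ \text{MeV}/c^2 \approx 9\ \text{MeV}/c^2
\;\ll\; m_p \approx 938\ \text{MeV}/c^2 ,
\]

so only about 1% of the proton's mass comes from the bare masses of its valence quarks; the remainder arises from the energy of the gluon field and the kinetic energy of the quarks.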
See also
Down quark
Isospin
Quark model
Quantum Mechanics
References
Further reading
Quarks
Elementary particles | Up quark | Physics | 723 |
66,373,531 | https://en.wikipedia.org/wiki/Taphrina%20tosquinetii | Taphrina tosquinetii is a fungal plant pathogen that causes large blisters on both surfaces of the leaves of alder.
Description of the gall
The ascomycete induces a gall that distorts the leaves of alder. The leaves are slightly thickened, brittle and incurved with blister-like growth on both sides, which can increase the size of an infected leaf to twice the normal size. Later the leaf tissue becomes pale and thin with a whitish bloom when the asci develop. Species infected include common alder (Alnus glutinosa), grey alder (Alnus incana) and Alnus x pubescens.
References
Taphrinomycetes
Fungi described in 1866
Fungi of Europe
Galls
Taxa named by Edmond Tulasne
Fungus species | Taphrina tosquinetii | Biology | 167 |
30,820,704 | https://en.wikipedia.org/wiki/Evolution%20by%20gene%20duplication | Evolution by gene duplication is an event by which a gene or part of a gene can have two identical copies that can not be distinguished from each other. This phenomenon is understood to be an important source of novelty in evolution, providing for an expanded repertoire of molecular activities. The underlying mutational event of duplication may be a conventional gene duplication mutation within a chromosome, or a larger-scale event involving whole chromosomes (aneuploidy) or whole genomes (polyploidy). A classic view, owing to Susumu Ohno, which is known as Ohno model, he explains how duplication creates redundancy, the redundant copy accumulates beneficial mutations which provides fuel for innovation. Knowledge of evolution by gene duplication has advanced more rapidly in the past 15 years due to new genomic data, more powerful computational methods of comparative inference, and new evolutionary models.
Theoretical models
Several models exist that try to explain how new cellular functions of genes and their encoded protein products evolve through the mechanism of duplication and divergence. Although each model can explain certain aspects of the evolutionary process, the relative importance of each aspect is still unclear. This article only presents the theoretical models currently discussed in the literature. Review articles on this topic can be found at the bottom.
In the following, a distinction will be made between explanations for the short-term effects (preservation) of a gene duplication and its long-term outcomes.
Preservation of gene duplicates
Since a gene duplication occurs in only one cell, either in a single-celled organism or in the germ cell of a multi-cellular organism, its carrier (i.e. the organism) usually has to compete against other organisms that do not carry the duplication. If the duplication disrupts the normal functioning of an organism, the organism has a reduced reproductive success (or low fitness) compared to its competitors and will most likely die out rapidly. If the duplication has no effect on fitness, it might be maintained in a certain proportion of a population. In certain cases, the duplication of a certain gene might be immediately beneficial, providing its carrier with a fitness advantage.
Dosage effect or gene amplification
The so-called 'dosage' of a gene refers to the amount of mRNA transcripts and subsequently translated protein molecules produced from a gene per time and per cell.
If the amount of gene product is below its optimal level, there are two kinds of mutations that can increase dosage: increases in gene expression by promoter mutations and increases in gene copy number by gene duplication.
The more copies of the same (duplicated) gene a cell has in its genome, the more gene product can be produced simultaneously. Assuming that no regulatory feedback loops exist that automatically down-regulate gene expression, the amount of gene product (or gene dosage) will increase with each additional gene copy, until some upper limit is reached or sufficient gene product is available.
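As a minimal illustration of this dosage argument, the sketch below treats total gene product as scaling linearly with copy number up to a hard ceiling; the per-copy expression rate and the upper limit are arbitrary placeholder values rather than measurements from any particular system.

```python
# Minimal sketch (illustrative only): gene dosage as a function of copy number,
# assuming each copy is expressed at the same rate and total output saturates
# at an upper limit (e.g. limited transcriptional machinery or feedback).
def gene_dosage(copies, per_copy_rate=1.0, upper_limit=3.5):
    """Total gene product per cell per unit time for a given copy number."""
    return min(copies * per_copy_rate, upper_limit)

for n in range(1, 6):
    print(n, "copies ->", gene_dosage(n))
```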
Furthermore, under positive selection for increased dosage, a duplicated gene could be immediately advantageous and quickly increase in frequency in a population. In this case, no further mutations would be necessary to preserve (or retain) the duplicates. However, at a later time, such mutations could still occur, leading to genes with different functions (see below).
Gene dosage effects after duplication can also be harmful to a cell and the duplication might therefore be selected against. For instance, when the metabolic network within a cell is fine-tuned so that it can only tolerate a certain amount of a certain gene product, gene duplication would offset this balance.
Activity reducing mutations
In cases of gene duplications that have no immediate fitness effect, a retention of the duplicate copy could still be possible if both copies accumulate mutations that, for instance, reduce the functional efficiency of the encoded proteins without abolishing this function altogether. In such a case, the molecular function (e.g. protein/enzyme activity) would still be available to the cell to at least the extent that was available before duplication (now provided by proteins expressed from two gene loci, instead of one gene locus). However, the accidental loss of one gene copy might then be detrimental, since the remaining copy with reduced activity would almost certainly provide less activity than was available before duplication.
Long-term fate of duplicated genes
If a gene duplication is preserved, the most likely fate is that random mutations in one duplicate gene copy will eventually cause the gene to become non-functional. Such non-functional remnants of genes, with detectable sequence homology, can sometimes still be found in genomes and are called pseudogenes.
Functional divergence between the duplicate genes is another possible fate. There are several theoretical models that try to explain the mechanisms leading to divergence:
Neofunctionalization
The term neofunctionalization was first coined by Force et al. 1999, but it refers to the general mechanism proposed by Ohno 1970. The long-term outcome of neofunctionalization is that one copy retains the original (pre-duplication) function of the gene, while the second copy acquires a distinct function. It is also known as the MDN model, "mutation during non-functionality". The major criticism of this model is the high likelihood of non-functionalization, i.e. the loss of all functionality of the gene, due to random accumulation of mutations.
IAD model
IAD stands for 'innovation, amplification, divergence' and aims to explain the evolution of new gene functions while existing functions are preserved.
Innovation, i.e. the establishment of a new molecular function, can occur via side-activities of genes and thus proteins; this is called enzyme promiscuity. For example, enzymes can sometimes catalyse more than just one reaction, even though they usually are optimised for catalysing just one. Such promiscuous protein functions, if they provide an advantage to the host organism, can then be amplified with additional copies of the gene. Such rapid amplification is best known from bacteria, which often carry certain genes on smaller non-chromosomal DNA molecules (called plasmids) that are capable of rapid replication. Any gene on such a plasmid is also replicated, and the additional copies amplify the expression of the encoded protein, and with it any promiscuous function. After several such copies have been made and passed on to descendant bacterial cells, a few of these copies might accumulate mutations that eventually lead to a side-activity becoming the main activity.
The IAD model has been tested in the laboratory using a bacterial enzyme with a dual function as the starting point. This enzyme is capable of catalysing not only its original reaction, but also a side reaction normally carried out by another enzyme.
By allowing bacteria carrying this enzyme to evolve under selection to improve both activities (original and side) over several generations, it was shown that the ancestral bifunctional gene with weak activities (innovation) first evolved by gene amplification, increasing expression of the weak enzyme (amplification), and later accumulated beneficial mutations that improved one or both activities and could be passed on to the next generation (divergence).
Subfunctionalization
"Subfunctionalization" was also first coined by Force et al. 1999. This model requires the ancestral (pre-duplication) gene to have several functions (sub-functions), which the descendant (post-duplication) genes specialise on in a complementary fashion. There are now at least two different models that are labeled as subfunctionalization, "DDC" and "EAC".
DDC model
DDC stands for "duplication-degeneration-complementation". This model was first introduced by Force et al. 1999. The first step is gene duplication. The gene duplication in itself is neither advantageous nor deleterious, so it will remain at low frequency within a population of individuals that do not carry a duplication. According to DDC, this period of neutral drift may eventually lead to the complementary retention of sub-functions distributed over the two gene copies. This comes about by activity-reducing (degenerative) mutations in both duplicates, accumulating over many generations. Taken together, the two mutated genes provide the same set of functions as the ancestral gene (before duplication). However, if one of the genes were removed, the remaining gene would not be able to provide the full set of functions and the host cell would likely suffer some detrimental consequences. Therefore, at this later stage of the process, there is a strong selection pressure against the loss of either of the two gene copies that arose by gene duplication. The duplication becomes permanently established in the genome of the host cell or organism, as illustrated by the sketch below.
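The logic of the DDC model can be shown with a small simulation. The following sketch is purely illustrative: the choice of two sub-functions, the loss probability, the number of generations, and the rule that a sub-function can only be lost while the other copy still provides it (a stand-in for selection against losing the function outright) are all simplifying assumptions, not parameters taken from the literature.

```python
# Minimal sketch (illustrative only) of the DDC idea: two duplicate gene copies
# each start with the same two sub-functions; degenerative mutations knock out
# individual sub-functions at random, but a sub-function can only be lost while
# the other copy still provides it. The pair ends in complementary
# subfunctionalization if each copy keeps a different sub-function.
import random

def simulate_pair(loss_prob=0.1, generations=200, rng=random):
    copies = [{"f1", "f2"}, {"f1", "f2"}]          # two duplicates, two sub-functions
    for _ in range(generations):
        for gene in copies:
            other = copies[0] if gene is copies[1] else copies[1]
            for sub in list(gene):
                # degenerative mutation, tolerated only if the other copy covers it
                if sub in other and rng.random() < loss_prob:
                    gene.discard(sub)
    covered = copies[0] | copies[1]
    # True if both sub-functions are still covered but split between the copies
    return covered == {"f1", "f2"} and copies[0] != covered and copies[1] != covered

trials = 10_000
share = sum(simulate_pair() for _ in range(trials)) / trials
print(f"fraction ending in complementary subfunctionalization: {share:.2f}")
```

Under these assumptions the simulation simply counts how often the pair ends up with the two sub-functions split complementarily between the copies, rather than with one copy retaining everything while the other degenerates into a pseudogene.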
EAC model
EAC stands for "Escape from Adaptive Conflict". This name first appeared in a publication by Hittinger and Carroll 2007.
The evolutionary process described by the EAC model actually begins before the gene duplication event. A singleton (not duplicated) gene evolves towards two beneficial functions simultaneously. This creates an "adaptive conflict" for the gene, since it is unlikely to execute each individual function with maximum efficiency. The intermediate evolutionary result could be a multi-functional gene and after a gene duplication its sub-functions could be carried out by specialised descendants of the gene. The result would be the same as under the DDC model, two functionally specialised genes (paralogs). In contrast to the DDC model, the EAC model puts more emphasis on the multi-functional pre-duplication state of the evolving genes and gives a slightly different explanation as to why the duplicated multi-functional genes would benefit from additional specialisation after duplication (because of the adaptive conflict of the multi-functional ancestor that needs to be resolved). Under EAC there is an assumption of a positive selection pressure driving evolution after gene duplication, whereas the DDC model only requires neutral ("undirected") evolution to take place, i.e. degeneration and complementation.
See also
Pseudogenes
Molecular evolution
Gene duplication
Functional divergence
Mutation
References
Molecular evolution
Molecular genetics
Mutation | Evolution by gene duplication | Chemistry,Biology | 2,083 |
19,057,150 | https://en.wikipedia.org/wiki/Intersection%20form%20of%20a%204-manifold | In mathematics, the intersection form of an oriented compact 4-manifold is a special symmetric bilinear form on the 2nd (co)homology group of the 4-manifold. It reflects much of the topology of the 4-manifolds, including information on the existence of a smooth structure.
Definition using intersection
Let $M$ be a closed 4-manifold (PL or smooth).
Take a triangulation $T$ of $M$.
Denote by $T^*$ the dual cell subdivision.
Represent classes $a, b \in H_2(M;\mathbb{Z}_2)$ by 2-cycles $\alpha$ and $\beta$ modulo 2 viewed as unions of 2-simplices of $T$ and of $T^*$, respectively.
Define the intersection form modulo 2
\[ \cap_{M,2}\colon H_2(M;\mathbb{Z}_2) \times H_2(M;\mathbb{Z}_2) \to \mathbb{Z}_2 \]
by the formula
\[ a \cap_{M,2} b = |\alpha \cap \beta| \bmod 2 . \]
This is well-defined because the intersection of a cycle and a boundary consists of an even number of points (by definition of a cycle and a boundary).
If $M$ is oriented, analogously (i.e. counting intersections with signs) one defines the intersection form on the 2nd homology group
\[ Q_M = \cap_M\colon H_2(M;\mathbb{Z}) \times H_2(M;\mathbb{Z}) \to \mathbb{Z} . \]
Using the notion of transversality, one can state the following results (which constitute an equivalent definition of the intersection form).
If classes $a, b \in H_2(M;\mathbb{Z}_2)$ are represented by closed surfaces (or 2-cycles modulo 2) $\alpha$ and $\beta$ meeting transversely, then $a \cap_{M,2} b = |\alpha \cap \beta| \bmod 2$.
If $M$ is oriented and classes $a, b \in H_2(M;\mathbb{Z})$ are represented by closed oriented surfaces (or 2-cycles) $\alpha$ and $\beta$ meeting transversely, then every intersection point in $\alpha \cap \beta$ has the sign $+1$ or $-1$ depending on the orientations, and $Q_M(a, b)$ is the sum of these signs.
Definition using cup product
Using the notion of the cup product $\smile$, one can give a dual (and so an equivalent) definition as follows.
Let $M$ be a closed oriented 4-manifold (PL or smooth).
Define the intersection form on the 2nd cohomology group
\[ Q_M\colon H^2(M;\mathbb{Z}) \times H^2(M;\mathbb{Z}) \to \mathbb{Z} \]
by the formula
\[ Q_M(a, b) = \langle a \smile b, [M] \rangle , \]
where $[M] \in H_4(M;\mathbb{Z})$ is the fundamental class.
The definition of a cup product is dual (and so is analogous) to the above definition of the intersection form on homology of a manifold, but is more abstract.
However, the definition of a cup product generalizes to complexes and topological manifolds.
This is an advantage for mathematicians who are interested in complexes and topological manifolds (not only in PL and smooth manifolds).
When the 4-manifold $M$ is smooth, then in de Rham cohomology, if $a$ and $b$ are represented by 2-forms $\alpha$ and $\beta$, then the intersection form can be expressed by the integral
\[ Q_M(a, b) = \int_M \alpha \wedge \beta , \]
where $\wedge$ is the wedge product.
The definition using the cup product has a simpler analogue modulo 2 (which works for non-orientable manifolds).
Of course one does not have this in de Rham cohomology.
Properties and applications
Poincaré duality states that the intersection form is unimodular (up to torsion).
By Wu's formula, a spin 4-manifold must have even intersection form, i.e., is even for every x. For a simply-connected smooth 4-manifold (or more generally one with no 2-torsion residing in the first homology), the converse holds.
The signature of the intersection form is an important invariant. A 4-manifold bounds a 5-manifold if and only if it has zero signature. Van der Blij's lemma implies that a spin 4-manifold has signature a multiple of eight. In fact, Rokhlin's theorem implies that a smooth compact spin 4-manifold has signature a multiple of 16.
Michael Freedman used the intersection form to classify simply-connected topological 4-manifolds. Given any unimodular symmetric bilinear form over the integers, Q, there is a simply-connected closed 4-manifold M with intersection form Q. If Q is even, there is only one such manifold. If Q is odd, there are two, with at least one (possibly both) having no smooth structure. Thus two simply-connected closed smooth 4-manifolds with the same intersection form are homeomorphic. In the odd case, the two manifolds are distinguished by their Kirby–Siebenmann invariant.
Donaldson's theorem states a smooth simply-connected 4-manifold with positive definite intersection form has the diagonal (scalar 1) intersection form. So Freedman's classification implies there are many non-smoothable 4-manifolds, for example the E8 manifold.
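For illustration, the intersection forms of some standard simply-connected closed oriented 4-manifolds are as follows (these are classical examples, stated here with the usual sign conventions rather than derived):

\[
Q_{S^4} = 0, \qquad
Q_{\mathbb{CP}^2} = (1), \qquad
Q_{\overline{\mathbb{CP}}{}^2} = (-1), \qquad
Q_{S^2 \times S^2} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
Q_{K3} = 2(-E_8) \oplus 3\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
\]

In particular, $Q_{\mathbb{CP}^2}$ is odd and positive definite, consistent with Donaldson's theorem, while the K3 surface realizes an even indefinite form of signature $-16$, consistent with Rokhlin's theorem.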
References
4-manifolds
Geometric topology | Intersection form of a 4-manifold | Mathematics | 836 |
25,791,203 | https://en.wikipedia.org/wiki/ALL-IN-1 | ALL-IN-1 was an office automation product developed and sold by Digital Equipment Corporation in the 1980s. It was one of the first purchasable off the shelf electronic mail products. It was later known as Office Server V3.2 for OpenVMS Alpha and OpenVMS VAX systems before being discontinued.
Overview
ALL-IN-1 was advertised as an office automation system including functionality in Electronic Messaging, Word Processing and Time Management. It offered an application development platform and customization capabilities that ranged from scripting to code-level integration.
ALL-IN-1 was designed and developed by Skip Walter, John Churin and Marty Skinner from Digital Equipment Corporation who began work in 1977. Sheila Chance was hired as the software engineering manager in 1981. The first version of the software, called CP/OSS, the Charlotte Package of Office System Services, named after the location of the developers, was released in May 1982. In 1983, the product was renamed ALL-IN-1 and the Charlotte group continued to develop versions 1.1 through 1.3.
Digital then made the decision to move most of the development activity to its central engineering facility in Reading, United Kingdom, where a group there took responsibility for the product from version 2.0 (released in field test in 1984 and to customers in 1985) onward. The Charlotte group continued to work on the Time Management subsystem until version 2.3 and other contributions were made from groups based in Sophia Antipolis, France (System for Customization Management and the integration with VAX Notes), Reading (Message Router and MAILbus), and Nashua, New Hampshire (FMS). ALL-IN-1 V3.0 introduced shared file cabinets and the File Cabinet Server (FCS) to lay the foundation for an eventual integration with TeamLinks, Digital's PC office client. Previous integrations with PCs included PC ALL-IN-1, a DOS-based product introduced in 1989 that never proved popular with customers.
Bob Wyman was the first product manager. He oversaw the growth of the product culminating in over $2 billion per year in revenue and market leadership in the proprietary office automation sector.
Other consultants from Digital Equipment Corporation involved include Frank Nicodem, Donald Vickers and Tony Redmond.
See also
History of email
References
Bibliography
External links
Official HP Product Page
OpenVMS software
Productivity software
Email systems | ALL-IN-1 | Technology | 485 |